TechCrunch Sessions: AI side event deadline lands tonight. That matters for developer teams
If you want to host a side event during TechCrunch Sessions: AI Week, the deadline is tonight at 11:59 p.m. PT. There’s no application fee, and events run during the conference week of June 1 to June 7 in Berkeley.
On paper, this is standard conference promotion. In practice, side events are one of the few conference formats that can still be useful for technical teams. You get a room, a schedule, and a better filter on who bothers to show up.
That matters if you’re selling infrastructure, developer tooling, applied AI products, or anything else where a buyer needs technical confidence before a sales call goes anywhere.
Why technical teams may care
TechCrunch is marketing these side events to founders, developer advocates, and data science leads, with the usual promise of exposure to the 1,000-plus investors, builders, and industry people expected in Berkeley. Fine. The practical appeal is simpler.
You control the format.
If you’re showing an LLM evaluation stack, a retrieval pipeline, a browser-based annotation workflow, or an SDK for agent orchestration, the main stage is usually a bad fit. It’s too broad. A side event lets you narrow the room and tighten the subject.
That changes the conversation.
A workshop on streaming inference will attract people dealing with latency, cost, and observability. A session on fine-tuning open models pulls a different crowd than a panel on AI safety or product strategy. Same conference orbit, very different audience quality.
That’s why platform companies keep doing this. OpenAI, Anthropic, Hugging Face, LangChain, Vercel, and plenty of infra startups have all learned the same thing over the past two years. Developers respond to concrete work, not slogans and a standing table.
Use it for product feedback
The source material leans on thought leadership. That’s conference copy. The stronger use case is product feedback in a setting where people can actually pay attention.
A well-run side event can do three useful things fast:
- show the product without rushing through it
- watch technical users hit the rough edges
- see whether your pitch matches the problem they actually have
That’s worth more than a pile of badge scans.
For early-stage teams, this can double as lightweight user research. For bigger companies, it’s a good way to test a new API, model endpoint, agent framework, or developer SDK before a wider rollout. A side event with 30 solid engineers can be more valuable than a sponsored keynote nobody remembers by Wednesday.
There’s also a downside. If your product needs too much setup, too much context, or too much hand-holding, a live event will expose that quickly. Sometimes brutally.
Useful information either way.
Keep the web stack boring
The source material suggests a quick microsite built with Next.js or Gatsby, plus RSVPs through Eventbrite or Google Forms. That’s sensible. This should not become an internal platform project.
For an event page, you want:
- a fast static page
- clear agenda, time, location, and capacity
- a registration form that works on mobile
- a dead-simple confirmation flow
- basic analytics, without surveillance nonsense
A static-generated page in Next.js is enough. You do not need custom auth, a fancy backend, or personalization logic. If engineers are spending real time building an event stack from scratch, the priorities are already wrong.
A minimal setup can look like this:
```jsx
// pages/index.js
import React from 'react';

export default function EventPage() {
  return (
    <main>
      <h1>Deep Dive: Scaling Transformer Models in Production</h1>
      <p>June 6 · 10:00 AM · Zellerbach Hall Annex</p>
      <a
        href="https://www.eventbrite.com/e/scale-transformers-production-tickets"
        target="_blank"
        rel="noreferrer"
      >
        Register on Eventbrite
      </a>
    </main>
  );
}
```
That example is plain, which is fine. The job here is reliability.
If you collect extra RSVP data such as current ML workload, preferred stack, or company stage, keep it tight. Short forms convert better and create less privacy risk. Don’t ask for five fields when two will do. Don’t collect data you won’t use. And if you’re running follow-up campaigns, treat attendee info like production user data. Functionally, that’s what it is.
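If you do keep an extra field or two, a few lines of validation keep the form honest. A minimal sketch, assuming a two-field payload; the field names (`email`, `mlWorkload`) and the 200-character cap are illustrative, not from the source:

```javascript
// Minimal RSVP payload validation: two fields, nothing speculative.
function validateRsvp(payload) {
  const errors = [];
  const email = (payload.email || '').trim();
  // Loose email check; the confirmation email is the real validation anyway.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    errors.push('A valid email is required.');
  }
  const workload = (payload.mlWorkload || '').trim();
  // Optional free-text field, capped so the form stays short and low-risk.
  if (workload.length > 200) {
    errors.push('Keep the workload description under 200 characters.');
  }
  return { ok: errors.length === 0, errors };
}
```

Two fields, one function, no schema library. If validation grows past this, the form is probably already too long.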
Live demos fail in predictable ways
AI events keep hitting the same wall: live demos built on the assumption of stable Wi-Fi, stable cloud GPUs, stable APIs, and calm presenters.
That assumption rarely holds.
If your event depends on notebooks, inference calls, agent workflows, or browser dashboards, plan for something to break. The source material is right to recommend local Docker images, pre-recorded fallbacks, and seeded demo data. A few other rules help:
- Keep the demo path short.
- Cache whatever you can.
- Don’t rely on one hosted endpoint.
- Test on the venue network if possible.
- Have an offline version of the core workflow.
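The offline-fallback rule can be a thin wrapper around the demo's inference call, assuming Node 18+ or a modern browser for `fetch` and `AbortController`. The endpoint URL, payload shape, and seeded response below are placeholders, not a real API:

```javascript
// Demo call with a seeded local fallback, so one flaky hosted endpoint
// can't sink the session.
const SEEDED_FALLBACK = {
  answer: 'Cached demo response: retrieval over the seeded sample corpus.',
  source: 'local-cache',
};

async function runDemoQuery(prompt, { timeoutMs = 4000 } = {}) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch('https://demo.example.com/api/infer', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt }),
      signal: controller.signal,
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return await res.json();
  } catch (err) {
    // Venue Wi-Fi died, the endpoint timed out, or the API is down:
    // show seeded data instead of a spinner.
    return SEEDED_FALLBACK;
  } finally {
    clearTimeout(timer);
  }
}
```

The point is the shape, not the details: a hard timeout, one catch-all path, and a fallback that still demonstrates the core workflow.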
If the product only looks good under perfect conditions, the demo will tell on you.
Teams showing model-heavy applications should pay attention to bandwidth and latency. A room full of attendees hitting the same demo app can expose throughput problems you never saw in-house. Shared venue Wi-Fi, external API calls, and browser rendering are enough to turn a smooth product into a stuttering one.
That’s more than a demo issue. If the stack falls apart under a small burst of real use, it says something about production readiness.
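A crude burst test from a laptop before the event will surface the worst of it. A sketch, assuming Node 18+ for global `fetch`; the URL and concurrency level are placeholders to tune against your own demo app:

```javascript
// Fire N concurrent requests at the demo endpoint and report rough
// p50/p95 latency plus failure count. Not a benchmark, just a smoke test.
async function burstTest(url, concurrency = 30) {
  const timings = await Promise.all(
    Array.from({ length: concurrency }, async () => {
      const start = Date.now();
      try {
        await fetch(url);
      } catch {
        // Count failures as infinite latency so they dominate the p95.
        return Infinity;
      }
      return Date.now() - start;
    })
  );
  const sorted = [...timings].sort((a, b) => a - b);
  const pct = (p) =>
    sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
  return {
    p50: pct(0.5),
    p95: pct(0.95),
    failures: timings.filter((t) => t === Infinity).length,
  };
}
```

If p95 or the failure count jumps at 30 concurrent requests, a room of attendees will find that out for you, live.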
Hybrid can work, but it gets messy fast
The source material mentions hybrid events, with an in-person session plus livestreaming. That can work, especially for distributed developer communities. It also goes bad very easily.
You’re running two events.
Remote attendees need clear audio, readable screens, a real moderator, and some way to participate. If the plan is a shaky camera at the back of the room, skip it. Record the session properly and publish it later.
For smaller teams, the safer options are usually:
- run the event in person and record it well
- run it remotely and skip venue complexity
- keep the hybrid layer minimal, with a simple stream and moderated Q&A
Anything beyond that needs staff and rehearsal. Conference weeks are chaotic enough already.
Narrow topics work better
A common mistake is trying to appeal to everyone in the AI crowd at once. That’s how you get vague sessions about “the future of intelligent applications” and a room full of polite boredom.
A tighter topic usually performs better. In Berkeley, these would likely draw the right audience:
- production patterns for RAG evaluation
- browser architecture for AI-powered dashboards
- cost controls for multi-model inference
- observability for agentic workflows
- post-training workflows for open-weight LLMs
- secure data handling for enterprise copilots
Those are concrete topics with real implementation pain. Senior engineers show up for that.
Format matters too. Panels are easy to schedule and often low-value unless the speakers are unusually candid. Workshops, architecture reviews, office hours, and live builds usually produce better technical discussion. Small-group roundtables can work too, if they’re focused and someone competent is running them.
Happy hours are fine for recruiting or partner intros. They’re weak for product depth.
Worth it for some teams, not all
There is upside here. The economics are still uneven.
A side event takes planning, venue coordination, staff time, and travel budget. If your team is tiny, the message is still fuzzy, or nobody on staff can run a room, this can become an expensive distraction.
Conference adjacency also has limits. People will already be in town, so you’ll get some ambient attention. That doesn’t mean the right people will walk in. “Investors, builders, and thought leaders” is a broad category. Senior infra engineers don’t appear just because the listing says AI.
Still, if you’re already going to TechCrunch Sessions: AI and you have something specific to show, a side event is often a better bet than passive sponsorship. You can run an actual working session instead of hoping the right people wander past your booth.
Keep the topic tight. Keep the stack simple. Rehearse the demo like it’s going to fail.
If you want a slot, the deadline is tonight.