Adam Hunter

AI In The Modern Tech Stack

12.5.24


ChatGPT was released almost exactly two years ago. One year later, it had become the fastest-growing consumer software application in history. Another year after that, it is integrated into every new Apple product’s operating system. AI in general has become super important, super quickly. A lot of people, even outside of the dev world, are afraid that AI will take their jobs, and in most cases, it probably will. It is already starting to feel like you need to learn how to ride this bullet train or get left behind. AI hasn’t just made Siri smarter or cleaned up the grammar in your emails; it has already transformed how most people work. AI has, of course, changed how people code and build software, but it has also become a part of the tech stack itself.

Three to four years ago, I was really excited about neural networks and machine learning. Somewhere in the depths of my GitHub, there is a repo for a Python chatbot trained on Wu-Tang lyrics. Needless to say, I have been heavily tuned into AI, OpenAI, and ChatGPT since ChatGPT's release. It’s funny how, at first, using AI to help write some functions felt almost taboo; now it is completely expected. I have thought about writing a blog post about coding with AI, prompting, AI image generation, or maybe exploring some of the cool tech behind these new AI-powered apps, like LangChain, but I want to start by exploring the AI within the stack and how it is affecting and influencing the rest of the stack.

Aside from the conversational LLMs, one of the first AI tools that I integrated into my tech stack was GitHub’s Copilot. It was pretty exciting when it came out - to have AI actually in your IDE instead of having to copy and paste snippets of code elsewhere. I read that a single dev created Copilot, which unfortunately might be the coolest thing about it. If I didn’t have a subscription through work, I would not have made it past the 30-day free trial. I really only found Copilot to be useful when writing very predictable and relatively simple code. I have heard that Copilot is a little better these days, but AI is already pretty far ahead of just completing functions for you.

What Copilot and the copy-and-paste method always lacked was context. Even with the ability to upload a file, you still lacked the context of your entire project. This is why I thought Cursor was going to be a serious game changer. Cursor is essentially a VS Code clone that integrates an LLM of your choice (OpenAI’s GPT models or Anthropic’s Claude) into your entire code base. Cursor is seriously cool. It has a conversational element, it has the prediction feature that Copilot has, it can review your code and fix your bugs and errors, and it can inject code directly into your project. Pretty wild stuff. Cursor operates like a subscription SaaS, with a flat monthly fee instead of using your own API keys, which is how most AI VS Code plugins previously worked. The flat fees are also pretty reasonable: there is a free tier, and the first premium tier costs $20 a month, which is a solid deal for access to premium LLMs from OpenAI and Anthropic when those subscriptions cost $20 a month each on their own.

Cursor might be the most comprehensive AI coding tool out right now, but OpenAI is already on track to make it obsolete. I don’t think it is even fully rolled out yet, but OpenAI’s new native app for Mac and Windows is pretty next level. You can now give ChatGPT access to VS Code, Xcode, and the Terminal/Command Line. So the new ChatGPT app doesn’t just have the context of your code but of your entire dev environment.

This next part might feel a little funny coming off of the previous blog post. That entire post was essentially about how freeing it felt to step away from frontend component libraries, and I already want to talk about another one. TL;DR: I wrote about how Tailwind, with a little help from Framer Motion, has allowed frontend development to take some big steps forward by, ironically, getting back to the foundations of traditional CSS. Having AI in your tech stack means it will inevitably influence the rest of the stack, which is how I started working with Shadcn UI.

As Tailwind became the new standard in CSS, Shadcn UI popped up in a wave of other component libraries built on Tailwind. We now have Daisy UI, Mantine, Flowbite, Chakra UI, Oxbow, Preline UI, Flyon UI, Ripple UI, Sira, Mamba UI, Next UI, Kutty, Sailboat UI, Xtend… Exhausted yet? I have checked out most of these. For a couple of them, it was my actual job to become an expert… and the others were because I am a nerd? I’m not going to lie, some of these libraries look really cool, and being built on Tailwind is kind of exciting because you should be able to take full control of them. However, a lot of my gripes with component libraries still hold true here. You are still using a third-party design system that will either have to be heavily customized or run the risk of looking recognizably stock. You obviously have to learn the library’s syntax and nuances. You also have to be careful about bloating your app and increasing the build size with unused CSS bundles and extra JavaScript overhead. Sure, some of these component libraries support tree shaking, meaning only the components you import and use are included in the final build, but you need to verify that for the specific one you are using.

I wasn’t using Shadcn UI at work or on personal projects, so when the hype really started to hit, I’ll admit that I had some real “Get off my lawn!” feelings about it, especially since it was built on yet another component library in the first place, Radix UI. So to use Shadcn UI, you supposedly need to learn two libraries (you don’t), you have to install each component via the CLI (spoiler: the shadcn CLI actually rules), and their website claims it isn’t actually a component library when it clearly is one. I was not on board at first.
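To give a sense of why the CLI ended up changing my mind, here is roughly what the workflow looks like. This is a sketch based on the current CLI (the package was previously published as shadcn-ui before being renamed shadcn, so check the docs for your version):

```shell
# One-time setup: writes a components.json config and wires up Tailwind
npx shadcn@latest init

# Copy individual components straight into your own source tree
npx shadcn@latest add button card dialog
```

The key design choice is that nothing gets hidden inside node_modules: each `add` drops plain React + Tailwind files into your project that you own and can edit freely, which is the basis for the claim that it isn't a traditional component library.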

What (or who) helped Shadcn UI cut through the noise of all the other Tailwind component libraries was Vercel. Vercel is a hosting platform and the team behind Next.js. Since hiring the dev behind Shadcn UI last year, Vercel has been heavily integrating it into their ecosystem. The most significant integration has been with v0, Vercel’s AI-powered text-to-UI generator. There have been a ton of apps touting AI text-to-UI generation popping up in the last year or so, especially in the Tailwind world. They usually feel like fun novelties and ultimately a waste of time if you know how to code, so I was not quick to adopt v0. But Vercel is hard to ignore when Next.js is one of your main tools. v0 is the first AI UI tool where it feels like you are cheating. You don’t need to be an experienced prompt engineer to have it spit out some pretty impressive code. It won’t completely knock it out of the park, but if you are down with Tailwind and Shadcn UI, you can take the v0 code as a really nice foundation to customize and build on, or just use it to brainstorm. I don’t want to get too deep into the weeds here with how to use Shadcn UI, and I won’t say something like “the hype is real,” but it is definitely worth exploring. Shadcn UI is extremely flexible, has been a huge time saver, and has become a welcome part of my stack because of the influence of an AI tool.

There are a ton of awesome LLMs out right now, each with its own strengths, but the clear leaders of the pack and stars of the show are OpenAI’s GPT models and Anthropic’s Claude models. There are a few models from each out right now, but for our context, the ones that matter are GPT-4o and Claude 3.5 Sonnet. In most cases, these two are pretty much toe to toe, but there are some subtle differences. GPT-4o is a little better at zoomed-in, low-level, logic-heavy problem solving, while Claude 3.5 Sonnet has better context retention across longer conversations and is a little better at high-level architectural brainstorming and decisions. Both are prone to errors and need to be kept in line, but there is a feeling that GPT-4o can go off the rails a little more easily and will sometimes come across as overly confident in the incorrect solutions it presents.
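Since both models usually end up in the stack through their APIs, it is worth seeing how similar (and subtly different) the two request shapes are. Here is a minimal sketch that only builds the documented JSON bodies for each provider's chat endpoint, without sending anything; the model IDs reflect late 2024 and may have changed, so treat them as assumptions rather than gospel:

```python
import json

def openai_payload(prompt: str) -> dict:
    """Build the JSON body for OpenAI's chat completions endpoint
    (POST https://api.openai.com/v1/chat/completions, with an
    Authorization: Bearer <key> header)."""
    return {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
    }

def anthropic_payload(prompt: str) -> dict:
    """Build the JSON body for Anthropic's messages endpoint
    (POST https://api.anthropic.com/v1/messages, with x-api-key and
    anthropic-version headers)."""
    return {
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 1024,  # required field on the Anthropic side
        "messages": [{"role": "user", "content": prompt}],
    }

# Both share the same chat-style messages list; the Anthropic body just
# adds a mandatory max_tokens and moves the system prompt to its own field.
print(json.dumps(openai_payload("Refactor this function"), indent=2))
print(json.dumps(anthropic_payload("Refactor this function"), indent=2))
```

The near-identical shapes are part of why tools like Cursor can let you swap one model for the other with a dropdown.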

AI is no longer an experimental layer on top of development. It’s already woven into the fabric of the modern tech stack and will only become more important. From text-to-UI tools like Vercel’s v0, to AI-first IDEs like Cursor, to powerhouse LLMs like GPT-4o and Claude 3.5 Sonnet, AI isn’t just assisting developers; it’s influencing and reshaping how we build. If frameworks like Next.js and Tailwind brought us back to the foundations of web development, AI tools are pushing us forward in a way that feels genuinely revolutionary. Coding with AI feels less like outsourcing your work and more like unlocking new levels of creativity and productivity. Getting on board with these tools feels less like AI replacing us and more like AI augmenting us.


January 2025 addendum:
Last month, just days after I published this blog post, GitHub Copilot announced a free tier and a revamped VS Code integration. It is so good that I felt obliged to come back and mention it. I don’t want to take my words back, because five weeks ago I was genuinely underwhelmed with Copilot, but now it looks like Copilot is about to make Cursor irrelevant. Copilot took the best parts of Cursor (chatting with GPT-4o or Claude 3.5 Sonnet right in your IDE, with full repo context and the ability to inject code) and the newest features of the native OpenAI apps (the IDE and terminal integrations) and offers them for free. On top of that, if you upgrade to a Pro account, you get unlimited completions, unlimited chat messages in the IDE, Copilot in your actual terminal via the CLI, pull request summaries, and more. The kicker: a Pro account is only $10 a month or $100 a year, less than half the price of Cursor for already so much more. Isn’t it wild how insanely fast this tech is moving?
