I generate unit tests, comments, and more. A critical part is well-defined system instructions; formatting system instructions and prompts with XML tags provides significant benefits. Think of LLMs as a force multiplier for most use cases: upload the codebase and generate comments, unit tests, functional tests, MC/DC tests, etc. What follows is a matter of reviewing and editing the outputs. Never trust them unquestioningly; instead, slowly train and refine models for any given project. In my experience, this approach tends to reap rewards.
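Roughly what I mean by XML formatting, as a minimal sketch (the tag names and the stand-in source string are arbitrary, not a required schema):

    // Minimal sketch of XML-structured system instructions for test generation.
    // The tag names are an arbitrary convention, not a required schema.
    const sourceFile = "int clamp(int x, int lo, int hi);"; // stand-in for the real code under test

    const systemPrompt = [
      "<role>You are a senior engineer writing unit tests.</role>",
      "<rules>",
      "  <rule>Cover every branch, including error paths (MC/DC where feasible).</rule>",
      "  <rule>Never modify the code under test.</rule>",
      "  <rule>Return only compilable test code, no prose.</rule>",
      "</rules>",
    ].join("\n");

    const userPrompt = `<code_under_test>\n${sourceFile}\n</code_under_test>`;
    console.log(systemPrompt + "\n\n" + userPrompt);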
Lots of "information extraction" tools - basically taking in a bunch of documents and other natural language material and pulling out a few key pieces of structured data. This isn't 100% reliable but is useful as a first pass (where a human can investigate further) or a backstop (against a human missing something important and obvious but buried in a long file or huge volume of files).
I also put together a half-baked web extension to try to steer the YouTube algorithm. It basically scrolls through Shorts and tries to watch+like high-quality content and skip+dislike junk (and worse) to scrub it out of the feed as much as possible. It only looks at the transcript and a couple of thumbnails, so it's not super accurate, but in the short term it's been working pretty well. I figure that Google can probably tell it's not a human, though, and might disregard its inputs (or ban me) in the long term. Time will tell.
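The core loop is roughly this, as a content-script sketch; the DOM selectors and the classifyShort scorer are placeholders, since YouTube's real markup differs and the actual classifier calls a model:

    // Content-script sketch: score the current Short, like or dislike it, then move on.
    // Selector strings are illustrative placeholders, not YouTube's real markup.
    async function classifyShort(transcript: string, thumbnailUrls: string[]): Promise<number> {
      // Placeholder: the real extension would send the transcript and a couple of
      // thumbnails to a model and return a 0..1 quality score.
      return transcript.length > 200 ? 0.8 : 0.2;
    }

    async function handleCurrentShort(): Promise<void> {
      const transcript = document.querySelector(".transcript-placeholder")?.textContent ?? "";
      const thumbs = Array.from(document.querySelectorAll<HTMLImageElement>("img.thumb-placeholder"))
        .slice(0, 2)
        .map(img => img.src);

      const score = await classifyShort(transcript, thumbs);
      if (score > 0.5) {
        // "Watch" by lingering on the video for a bit, then like it.
        await new Promise(resolve => setTimeout(resolve, 15_000));
        document.querySelector<HTMLButtonElement>("button.like-placeholder")?.click();
      } else {
        document.querySelector<HTMLButtonElement>("button.dislike-placeholder")?.click();
      }
      // Scroll to the next Short.
      window.scrollBy({ top: window.innerHeight, behavior: "smooth" });
    }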
I have. The advent of "reasoning" LLMs means you can trust them with judgments.
ChatGPT basically invented it in September of 2024. We are only coming up on the concept's first birthday. Open-source options have only been out for six months.
I highly recommend using these for your own purposes.
I have used coding agents with great success. So in some way I'm building useful things using AI, and the people who created those agents absolutely created something extremely useful.
Use it every day: https://github.com/jharohit/team-timezone-wall
Also built an AI NDA tracker for all our NDAs in the company, which is awesome! We will open-source it soon.
I miss the pre-LLM days on HN, when hype was spread around various topics instead of it all just being about AI.