Training AI agents to use computers, browsers, and software is one of the highest-potential opportunities for AI. To date, however, this capability is still unreliable. The emerging method for improving it is Reinforcement Learning with Verifiable Rewards (RLVR). However, researchers are currently bottlenecked by a lack of high-quality simulators and task/verifier pairs.
To solve this problem, we’re building Westworld, a fully-simulated internet made up of synthetic versions of the most common consumer and enterprise apps. Agents use Westworld to learn how to do economically valuable tasks.
For example, AI agents can practice planning vacations on a simulated flight booking site (https://flights.halluminate.ai/), or learn how to reorganize outdated information in your sales platform, or train to do financial modeling directly in a spreadsheet.
Here’s a demo showing our flight booking simulation: https://www.loom.com/share/74a3b28067e24c1b886054ba90a90aa5.
How it works: AI agents access our environment and are given a task + verifier. A task is an objective for the agent to achieve, for example "Book me a flight from SF to NYC on this date with x, y, z filters." A verifier is a programmatic way to determine whether the task was completed successfully. In this case, it might be a JSON check that the final flight data matches expectations. These signals can then be used to calculate a reward in RL.
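To make the task + verifier idea concrete, here's a rough sketch of what a verifier for the flight-booking example could look like. The field names, state format, and binary reward scheme are illustrative assumptions, not our actual schema:

    # Illustrative only: field names and reward values are hypothetical.
    EXPECTED = {
        "origin": "SFO",
        "destination": "JFK",
        "date": "2025-11-14",
        "max_stops": 0,
    }

    def verify_booking(final_state: dict) -> float:
        """Compare the environment's final state against the task spec
        and return a scalar reward usable in RL."""
        booking = final_state.get("booking")
        if booking is None:
            return 0.0  # agent never completed a booking
        checks = [
            booking.get("origin") == EXPECTED["origin"],
            booking.get("destination") == EXPECTED["destination"],
            booking.get("date") == EXPECTED["date"],
            booking.get("stops", 99) <= EXPECTED["max_stops"],
        ]
        # Binary reward: 1.0 only if every constraint is satisfied.
        return 1.0 if all(checks) else 0.0

In practice the reward could also be shaped (partial credit per constraint), but the core idea is the same: the simulator's final state is checked programmatically against the task spec.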
The more simulators we build, the more AI labs can improve on capabilities that computer use agents are currently weak at. One of our customers saw a ~20% improvement in date-picking performance when training on our flight booking simulator.
Two things make this hard:
(1) The simulations have to be realistic. You can't get away with a vibe-coded "80% solution" because even small divergences impact performance. Generating simulated data is even harder; for example, massaging flight data to look realistic took a lot of trial and error.
(2) The tasks you train agents on have to be well-chosen. They are only valuable if they reflect work that people actually want solved. We need a lot of feedback from domain experts to get this right.
That said, we find this work incredibly interesting and are excited to tackle these issues. A few things we are pumped to ship in the near term:
- Long-horizon tasks: stringing multiple simulators together for extended workflows.
- Procedural data generation: instead of synthetically generating all the data upfront, model data generation so that our simulators are populated procedurally as agents explore (think Minecraft); rough sketch below.
- Open source! We plan to release our environments to the public so developers/researchers can hack on them for their own experimentation.
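As a rough illustration of what we mean by procedural generation (the names and data structure here are hypothetical, not our implementation): derive page content deterministically from a seed the moment an agent requests it, so the world stays consistent without being stored upfront.

    import hashlib
    import random

    def flights_for_route(origin: str, dest: str, date: str, n: int = 10):
        """Generate flight results for a route on demand. The same
        (origin, dest, date) always yields the same flights, so the
        world stays consistent as the agent explores."""
        seed = int(hashlib.sha256(f"{origin}|{dest}|{date}".encode()).hexdigest(), 16)
        rng = random.Random(seed)
        flights = []
        for _ in range(n):
            depart_hour = rng.randint(5, 22)
            flights.append({
                "flight_no": f"HL{rng.randint(100, 999)}",
                "depart": f"{depart_hour:02d}:{rng.choice(['00', '15', '30', '45'])}",
                "stops": rng.choice([0, 0, 1, 1, 2]),
                "price_usd": round(rng.uniform(120, 850), 2),
            })
        return sorted(flights, key=lambda f: f["price_usd"])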
RL simulators are just one part of our business. The other part is human data creation (think Scale AI, but for computer use). We produce off-the-shelf pre-training/fine-tuning datasets, expert human evaluation/error analysis, and whatever other data our customers need. There are also a lot of exciting overlaps between the two - for example, using human experts to help create our simulators/tasks. Happy to go into more detail, but we thought simulators would make for the more interesting HackerNews post :)
Finally, about us: Wyatt and I met while studying CS at Cornell and have been living and working together for over 7 years. I previously led product/research at Capital One Labs, where I launched one of the first AI agents in banking. Wyatt was previously a Cornell Milstein Scholar and did large-scale data engineering for two early-stage startups in NYC. We left our jobs last year and faced these problems first-hand while building evals for customers who were browser/computer-use agent companies.
If anyone has any questions, feedback, or thoughts please let us know! Looking forward to your comments.
My own experience makes me lean toward thinking that the truth is somewhere in the middle in this situation, and that simulators like these will be valuable. I've been experimenting a lot with computer use on my website Bingeclock, passing in different prompts along the lines of "make a movie marathon based on X." The newest agents are consistently impressive, while also being consistently imperfect in surprising and interesting ways.
Whether or not all the labs are already running this kind of thing internally for themselves, you would know better than I. But it's an idea that seems very useful nonetheless. Congratulations on the launch!
re: labs doing this internally. They definitely are! However, the scale of sims buildout is going to be massive, probably many orders of magnitude above what we have today. We think it makes sense for one central player to do this because a really good simulator can be used by multiple people at once. It doesn’t make sense for every AI lab/company to build out their own environments if an industry standard catalog exists.
We share the public/consumer simulators, but we also build bespoke environments on a per-customer basis (think enterprise sites or even full VMs loaded with applications and data).
Environment-creation scalability is a big priority for us. We currently automate most of the process, but it still takes a fair bit of manual work to finish each environment and get the details right. There is some reusability across environments; for example, we can reuse the flight-results generation code in any travel/flight-booking sim. We also have some semi-automated approaches for creating tasks and verifiers, but there's still lots of work to be done here.
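One way to picture semi-automated task/verifier creation (purely illustrative, not our actual tooling): sample task parameters once, then render both the natural-language instruction the agent sees and the spec the verifier checks from those same parameters, so the two can't drift apart.

    import random

    TEMPLATE = "Book me a nonstop flight from {origin} to {dest} on {date}."

    def sample_task(rng: random.Random) -> dict:
        """Sample parameters once, then derive the agent-facing
        instruction and the verifier spec from the same values."""
        params = {
            "origin": rng.choice(["SFO", "LAX", "SEA"]),
            "dest": rng.choice(["JFK", "BOS", "ORD"]),
            "date": f"2025-12-{rng.randint(1, 28):02d}",
        }
        return {
            "instruction": TEMPLATE.format(**params),
            "verifier_spec": {**params, "max_stops": 0},
        }

    # Generate a batch of reproducible task + verifier pairs.
    tasks = [sample_task(random.Random(i)) for i in range(100)]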
Engineering: QA automation is huge; it closes the loop on "fully automated" software engineering if another computer-use system can click around and help identify bugs in software.
Deep Research: probably the biggest use case for computer use right now - finding information that isn't easily indexed or accessible via APIs.
General RPA: This is industry-specific, but a lot of everyday knowledge work involves tedious data transfer between platforms that no one wants to do. A great example is Epic in healthcare. SO much labor is employed just to read and write information in that desktop app because it isn't easily accessible. Imagine a computer-use system that can do automated data pulls at scale for legacy desktop apps. This is a huge use case, and something we're excited to try to improve with simulators of things like Epic, SAP, Salesforce, etc.
Consumer: Lots of general everyday tasks. I would recommend checking out https://yutori.com/ if you're interested in seeing how a computer-use agent can be helpful in your day-to-day. It's fun for daily news reports, restaurant reservation checking, etc.
If it gets a major travel detail wrong, purchases a business class ticket by accident, etc. and I need to adjust the booking by calling the airline, then I'm way less happy than if I had just bought the ticket myself. Not to mention what happens when Google Flights gets a UI refresh and knocks the accuracy rate of the agent down even 10%.
Digital criminals are gonna love it, though.
I’m personally much more interested in automating browser tasks that aren’t economically valuable because that mitigates the risk.
I think this will probably be a mixture of automated QA/engineering and scale.
Another interesting path is actually partnering directly with software providers to offer their platforms as simulators IF they see there is a competitive advantage to training agents to perform well on their UI.
We're really excited about this idea, but it would require a company to see real revenue potential in enabling agentic access vs. not. I'd say we're still in the "block them out" phase of the internet (ex. see Cloudflare's recent post about bot detection: https://blog.cloudflare.com/perplexity-is-using-stealth-unde...)
However, in talking with AI labs, their perspective on flight booking is a little different. "Solving" flight booking requires the AI agent to solve a LOT of hard problems: personalization, context, weighing multiple options, interacting with the UI, math, then wrapping that all up into a coherent response. The thought process is: if a computer-use agent can solve flight booking well, then we will have developed many other powerful primitives that will scale to other problems.
So as a standalone use case, I'm inclined to agree this might not be where the most agent traction is seen. However, as a research/capability goal, there are some generalizations that could apply to other very important use cases.
If you're rich, you can just look for the ticket at the time you like on your preferred airline and buy a first class ticket, whatever the price, for whenever you want to fly, even if it's tomorrow. For the rest, that's not practical. So the flight search has to begin a few months out, with the burden of doing multiple searches (in incognito mode) across various airlines and/or aggregators, in order to optimize various factors. This takes a non-trivial amount of time. Add in looking for hotels and rental cars, and for some it's fun, for others it's an annoying burdensome chore that stands in the way of being on vacation.
It's just an example use case though. Similar to how a "robot maid" that folds clothes isn't the be-all and end-all for robotics, if an AI is able to perform that task, it's going to have the capabilities necessary for performing a wide variety of other tasks.