How a student’s company is working to hold AI accountable
It started with a robot that could not stay on track. In fourth grade, when most children are figuring out times tables, Noah Moscovici was coding a line-following robot with his dad — a tiny Arduino bot that kept veering off course. But Moscovici kept going, not because he had to, but because he wouldn’t let himself walk away from a problem he knew had an answer.
He didn’t know it then, but that line-following robot was only the first of the far more complex machines he’d go on to build.
Growing up in the Bay Area, Moscovici was always fascinated with how machines worked. That curiosity pushed him to seek work, even in unexpected places. In middle school, he started freelancing at a local escape room in San Francisco. His first major puzzle involved two bronze statues: when players held hands and each touched a statue, their bodies completed a circuit through skin-to-skin electrical conductivity, triggering the door to open.
Throughout high school, his passion grew alongside his coding expertise. Whether freelancing or building for his own fun, he was always creating. By graduation, Moscovici wasn’t sure college was the right fit, and in June 2022 he took a gap year. He landed a job as a software engineer at Uplift, where he developed and maintained the technology behind the company’s “Buy Now, Pay Later” services.
Around the same time, the release of new AI chatbot technology was generating buzz. After crossing paths with a local company, IRL415 — which develops tools to enhance human connection and trust — Moscovici became more aware of the risks these platforms pose.
The problem? These new AI systems weren’t retaining information well. One high-profile example came from New York, where a chatbot advised small businesses to break the law. By June 2023, Moscovici’s concern had turned into experimentation.
“I spent some time just experimenting and testing different approaches to solving this problem, but there is no great way to test AI chatbots without loads of manual work,” Moscovici said.
Despite having a stable job, he realized jumping into the corporate world meant missing out on something more significant to him: college life. After conversations with his family, he enrolled at Cal Poly in the fall of 2023, drawn in by the school’s strong computer science program and proximity to home.
Once on campus, he found himself with long stretches of free time — and his mind kept circling back to the AI questions he had explored over the summer. The more he studied, the more intrigued he became by the blind trust society placed in chatbots and, even worse, what might happen if these systems were wrong.
Then came his ‘aha’ moment: “What better way to test the accuracy of AI than to use AI?”
That realization sparked the idea that would soon make him CEO of his own company, Bottest.AI — a chatbot testing platform that checks whether an AI service is functional, reliable and accurate.
The tool works as a web-based extension that layers onto existing chatbots like ChatGPT or DeepSeek, helping ensure that the information provided is factual and consistent.
“The cool thing about Bottest.AI is that you don’t need to know how to code to use it,” Moscovici said.
Ellen Smith, a journalist for Trend Hunter, described the platform as a “no-code platform designed to automate chatbot testing, saving businesses significant time and resources.”
As more businesses turn to AI chatbots, the demand for accurate, real-time fact-checking has also grown.
“In-person fact-checkers have the potential to miss something,” Chief Revenue Officer Brennan Pappakostas said. With Bottest.AI, Pappakostas says, “you can have this program running on your computer even if you aren’t there.”
After nearly a year of development, the company officially released the platform in July 2024. Moscovici and Pappakostas are now working to introduce the product to businesses that rely on chatbot technology for customer service, information sharing or internal operations.
What started as a weekend project to keep a robot on track has transformed into a career spent holding technology to the same standard, with the same persistence to fix how machines think.