NLP models are good at generating text, but they are imperfect at reliably translating intent into executable, deterministic database queries without extra "reasoning" and error-check logic. In today's rapidly moving AI landscape, success isn't just about large models; it's about orchestrating specialized model capabilities. You add your AI model as a component of your app: these modular components, when securely bound to data, deliver outcomes that are correct, auditable, and context-aware.


In plain English, you train or instruct your LLM to give it its special capabilities (for example, what data to fetch), and you give your LLM access to the right data (for example, a database or an API). But how exactly do you instruct your model?

Here is what I pitch.

Hello Everyone, 

Please raise your hands: who here is building generative AI apps? Now, please raise your hands: who is experimenting, or "do-it-yourselfing," with vibe coding?

Okay, by now both groups have realized that the quality of the answers depends on the data you feed the AI and on how well your AI model can fetch exactly the data that is needed. And by now, many of you have realized you do not know how to do it.

You’re running into a classic “free-text → bad query → wrong product” problem. Fix it by forcing the model to produce a validated, catalog-aware query and by retrieving in stages (filter → retrieve → re-rank → verify). Here’s a tight, practical recipe you can drop in.
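The filter → retrieve → re-rank → verify stages can be sketched very simply. This is a minimal illustration with a toy in-memory catalog; the catalog contents, scoring rule, and stage boundaries are all assumptions for demonstration, not part of any specific product.

```python
# A minimal sketch of the staged pipeline: filter -> retrieve -> re-rank -> verify.
# The catalog, tags, and zones below are illustrative assumptions.

CATALOG = [
    {"name": "organic broccoli soup", "tags": ["organic", "chinese"], "zone": "midtown"},
    {"name": "kung pao chicken", "tags": ["chinese"], "zone": "midtown"},
    {"name": "caesar salad", "tags": ["salad"], "zone": "downtown"},
]

def filter_stage(items, zone):
    # Hard filter first: never consider results outside the requested location.
    return [i for i in items if i["zone"] == zone]

def retrieve_stage(items, required_tags):
    # Keep items matching at least one requested tag.
    return [i for i in items if set(required_tags) & set(i["tags"])]

def rerank_stage(items, required_tags):
    # Score by how many requested tags match; best match first.
    return sorted(items, key=lambda i: -len(set(required_tags) & set(i["tags"])))

def verify_stage(items, zone):
    # Final guard: re-check the hard constraint before returning anything.
    return [i for i in items if i["zone"] == zone]

def search(zone, tags):
    candidates = filter_stage(CATALOG, zone)
    candidates = retrieve_stage(candidates, tags)
    candidates = rerank_stage(candidates, tags)
    return verify_stage(candidates, zone)

print([r["name"] for r in search("midtown", ["chinese", "organic"])])
```

The point of the final verify stage is that even if an upstream stage (or a model) misbehaves, a result that violates the hard constraint never reaches the user.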

Generative AI apps are, in many cases, wrappers around the models. Of some sort.

Our tool helps all of those builders train their AI model to retrieve exactly what is needed. It's not RAG, not MCP, but a very simple first step and last step.

You know there are services, lots and lots of them, called text-to-image, text-to-video, image-to-video, and so on. This is text-to-structured-query.

I will give you a few examples:

For example, we built a food discovery service called broccobot.com. If you are in Manhattan, you can quickly find the food you want, at the location you need, via broccobot.com. Before we created our tool, the bot would find the stuff, but not precisely, even though there is a database of restaurants. The restaurants upload the dishes they are selling at the moment (Uber-like), and broccobot.com should use those dishes in the search results for people to find. So the bot needs to translate a user query into structured (Cypher) database queries, and with this training, the bot does just that. Under the hood there are two models, both trained with the tool: one for geographic places (you might say "near 42nd Street," or "below Grand Central," or "near Macy's," or "is there a good Chinese place here with organic ingredients"), and a second LM that finds the "good Chinese" in that location.
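The two-model split described above can be sketched like this. Both "models" are stubbed with simple lookup tables and a query template here, purely to show the hand-off from location resolution to a parameterized Cypher-style query; the phrases, labels, and schema are illustrative assumptions, not broccobot's actual internals.

```python
# Illustrative sketch: model 1 resolves a free-text location phrase to an area,
# model 2 turns the food request plus that area into a structured query.
# All names and the schema are assumptions for demonstration.

GEO_PHRASES = {
    "near 42nd street": "midtown",
    "below grand central": "murray hill",
    "near macy's": "herald square",
}

def resolve_location(phrase):
    # Stand-in for the geography-trained model.
    return GEO_PHRASES.get(phrase.lower(), "unknown")

def build_dish_query(cuisine, attribute, area):
    # Stand-in for the dish model: emit a parameterized Cypher-style query,
    # never interpolating user text directly into the query string.
    query = (
        "MATCH (r:Restaurant)-[:SERVES]->(d:Dish) "
        "WHERE r.area = $area AND d.cuisine = $cuisine "
        "AND $attribute IN d.attributes "
        "RETURN d.name, r.name"
    )
    params = {"area": area, "cuisine": cuisine, "attribute": attribute}
    return query, params

area = resolve_location("near Macy's")
query, params = build_dish_query("chinese", "organic", area)
print(params["area"])
```

Keeping user-derived values in the parameter map rather than in the query text is what makes the second model's output safe to execute.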

Another example we built for interior designers (they asked for it): an interior design service called roomAI or ___. Interior designers struggle to find the particular furniture items for the vision they have for their clients, spending days and days online. This service helps create a new design style, tweak it, and then find the very particular furniture items matching the description. After the training, the service does just that. Here there are three models under the hood: geographic, furniture items, and a few design-style-trained LoRAs.

Another, and the most important, case is of course health-related. If a person has questions, ailments, or symptoms, the AI model should be trained to translate them into graph database queries. Yes, there are tools that do this, for instance a LangChain tool for Cypher queries, but they are fragmented, they expect the AI model to already know the structure of the database, and they are not easy for the masses to use.

Which brings us back to this audience: raise your hands, how many of you are able to use LangChain or Neo4j? __ See, the rest of the audience needs something simple.

So, what is the grand business goal? We are developing a lightweight, easy-to-launch, Shopify-like solution for any e-commerce to sell via search, finally. The technology is all there, but someone needs to launch it first, and that's what we are going to do.

Why is search-based e-commerce needed? Because it can be done over voice, for the many people who, for various reasons, are not able to buy online.

Where are we? We are in the process of getting credits from AWS for processing. That is why I am here: joining the accelerator means automatic approval for AWS credits. Our investors will have a stake in all three solutions I mentioned.

The group I am working with is from India and from the open-source community.

To make it lightweight and easy to use, under the hood we use lots of things: overlapping graph-based deep learning, diffusers, model training approaches for transformers, entity retrieval, and so on. For diffusers we use the MCP approach a lot, and we use vibe coding, agent-

Professionally, I've been working as a software and technology architect for many years; in AI, probably from the beginning. My focus is solution architecture and a tool set for its rapid development.

___

Longer Version:

Today, we are increasingly developing generative AI apps. The majority of these apps are some kind of model wrappers. There are many wonderful tools to ship apps fast. But because the platform itself and the entire AI ecosystem are moving super fast, we do not have some of the tools needed for developing very accurate, production-grade, serious apps.

By "serious" I mean non-prototype, real-world, production-grade apps. Some tools are here but are still prototypes themselves, and some are not there yet. The more apps we ship, the more tools will be needed.

One such example is what I would like to pitch today. For anyone here who is not familiar, let's take examples: you know there are "text-to-text" models, there are "text-to-image" models, there are "speech-to-text" models, and so on. The lacking tool that we often need is called "text (NL)-to-structured-query" (text-to-SQL, or natural language to structured query).

Our tool allows anyone to create a data set for training your model for your particular situation. What it does is take a text query and transform it into a database query or queries.

In many modern apps these days, you have a natural language interface talking to some kind of GPT model behind it; this model takes your request and goes to fetch the data, which it brings back to you as an answer. In many situations this "fetching" is not performed well. This is because the model has to be precise, and precision, as we know, is not its "feature."

Let's break this down simply. Imagine you have an app where a user types a question like, "Show me the top 5 customers by sales last year." The app uses a Large Language Model (LLM, like GPT-4) to understand and respond in natural language.

But …

But your data is stored in a structured database (like SQL), or in many databases, including semi-structured ones. So how does the LLM actually "fetch" the data?

In the step-by-step process, the LLM first generates a database command.

The app then runs this query: it sends the SQL, SPARQL, or other command to the actual database system (for example, MySQL, PostgreSQL, a financial API, or a graph database). The database processes it and returns the real data as a result table. Finally, the LLM (in your app) takes the initial request and that result table and turns them back into a natural language answer.

You can think of it as the LLM being a translator between human questions and the database’s language. How does the LLM know what tables and columns exist in the databases (the schema part)? That’s a key piece of how it writes the right query.
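The whole loop above can be made concrete with a runnable toy, using `sqlite3` from the standard library. The "LLM" is replaced with a hard-coded template so the data flow (question → SQL → result table → answer) is visible end to end; the table, data, and the final formatting step are all assumptions for illustration.

```python
# Toy version of the loop: question -> generated SQL -> result table -> answer.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, amount REAL, year INTEGER)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("Acme", 500.0, 2024), ("Bolt", 900.0, 2024), ("Cog", 120.0, 2023)],
)

def fake_llm_to_sql(question):
    # Stand-in for the model. In a real app the LLM generates this, guided by
    # the schema ("sales(customer, amount, year)") included in its prompt.
    return ("SELECT customer, SUM(amount) AS total FROM sales "
            "WHERE year = 2024 GROUP BY customer ORDER BY total DESC LIMIT 5")

def answer(question):
    rows = conn.execute(fake_llm_to_sql(question)).fetchall()
    # A real app would hand this result table back to the LLM to verbalize;
    # here we format it directly.
    return ", ".join(f"{customer}: ${total:.2f}" for customer, total in rows)

print(answer("Show me the top 5 customers by sales last year"))
```

The schema hint inside the prompt is the answer to the question above: the LLM only "knows" your tables and columns because you tell it, which is exactly why precise, schema-aware training matters.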

What is the solution? You train your model to be precise at addressing the data sources, so it retrieves the required, accurate answer. Basically, the model takes a user query and creates a structured query to whatever data source it has, in whatever query language is needed. Our tool allows you to train your model fast to do this well.

Databases use properties, values, nodes, and relationships to represent complex data, and their query languages enable efficient modeling and querying. However, using query languages requires expert-level specialized knowledge. Our tool aims to bridge this gap by helping LLMs translate natural language queries into structured query languages.
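One plausible shape for the training data set such a tool would produce is JSONL pairs of natural-language questions and target structured queries, with a schema hint attached. The field names and example queries below are assumptions about what a fine-tuning set might look like, not the tool's actual format.

```python
# Hypothetical training-pair format: question + schema hint + target query.
import json

pairs = [
    {
        "question": "good Chinese with organic ingredients near Macy's",
        "schema": "(:Restaurant {area})-[:SERVES]->(:Dish {cuisine, attributes})",
        "query": "MATCH (r:Restaurant)-[:SERVES]->(d:Dish) "
                 "WHERE r.area = $area AND d.cuisine = 'chinese' "
                 "AND 'organic' IN d.attributes RETURN d.name, r.name",
    },
    {
        "question": "top 5 customers by sales last year",
        "schema": "sales(customer, amount, year)",
        "query": "SELECT customer, SUM(amount) AS total FROM sales "
                 "WHERE year = 2024 GROUP BY customer "
                 "ORDER BY total DESC LIMIT 5",
    },
]

# Write one JSON object per line (JSONL), the usual fine-tuning format.
with open("train.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")

print(sum(1 for _ in open("train.jsonl")))
```

Including the schema hint alongside each pair is what teaches the model to map "real world" terms to actual table, node, and column names instead of guessing.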

We are raising $150k to promote this tool to developers and to the many people who are building apps with today's vibe-coding or low-code tools and facing this challenge.

Many of the developers we have been talking to tend to skip this part: they either do in-house model fine-tuning or ship to the client as-is, taking no responsibility for what might happen further. In-house model training does not work, because you have to re-train and re-fine-tune every time a new model comes out. A good solution is model-agnostic; you cannot fine-tune all of them.

And, of course, it was our own pain at first. Once we created this tool, we used it in a few apps. I can show you a couple of examples.

One is a food discovery bot called broccobot.com. You can take a look. It works wonderfully, but it might find you a food option not in the location where you want it to be. Again, "precision is not a feature" of LLMs, to rephrase the popular meme. However, once we trained the models we use under the hood (Gemini, Llama, and ChatGPT) with our tool, they all started performing well. This is all because ________. So the new version, called broccosuperbot.com, finds precisely what you need while still being able to chat with you. Broccobot is for the restaurants; they pay for promotion.

BroccoBot is a centralized AI-native platform for restaurant brands, integrating B2B sell-in tools, localization, and automated discovery (25+ customer brands).

Another example is an interior decoration app called RoomAI, where furniture stores pay to promote their products and interior designers pay for finding the precise items they envision. A user uploads a room picture, and the AI creates the design style. Once you like the style, the app finds the furniture and other items from the catalogs. The service is mostly for interior designers; by their own account, they spend days and days trying to find the proper item for a client. That is what the app does, perfectly and instantly.

A centralized AI-native marketing and sales platform for furniture and retail brands, integrating virtual interior try-ons, B2B sell-in tools, localization, and automated brand marketing (15+ customer brands, including __, __, and more). Watching interior designers closely, we witness the industry's technological limitations first hand, from costly photoshoots to siloed marketing systems.

One more app is a longevity app called RERUN. It has always been very difficult to get a functional biochemical profile picture of your body. What this app does is keep your biochemical profile, update it, and, depending on your questions, symptoms, lab tests, or external population statistics, give you a quick hint on what might be developing. This is one of the most difficult use cases out there, because when you talk with GPT, it has to go and fetch and compare factual data from a wide variety of very specialized databases, hundreds of them, all at once. So the query that GPT has to create is a structured query to a graph database. Why it matters: fixing the industry's fractured content workflows and reducing time spent finding personalized, tailored info.
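To make the health case concrete, here is a hedged sketch of turning a symptom question into a parameterized graph query. The `Symptom`/`Condition` labels, the `INDICATES` relationship, and the overlap threshold are illustrative assumptions, not RERUN's real medical schema.

```python
# Sketch: symptoms -> parameterized Cypher-style query against a graph database.
# Labels, relationships, and the threshold are assumptions for demonstration.

def symptoms_to_cypher(symptoms, min_overlap=2):
    # Find conditions that share at least `min_overlap` of the reported symptoms.
    query = (
        "MATCH (s:Symptom)-[:INDICATES]->(c:Condition) "
        "WHERE s.name IN $symptoms "
        "WITH c, count(s) AS overlap "
        "WHERE overlap >= $min_overlap "
        "RETURN c.name, overlap ORDER BY overlap DESC"
    )
    params = {
        "symptoms": [s.lower() for s in symptoms],  # normalize user input
        "min_overlap": min_overlap,
    }
    return query, params

query, params = symptoms_to_cypher(["Fatigue", "Headache", "Nausea"])
print(params["symptoms"])
```

Note that the model's job here is only to emit the query and parameters; the app executes them against the graph database and hands the result table back for verbalization, exactly as in the SQL case.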

More on this topic:

Turning a user’s natural language request into a structured database query within a generative AI app is a complex process with several critical challenges behind the scenes. This remains a weak spot for many gen AI wrapper apps because reliably connecting flexible human intent to precise, structured data retrieval is technically difficult, requiring advanced understanding and accurate translation at every step.

From such things as entity and relations extraction to …

Key Technical Challenges

  • Ambiguity and Context Interpretation
    Natural language can be vague, context-dependent, and open to multiple interpretations, making it hard for models to figure out exactly what the user wants and map that intent to the right structured query format for databases. Even similar requests can require different database calls depending on context.

  • Schema Mapping Difficulty
    Models must understand not only the user prompt but also the shape and meaning of the database schema—what tables, columns, and relationships exist, what they mean, and how data is connected. Schema names don’t always match "real world" terms, and edge cases or custom setups require specialized logic.

  • Complex Query Construction
    Translating human requests into accurate SQL or API queries can require combining filters, aggregations, joins, and nested logic—often more sophisticated than simple keyword mapping or slot filling. Small misinterpretations can lead to retrieving too much, too little, or the wrong data.

  • Security and Robustness
    Generating queries on-the-fly from user input exposes the system to risks like SQL injection, data leakage, or privacy issues unless carefully sandboxed and validated.

  • Error Handling and Feedback
    If the structured query fails or returns no results, the app has to interpret why and inform the user meaningfully, adapting or retrying intelligently. Most wrappers lack nuanced error recovery.
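The security and error-handling points above suggest a validation gate between the model and the database. Below is a minimal, intentionally strict sketch: allow only single SELECT statements and reject anything that touches write or schema operations. A keyword blocklist like this is illustrative, not an exhaustive defense; real systems should combine it with parameterized queries and a read-only database role.

```python
# Minimal validation gate for model-generated SQL (illustrative, not exhaustive):
# accept only a single SELECT statement with no write/DDL keywords.
import re

FORBIDDEN = re.compile(
    r"\b(insert|update|delete|drop|alter|create|grant|truncate)\b"  # write/DDL
    r"|;.*\S",  # anything after a semicolon (stacked statements)
    re.IGNORECASE | re.DOTALL,
)

def is_safe_select(sql):
    stripped = sql.strip().rstrip(";")
    if not stripped.lower().startswith("select"):
        return False  # only read queries pass
    return not FORBIDDEN.search(stripped)

print(is_safe_select("SELECT * FROM sales WHERE year = 2024"))  # safe
print(is_safe_select("SELECT 1; DROP TABLE sales"))             # stacked, rejected
```

On rejection, the app can feed the refusal reason back to the model and ask it to regenerate, which is the "adapting or retrying intelligently" loop described above.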

Why It’s a Weak Spot for Gen AI Apps

Gen AI wrapper apps are usually designed as flexible interfaces for multiple data sources and use cases, meaning they must generalize query translation rather than hard-code it for one, increasing their vulnerability to ambiguity and edge cases.

Dynamic schema or data structure changes break brittle mappings, causing failures or inaccurate results unless the system continuously adapts.
