

The Rebirth of the recipescanner - part 2

Tags: python, architecture, llm

Back from the Dead, driven by an AI Agent
Author: Dominik Lindner

Published: July 7, 2025

1 What happened in part 1

1.1 Meal‑planning paralysis

Turning half a shelf of cookbooks into a week’s meals shouldn’t take longer than cooking itself. Yet that is exactly what frequently happened to me when I tried to pull recipes from my books, copy ingredients, and build a shopping list by hand. Family life can be stressful, and who has time to read through all the books and make creative meal plans? Not me.

1.2 Recipescanner, previous versions

My first attempt at this was the Recipescanner, see part 1. In its different versions, it turned book pages into Paprika recipes; Paprika then aggregated the shopping list. This saved time on the mechanical tasks, but not on the creative ones. I still had to hunt for dishes that fit well together.

2 Attempt #2 “Just ask ChatGPT”

Lately I have noticed that I simply turn to ChatGPT. While the recipes are quite OK, they often lack that little extra. Good cookbooks and specialized food blogs often provide this extra information. In my experience, online recipe collections do not include all the little tricks, since they are often written by average cooks. The same applies to complete meal plans: OK, but somehow not that fascinating.

LLMs usually give average answers, and average cooks do not have recipes with that certain sparkle. Skillful prompt engineering or follow-up prompts could certainly surface this information from the vast amount of training data.

I think the main issue is one of uncertainty. You assume there are special tricks for a recipe you do not know, but you do not know in which area of cooking they lie: preparation, order of adding ingredients, temperature? Asking the right follow-up questions often requires very skillful intuition. Why, in my experience, can expert programmers get so much more out of LLMs than beginners? They have the intuition to ask the right questions.

3 Idea: Let’s just chat about our recipes

For our recipes, the solution could be easier. We rephrase the question: instead of asking for the generation of a new (unique) recipe, we ask the model to select from a collection of known recipes. This assumes our recipe collection consists only of outstanding books and notes, but that we do not remember where to look.

What we would then do is a simple vector-based similarity search that matches our request. Then we either return the recipes as they are, or we compose a new recipe based on the ones we found. In the second case, it is much more likely that the answer contains the little extra we are looking for.

The entire process of querying a database before generation is called Retrieval Augmented Generation (RAG).
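The retrieval step can be sketched in a few lines. This is a toy example: the bag-of-words "embedding" below stands in for a real embedding model (such as the ones GPT4All or pgvector would use), and the recipe texts are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real setup would call a
    # sentence-embedding model and store the vectors in pgvector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, recipes: dict[str, str], k: int = 2) -> list[str]:
    """Return the titles of the k recipes most similar to the query."""
    q = embed(query)
    ranked = sorted(recipes, key=lambda t: cosine(q, embed(recipes[t])), reverse=True)
    return ranked[:k]

recipes = {  # invented stand-ins for scanned recipe texts
    "Lemon risotto": "creamy rice lemon parmesan butter slow stirring",
    "Quick ramen": "noodles broth soy egg fast weeknight",
    "Apple pie": "apples cinnamon pastry butter baking dessert",
}
print(retrieve("fast noodle dinner with broth", recipes, k=1))  # ['Quick ramen']
```

The retrieved recipes are then pasted into the model's context before generation, which is all that "augmented" means here.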

This is something that can already be done with local chat apps. GPT4All, for example, can create embeddings for the recipes and use a generic chat model to talk about them.

3.1 Going beyond chatting

But wait, there is more we could do. When we create a meal plan and a shopping list, we create data. We could use that data.

What if we had an agent? We could ask it to generate plans based on our wishes. It could also check the weekly promotions or offer seasonal suggestions. Even better, it would know what you cooked last week. It could learn what you like and ask what you liked or disliked about last week’s meals.

It is like a personal chef. Only it does not cook. Maybe that could be an extension :-).

This article is not only about the history of the recipescanner. It is the kickoff of an endeavour to create an AI agent that chats with us about recipes and meal plans.

4 Getting to work

4.1 Product engineering meets AI Agent

The general direction is clear: software that does all the meal planning.

With all the hype about AI agents, we are going to build one.

One issue remains: what kind of tool should the agent actually work with? Integrating it with Paprika, my current app, is too complex. I could certainly search for a basic recipe app and try to integrate with it.

One key aspect of my workflow is digitization. Few to no apps support this the way I want. Therefore, I built a meal planner from scratch with Python and React.

There is some upfront work to build the meal planner before getting to the actual LLM work.

The versions I currently aim for:

  1. Replacement of the current Paprika-based workflow: a web app for manual recipe handling via the frontend, a basic chat app with different model providers (local for testing, online for better performance), and scanning and recipe generation as in the previous version.

  2. A LangGraph-based chat app to talk about the recipes in the database and about meal plans: RAG chat.

  3. Triggered creation of new recipes and meal plans, as an iterative workflow.

  4. Taking nutritional and seasonal information into account: diet plans, and backfilling missing data in recipes.

4.2 How to start

Back to software engineering. In Do you know the hidden paths of your code, I talked about the importance of architecture.

Another question I follow in this project: LLMs are a big missing puzzle piece in creating better architecture, but how do we use them effectively? With all the hype around AI code generation, I was quite optimistic that I would advance quickly with the legacy part.

After a lot of discussion, I asked ChatGPT to create a good starting prompt for my idea:

You are a senior full-stack engineer and AI-agent architect.  
Task: walk me step-by-step through building a **chat-based weekly meal-planner** with these features:


1. **Tech stack**  
   - Python 3.12, FastAPI backend  
   - Postgres + pgvector for recipes & embeddings  
   - LangChain + LangGraph for an agent with a planning loop and persistent state  
   - OR-Tools (or PuLP) to solve the nutrient/effort constraint model  
   - React (Vite) chat UI

2. **Core requirements**  
   - take pictures and use google ocr to get json
   - Ingest/parse recipe JSON → add tags, nutrition, effort minutes, fill a database entry. currently i use paprika recipes format to store data
   - bulk mode for pictures
   - Vector search for recipe Q&A (RAG node)  
   - Planning node builds a 7-day plan that:  
     • hits user kcal/macro targets ±10 %  
     • caps hands-on cooking time per day  
     • avoids any recipe used in the previous 3 weeks  
   - Approval loop: if the user types “change”, the graph re-plans until accepted  
   - On acceptance, write `meal_plan` table and return a shopping list grouped by aisle  
   - Persist `recent_recipes` (rolling 21 recipes) and chat history in Redis
   - ui: similar to chatgpt. left side menu and session overview. ability to browse through recipes. should work on desktop and mobile.

3. **Deliverables to produce in this session**  
   - High-level architecture diagram  
   - Database schema SQL  
   - LangGraph code skeleton with nodes and edges  
   - Sample FastAPI route that streams assistant responses  
   - Minimal React chat component calling the API  
   - Docker-compose file for Postgres+backend

Give concise explanations; focus on runnable code and folder structure. Assume I know the basics—skip introductions. After each section, wait for my “next” before continuing.
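The planning constraints in the prompt can be made concrete in plain Python. The sketch below is a brute-force stand-in for the OR-Tools/PuLP model: the recipe names, calorie counts, and effort minutes are all invented, and a real solver would handle far larger pools and extra constraints such as macro targets.

```python
import itertools

def plan_week(recipes, recent, daily_kcal, effort_cap, days=3):
    """Brute-force stand-in for the constraint model in the prompt:
    choose `days` distinct recipes, none from the recent window, each
    within the effort cap, with total kcal within +/-10% of the target."""
    lo, hi = 0.9 * daily_kcal * days, 1.1 * daily_kcal * days
    pool = [n for n in recipes if n not in recent and recipes[n][1] <= effort_cap]
    for combo in itertools.combinations(pool, days):
        total = sum(recipes[n][0] for n in combo)
        if lo <= total <= hi:
            return list(combo)
    return None  # infeasible; a real solver would relax constraints or report why

recipes = {  # name: (kcal, hands-on minutes) -- made-up numbers
    "Lentil curry": (650, 30),
    "Sheet-pan salmon": (550, 20),
    "Pasta al limone": (700, 25),
    "Beef stew": (800, 90),
    "Veggie stir-fry": (500, 15),
}
plan = plan_week(recipes, recent={"Beef stew"}, daily_kcal=600, effort_cap=45, days=3)
print(plan)
```

Here the first feasible combination wins; an optimizing solver would instead minimize the deviation from the kcal target and spread the effort across the week.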

In point 3, we already see the issue: the model would dive directly into coding. With such an extensive project, that would lead to a big mess.

Instead, I spent some time on the architecture. You can find the details below.

The first version was offered by the language model. Without diving into the details: there seem to be too many blocks.

After some modifications I settled on this.

This leads me to the next big issue. Big-bang building equals big-bang integration. So we need incremental working versions.

I kicked it off by letting the JetBrains AI Assistant do the coding. It started with the impressive generation of 47 files and a fully functional mocked frontend.

Wow, at that speed, I would finish in a week.

Well,…

…

Spoiler for the next article: not so fast. Remember: “The only way to go fast is to go well.” And it turned out the LLM does not go well on bigger projects.


