In the land of LLMs, can we do better mock data generation?

"Lorem ipsum" is slightly jumbled Latin, the remnants of a passage from Cicero's _de Finibus_ 1.10.32, which begins "Neque porro quisquam est qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit..." ("There is no one who loves pain itself, who seeks after it and wants to have it, simply because it is pain.").

We could have tried pushing beyond the zero-shot learning methodology and fine-tuning our prompts further, but it did not take long for us to realize we were spinning our wheels with this approach and needed a more deterministic foundation. As the French proverb goes, "Il ne faut pas mettre la charrue avant les bœufs" (loosely translated: "Don't put the cart before the horse"). With the lessons we have learned from these iterations, we look forward to tackling more complex challenges, whether that is further optimizing for unique constraints, supporting composite types and multiple schemas, or integrating more cost-aware LLM strategies.
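To make the "deterministic foundation" idea concrete, here is a minimal sketch of a schema-driven, seeded mock-data generator. The schema format and helper names are illustrative assumptions, not the authors' actual implementation: the point is only that a fixed seed makes every run reproducible, unlike free-form LLM output.

```python
import random
import string

def make_generator(schema, seed=42):
    """Return a function that yields mock rows matching `schema`.

    `schema` is a hypothetical {column_name: type_name} mapping; the same
    seed always produces the same rows, which is what makes this approach
    deterministic compared to sampling an LLM.
    """
    rng = random.Random(seed)  # isolated RNG so results are reproducible

    def gen_value(col_type):
        if col_type == "int":
            return rng.randint(0, 1_000_000)
        if col_type == "text":
            return "".join(rng.choices(string.ascii_lowercase, k=8))
        if col_type == "bool":
            return rng.random() < 0.5
        raise ValueError(f"unsupported column type: {col_type}")

    def gen_rows(n):
        # One dict per row, columns in schema order.
        return [{name: gen_value(t) for name, t in schema.items()}
                for _ in range(n)]

    return gen_rows

# Example: three rows for a small illustrative schema.
schema = {"id": "int", "username": "text", "active": "bool"}
rows = make_generator(schema)(3)
```

An LLM can still be useful on top of such a foundation, e.g. for proposing realistic column names or value distributions, while the row generation itself stays repeatable.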
