Martin Hernandez Navarro offers a practical perspective on Big Data, Data Science, and AI in this briefing for Lean Culture.
Martin Hernandez Navarro on Big Data, Data Science, and AI
A Practical Guide to Choosing the Simplest Stack That Creates Value
If you’ve ever sat in a meeting where someone says, “We need to do something with AI,” you’ve probably felt the next question coming: What exactly are we trying to accomplish and what’s the simplest way to get there?
In the video below, Martin Hernandez Navarro, founder of TIDI Systems, provides a practical, experience-based walk-through of how to think about big data, data science, and artificial intelligence in organizations. He covers what each one is for, when to use them, and (just as importantly) when not to.
Why you should watch the video: this is a grounded talk for builders and operators—not a hype reel. If you’re leading a product, engineering, analytics, or ops team and you’re trying to sort through AI noise, this video will help you:
- stop mixing up big data, data science, and AI
- focus on decisions, workflows, and measurable outcomes
- design for error cost and accountability from day one
- choose an architecture that's justified by scale, not by fashion
If you’re evaluating a new initiative, the frameworks here are the kind you can apply in your next planning meeting—immediately.
The core problem: “AI” is not a strategy
Martin Hernandez Navarro frames a common pattern: teams hear AI is hot and want to “implement AI,” but they often lack a clear goal, measurable business impact, or even the right data foundations. The point of the talk is to replace buzzwords with a simple question:
What decision or action are you trying to change, and how will you measure that it improved?
From there, Martin draws clean lines between three disciplines that are frequently conflated.
- Big data: architecture for scale. Big data is not a model; it's an engineering approach for distributed processing across many machines when datasets are too large to fit on a single one. It's about compute power, scalable memory, and reliability across a cluster. Big data helps you process more. It doesn't automatically help you decide better.
- Data science: insight to improve human decisions. Data science is about turning data into insight so a human can make a better decision. It's grounded in statistics, modeling, and analysis, and it starts with a specific business question. Data science informs. It doesn't automate.
- AI: automation through learning from examples. AI is about systems that act autonomously based on patterns learned from examples rather than explicit rules. It's used when decisions need to happen automatically, often at a speed or scale beyond human intervention. AI automates.
The difference between data science and AI
Martin boils it down to this:
- Data science produces insights so humans can make better decisions.
- AI learns from examples to make decisions automatically in new situations.
That distinction matters because it shapes everything that follows: tooling, workflow, accountability, risk management, and how you measure success.
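To make the distinction concrete, here is a toy sketch (the churn scenario, column names, and cutoff are invented for illustration, not taken from the talk): the data-science path produces a summary a human interprets, while the AI path learns from the same examples and acts on a new case without a human in the loop.

```python
# Toy contrast: "insight for a human" vs. "automated decision".
# The churn scenario and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: tenure, support tickets, did the customer churn?
history = pd.DataFrame({
    "tenure_months":   [1, 3, 24, 36, 2, 48, 5, 60],
    "support_tickets": [4, 5, 1, 0, 6, 1, 3, 0],
    "churned":         [1, 1, 0, 0, 1, 0, 1, 0],
})

# Data science: produce an insight a person can act on.
insight = history.groupby("churned")[["tenure_months", "support_tickets"]].mean()
print("Average profile by churn outcome:\n", insight)
# A human reads this and decides, say, to redesign onboarding.

# AI: learn from the same examples and decide automatically for a new case.
model = LogisticRegression().fit(
    history[["tenure_months", "support_tickets"]], history["churned"]
)
new_customer = pd.DataFrame({"tenure_months": [2], "support_tickets": [5]})
if model.predict(new_customer)[0] == 1:
    print("Automated action: trigger a retention offer, no human in the loop.")
```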
Two project examples that make the ideas real
Example #1: Retail SKU optimization—pilot first, then scale
Martin describes a retail project across 250 stores to identify low-rotation SKUs within product families and recommend removals. The solution included a dashboard built on purchase data broken down by product, store, and time, with an emphasis on seasonality patterns (summer vs. winter, holiday spikes, etc.).
The implementation was deliberately staged:
- Pilot (15 stores): Data fit on a single machine; simpler architecture; models built in Python with a SQL data source.
- Rollout (250 stores): Models stayed largely the same, but the data no longer fit in memory—so the architecture moved to a big data cluster with a pipeline that materialized data into container files suitable for large-scale analytics.
The key lesson: big data entered the picture because of scale, not because the value proposition changed.
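As a rough sketch of that staging (the table name, columns, and the 10% rotation cutoff below are assumptions for illustration, not details from the project), the pilot logic can live comfortably in Python against a SQL extract, and the rollout re-expresses the same aggregation for a cluster:

```python
# Illustrative pilot-stage sketch; schema and threshold are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@host/retail")  # hypothetical connection

# Pilot (15 stores): purchase data fits in memory on one machine.
purchases = pd.read_sql(
    "SELECT store_id, sku, product_family, sale_date, units FROM purchases",
    engine,
)

# Units per SKU per month, so seasonality (summer vs. winter, holiday spikes)
# stays visible instead of being averaged away.
purchases["month"] = pd.to_datetime(purchases["sale_date"]).dt.to_period("M")
monthly = (
    purchases.groupby(["product_family", "sku", "month"])["units"]
    .sum()
    .reset_index()
)

# Flag low rotation: SKUs in the bottom 10% of their product family by
# average monthly units (10% is an arbitrary illustrative cutoff).
avg_units = monthly.groupby(["product_family", "sku"])["units"].mean().reset_index()
cutoff = avg_units.groupby("product_family")["units"].transform(lambda s: s.quantile(0.10))
low_rotation = avg_units[avg_units["units"] <= cutoff]
print(low_rotation)

# Rollout (250 stores): same aggregation, expressed for a cluster once the data
# no longer fits in memory, e.g. PySpark over columnar files:
#   spark.read.parquet("purchases/").groupBy("product_family", "sku")...
```

Keeping the model logic stable across both stages is exactly the lesson above: only the data plumbing changes, and only when scale forces it to.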
Example #2: Digitizing paper receipts—workflow beats model sophistication
In another project, the team digitized paper receipts by taking a photo and extracting structured data. They evaluated:
- A more ambitious approach (OCR + NLP) with broader generalization potential but heavy variability and tuning costs, and
- A simpler approach using a third-party OCR service plus coordinates and rules—faster to test, but less generalizable and with cost considerations.
They chose the simpler option to validate market acceptance and then discovered the biggest adoption lever wasn’t the model at all.
The original workflow required photo first, then email entry, which slowed down both customers and staff. By reversing the order—email first while the employee is serving the customer, then photo afterward—adoption doubled.
Takeaway: workflow drives outcomes more than the model does, especially early on.
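For a sense of what "a third-party OCR service plus coordinates and rules" can look like, here is a minimal sketch; the response format, regexes, and receipt layout are assumptions for illustration, not the actual service or schema the team used.

```python
# Illustrative "OCR output + coordinates + rules" extraction.
# The word/bounding-box format and the field rules are hypothetical.
import re

# Imagine the OCR service returned recognized words with pixel coordinates.
ocr_words = [
    {"text": "SUPERMARKET", "x": 120, "y": 40},
    {"text": "12/03/2024",  "x": 300, "y": 90},
    {"text": "TOTAL",       "x": 80,  "y": 620},
    {"text": "23.45",       "x": 310, "y": 620},
]

def extract_total(words, same_line_tolerance=10):
    """Rule: the amount printed on the same line as the word 'TOTAL'."""
    anchors = [w for w in words if w["text"].upper() == "TOTAL"]
    for anchor in anchors:
        for w in words:
            same_line = abs(w["y"] - anchor["y"]) <= same_line_tolerance
            if same_line and re.fullmatch(r"\d+[.,]\d{2}", w["text"]):
                return float(w["text"].replace(",", "."))
    return None

def extract_date(words):
    """Rule: first token that looks like DD/MM/YYYY."""
    for w in words:
        if re.fullmatch(r"\d{2}/\d{2}/\d{4}", w["text"]):
            return w["text"]
    return None

print(extract_total(ocr_words))  # 23.45
print(extract_date(ocr_words))   # 12/03/2024
```

The trade-off is the one described above: rules like these are quick to write and test, but each new receipt layout tends to need new rules, which is where the OCR + NLP route would have generalized better.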
Three ways data creates business value
One of the most useful sections of the video is Martin’s value framework:
There are only three ways these disciplines create business value:
- Improve human decisions → data science
- Automate decisions → AI
- Scale what already works → big data
And a warning: if your project doesn’t clearly fit one of these three, it’s likely to create cost, not value.
Martin adds practical criteria:
- The decision must happen frequently enough to justify the effort
- Economic impact must be measurable
- There must be an accountable owner to act on outputs
- Outputs must be embedded in daily workflows
- If the system disappeared tomorrow, something measurable should break
That “what would break” test is a sharp filter for vanity dashboards and disconnected analytics.
Risks, errors, and why LLMs raise the stakes
Martin highlights two major risks:
- Error cost: A model can be “95% accurate” but the remaining 5% may carry catastrophic consequences depending on the domain.
- Data quality: “Garbage in, garbage out”—except now it’s expensive garbage.
Martin emphasizes that hallucinations and errors must be managed, not ignored, often by placing deterministic controls between model output and customer-facing decisions.
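One generic way to implement that kind of control (a sketch of the pattern, not the specific mechanism from the talk; the refund scenario, fields, and limits are invented): validate whatever the model proposes against deterministic business rules, and route anything that fails to a human.

```python
# Illustrative deterministic guard between a model/LLM output and a
# customer-facing action. Fields and limits are hypothetical.
from dataclasses import dataclass

@dataclass
class RefundDecision:
    amount: float
    currency: str
    reason: str

MAX_AUTO_REFUND = 50.00            # hard policy limit, set by the business, not the model
ALLOWED_CURRENCIES = {"EUR", "USD"}

def validate(decision: RefundDecision) -> list[str]:
    """Deterministic checks the model output must pass before anyone acts on it."""
    problems = []
    if not (0 < decision.amount <= MAX_AUTO_REFUND):
        problems.append(f"amount {decision.amount} outside auto-approval range")
    if decision.currency not in ALLOWED_CURRENCIES:
        problems.append(f"unsupported currency {decision.currency}")
    if not decision.reason.strip():
        problems.append("missing reason")
    return problems

def handle(decision: RefundDecision) -> str:
    problems = validate(decision)
    if problems:
        # Error cost is asymmetric: when in doubt, escalate to a person.
        return "escalated to human review: " + "; ".join(problems)
    return f"auto-approved refund of {decision.amount:.2f} {decision.currency}"

print(handle(RefundDecision(23.45, "EUR", "damaged item")))
print(handle(RefundDecision(900.0, "EUR", "policy the model made up")))
```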
The “simplicity stack” questions you can steal
Martin closes with a lightweight framework for choosing the simplest setup that can work:
- Does the data fit in memory on a single machine now or soon?
- How fast must the decision happen to create value?
- Are you supporting a human decision or automating it?
- Are inputs structured or unstructured?
- What model complexity is truly needed?
- Who will operate and maintain the system over time?
Start simple, prove value, then scale—not the other way around.
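If it helps to keep those questions in front of a team, here is a toy checklist helper; the questions mirror the list above, but the encoding and the suggested starting points are an illustrative mapping of my own, not a formula from the talk.

```python
# Toy "simplicity stack" checklist: encode the answers, get a starting point.
# The answer-to-suggestion mapping is illustrative, not prescriptive.
from dataclasses import dataclass

@dataclass
class StackQuestions:
    fits_on_one_machine: bool        # now or in the foreseeable future
    decision_latency_seconds: float  # how fast the decision must happen
    automating_decision: bool        # False = supporting a human decision
    unstructured_inputs: bool        # images, free text, audio, ...
    has_operator: bool               # someone to run and maintain the system

def suggest(q: StackQuestions) -> list[str]:
    out = []
    if q.fits_on_one_machine:
        out.append("Start on a single machine (e.g. Python + SQL); skip the cluster.")
    else:
        out.append("Scale is real: plan for distributed processing (big data).")
    if q.automating_decision:
        out.append("Automation means AI: budget for error cost and guardrails up front.")
    else:
        out.append("Supporting a human: analysis and dashboards (data science) may be enough.")
    if q.decision_latency_seconds < 1:
        out.append("Sub-second decisions rule a human out of the runtime loop.")
    if q.unstructured_inputs:
        out.append("Unstructured inputs usually mean heavier models and more tuning.")
    if not q.has_operator:
        out.append("No clear operator or owner: simplify further, or don't build it.")
    return out

for line in suggest(StackQuestions(True, 3600, False, False, True)):
    print("-", line)
```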
About Martín Hernández Navarro
Martin Hernandez Navarro is a mathematician and physicist with a strong business orientation, specializing in product development, software engineering, and technology strategy. He began his career at Accenture Madrid in 2016, moved to investment banking at UBS Zurich in 2018, and founded TIDI Systems in 2020. At TIDI he develops software solutions and technology prototypes in areas such as big data, data science, applied artificial intelligence, recommender systems, and retail profitability analysis, building both front-end and back-end systems with Python, Rust, React JS, and modern cloud architectures.
He also works as a consultant for SMEs and growing companies, supporting them in operational improvement, financial analysis and the design of business strategies. He combines technical depth with business insight to help organizations optimize processes, make data-driven decisions and translate complex challenges into impactful technological solutions.
Related Blog Posts
- Mark Bennett on Using Claude Code for Application Development
- Mark Bennett: Using Claude Code in Teams
- Andrew Shindyapin: AI’s Impact on Software Development
- Alex Panait on Current Trends and Possible Futures for AI
- AI in Action: Practical Automation by Alex Panait
- Matt Trifiro on Lessons Learned using AI for Marketing
- John Nash on AI Workflows for Outreach
Image source: Licensed from 123RF 123rf.com/profile_halaluya

