[Hands-on] Build a Multi-Agent Financial Analyst Using Llama 4

.. PLUS: FIRE-1 Agent from Firecrawl, and Markitdown to turn any document into LLM-ready data

In today’s newsletter:

  • [Hands-on] Multi-Agent Financial Analyst using Llama 4 and Crew AI

  • FIRE-1 Agent from Firecrawl: Scrape entire websites with just a prompt

  • Markitdown from Microsoft: Turn any document into LLM-ready data

  • Top Tutorial: Stanford AI Agents Course

Reading time: 3 minutes.

Multi-Agent Financial Analyst using Llama 4 📈

Meta recently released Llama 4, featuring two models: Llama 4 Maverick, a 400B-parameter mixture-of-experts model designed for deep reasoning, and Llama 4 Scout, a smaller, efficient model optimized for long context on a single GPU.

Today, we’ll show you how to build a multi-agent financial analyst.

Input a simple query like a stock symbol, and the agent stack delivers:

  • A clear Executive Summary

  • A structured Performance Report

  • Actionable insights in a clean, readable format

The code repository is linked later in the issue.

Tech Stack:

  • SambaNova AI for Llama 4 Maverick inference

  • CrewAI for multi-agent orchestration

  • Custom YFinance tool to fetch real-time stock data

Let’s implement it!

Step 1: Get API Access

Grab an API key from SambaNova AI to use Llama 4 Maverick and set it as an environment variable.
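If you want to set it from code rather than your shell, a minimal sketch looks like this (the variable name SAMBANOVA_API_KEY is an assumption; use whatever name your client or the repo expects):

```python
import os

# Assumed variable name -- replace with the key name your setup expects.
os.environ["SAMBANOVA_API_KEY"] = "<your-sambanova-api-key>"
```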

Step 2: Create a custom YFinance tool

To give our Finance agent direct access to real-time data, we create a custom CrewAI tool by subclassing BaseTool. This ensures the agents fetch live data rather than relying on stale training knowledge.
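Here is a minimal sketch of such a tool, assuming CrewAI’s BaseTool import path and the yfinance package; the class and field names are illustrative rather than the repo’s exact code:

```python
import yfinance as yf
from crewai.tools import BaseTool  # import path may differ across CrewAI versions


class StockDataTool(BaseTool):
    name: str = "stock_data_tool"
    description: str = "Fetches live market data for a given stock ticker symbol."

    def _run(self, symbol: str) -> str:
        # yfinance exposes price and valuation metrics through the .info dict
        info = yf.Ticker(symbol).info
        return (
            f"Price: {info.get('currentPrice')}, "
            f"52-week range: {info.get('fiftyTwoWeekLow')}-{info.get('fiftyTwoWeekHigh')}, "
            f"P/E: {info.get('trailingPE')}, "
            f"Market cap: {info.get('marketCap')}"
        )
```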

Step 3: Define Agents

We define two agents using CrewAI, both powered by Llama 4 Maverick served via SambaNova (a sketch follows the list):

  • StockAnalyst Agent: This agent plays the role of a seasoned Wall Street analyst. It uses the YFinance tool to fetch live stock data and breaks it down to read the market pulse for the given symbol.

  • ReportWriter Agent: This agent acts as a report specialist. It takes the raw analysis and turns it into a well-written, professional report that is easy to understand and presentable.
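Continuing from the tool above, a sketch of the two agent definitions. The role, goal, and backstory strings are paraphrased from this step, and the SambaNova model identifier is an assumption; check their docs for the exact Llama 4 Maverick name:

```python
from crewai import Agent, LLM

# The API key is read from the SAMBANOVA_API_KEY environment variable set in Step 1.
llm = LLM(model="sambanova/Llama-4-Maverick-17B-128E-Instruct")  # assumed model ID

stock_analyst = Agent(
    role="Stock Analyst",
    goal="Analyze live market data for a given stock symbol and surface the key metrics.",
    backstory="A seasoned Wall Street analyst who reads the market pulse from raw numbers.",
    tools=[StockDataTool()],
    llm=llm,
)

report_writer = Agent(
    role="Report Writer",
    goal="Turn raw analysis into a clear, professional report.",
    backstory="A reporting specialist who writes for busy decision-makers.",
    llm=llm,
)
```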

Step 4: Define Tasks for the Agents

  • Analysis Task: This task directs the StockAnalyst Agent to use the stock_data_tool to retrieve live data and analyze key metrics such as price, 52-week range, P/E ratio, and other financial indicators. The output is a set of findings that highlight the stock’s current performance.

  • Report Task: This task instructs the ReportWriter Agent to take the analysis from the StockAnalyst Agent and format it into a well-structured, professional report, presenting the findings in a clean and understandable manner.
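Continuing the sketch, here are the two tasks wired to those agents; the descriptions are condensed from the prose above:

```python
from crewai import Task

analysis_task = Task(
    description=(
        "Use stock_data_tool to fetch live data for {symbol} and analyze price, "
        "52-week range, P/E ratio, and other key financial indicators."
    ),
    expected_output="A concise set of findings on the stock's current performance.",
    agent=stock_analyst,
)

report_task = Task(
    description=(
        "Take the analyst's findings and format them into a well-structured, "
        "professional report with an executive summary."
    ),
    expected_output="A clean, readable performance report.",
    agent=report_writer,
    context=[analysis_task],  # the report builds on the analysis output
)
```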

Step 5: Assembling the Crew

Finally, we combine the agents and tasks into a Crew using CrewAI, linking them so that a single input runs the analysis and generates the final report.
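Putting it together, one kickoff call runs both tasks in order; the "symbol" input fills the {symbol} placeholder in the analysis task:

```python
from crewai import Crew, Process

crew = Crew(
    agents=[stock_analyst, report_writer],
    tasks=[analysis_task, report_task],
    process=Process.sequential,  # analysis first, then the report
)

result = crew.kickoff(inputs={"symbol": "AAPL"})
print(result)
```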

You can find the entire code for the app in this GitHub repo → (don’t forget to star the repo)

FIRE-1 Agent from Firecrawl

Firecrawl has released FIRE-1, a new AI agent designed to scrape websites using only a prompt!

It can navigate dynamic websites, interact with content, and fill out forms to gather the information you're looking for.

Just add an "agent" object to your API request with instructions on what to find.

The agent handles the rest - planning its own path through complex websites to gather exactly what you need.
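As a rough illustration of the idea (the endpoint path and field names below are assumptions; check Firecrawl’s FIRE-1 docs for the exact schema), such a request might look like this:

```python
import os
import requests

# Hypothetical sketch: an "agent" object with a FIRE-1 prompt added to a scrape request.
resp = requests.post(
    "https://api.firecrawl.dev/v1/scrape",  # assumed endpoint
    headers={"Authorization": f"Bearer {os.environ['FIRECRAWL_API_KEY']}"},
    json={
        "url": "https://example.com/products",
        "formats": ["markdown"],
        "agent": {
            "model": "FIRE-1",
            "prompt": "Navigate the catalog pages and collect every product's name and price.",
        },
    },
)
print(resp.json())
```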

Top Open Source Repo - Markitdown

Turn any document into Markdown format!

Microsoft has released MarkItDown, a lightweight Python library that converts any document to Markdown for use with LLMs.

Key Features:

  • Supports multiple formats: Convert PDFs, Word, Excel, PowerPoint, images, and audio.

  • Auto-extracts metadata: Pulls EXIF data, runs OCR, and generates transcripts.

  • Multiple interfaces: Use via CLI, Python API, or Docker.

  • Describes images: Adds LLM-generated alt text for visuals.

  • Handles batch jobs: Process multiple files at once with ease.

This tool streamlines the process of preparing diverse data types for LLM applications, making it easier to integrate various document formats into AI workflows.
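A minimal Python usage sketch (the file name is illustrative):

```python
from markitdown import MarkItDown

md = MarkItDown()
result = md.convert("quarterly_report.pdf")  # also handles .docx, .xlsx, .pptx, images, audio
print(result.text_content)  # Markdown, ready to pass to an LLM
```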

Top Tutorial: Stanford AI Agents Course

Stanford published a 1-hour lecture on Agentic AI, a compact yet comprehensive resource for anyone looking to understand or build agent-based LLM systems.

This session breaks down the fundamentals of how modern agents reflect, plan, reason, and interact with tools, making it ideal for engineers moving beyond basic prompting.

Key Topics Covered:

  • LLM training & optimization: How models are adapted for different tasks

  • Prompt design: Techniques for guiding model behavior effectively

  • Reflection: How agents self-evaluate and iterate on outputs

  • Planning: Structuring multi-step reasoning processes

  • Tool use: Integrating external functions and APIs

  • Agentic principles: What defines an agentic system

  • Hands-on example: Building a customer support AI agent

  • Design patterns: Structuring reliable agent workflows

  • Ethical considerations: Key risks and deployment guidelines

This lecture provides a clear and practical overview of Agentic AI, useful for anyone working on LLM infrastructure or applied AI systems.

That’s a Wrap

That’s all for today. Thank you for reading today’s edition. See you in the next issue with more AI Engineering insights.

PS: We curate this AI Engineering content for free, and your support means everything. If you find value in what you read, consider sharing it with a friend or two.

Your feedback is valuable: If there’s a topic you’re stuck on or curious about, reply to this email. We’re building this for you, and your feedback helps shape what we send.

WORK WITH US

Looking to promote your company, product, or service to 100K+ AI developers? Get in touch today by replying to this email.