create-llama
The easiest way to get started with LlamaIndex is by using `create-llama`. This CLI tool enables you to quickly start building a new LlamaIndex application, with everything set up for you.
Just run:

```shell
npx create-llama@latest
```

to get started, or watch this video for a demo session:
https://github.com/user-attachments/assets/dd3edc36-4453-4416-91c2-d24326c6c167
Once your app is generated, run:

```shell
npm run dev
```

to start the development server. You can then visit http://localhost:3000 to see your app.
Here's how it looks:
https://github.com/user-attachments/assets/d57af1a1-d99b-4e9c-98d9-4cbd1327eff8
You can supply your own data; the app will index it and answer questions. Your generated app will have a folder called `data` (if you're using Express or Python and generate a frontend, it will be `./backend/data`).
The app will ingest any supported files you put in this directory. Your Next.js and Express apps use LlamaIndex.TS, so they can ingest any PDF, text, CSV, Markdown, Word, or HTML file. The Python backend can read even more types, including video and audio files.
Before you can use your data, you need to index it. If you're using the Next.js or Express apps, run:

```shell
npm run generate
```

Then restart your app. Remember, you'll need to re-run `generate` if you add new files to your `data` folder.
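Putting the steps above together, a typical re-indexing workflow for a Next.js app might look like this (the project folder `my-app` and the file `report.pdf` are placeholders, not names from the generator):

```shell
# Drop a new document into the data folder (any supported file type works)
cp ~/Documents/report.pdf my-app/data/

# Re-run the indexer so the new file is picked up
cd my-app
npm run generate

# Restart the dev server, then ask questions at http://localhost:3000
npm run dev
```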
If you're using the Python backend, you can trigger indexing of your data by calling:

```shell
poetry run generate
```
Optionally, generate a frontend if you've selected the Python or Express backends. If you do so, `create-llama` will generate two folders: `frontend`, for your Next.js-based frontend code, and `backend`, containing your API.
The app will default to OpenAI's `gpt-4o-mini` LLM and `text-embedding-3-large` embedding model. If you want to use different OpenAI models, add the `--ask-models` CLI parameter.
You can also replace OpenAI with one of our dozens of other supported LLMs. To do so, you have to manually change the generated code: edit the `settings.ts` file for TypeScript projects or the `settings.py` file for Python projects.
The simplest thing to do is run `create-llama` in interactive mode:

```shell
npx create-llama@latest
# or
npm create llama@latest
# or
yarn create llama
# or
pnpm create llama@latest
```
You will be asked for the name of your project, along with other configuration options, something like this:

```
>> npm create llama@latest
Need to install the following packages:
create-llama@latest
Ok to proceed? (y) y
✔ What is your project named? … my-app
✔ Which template would you like to use? › Agentic RAG (e.g. chat with docs)
✔ Which framework would you like to use? › NextJS
✔ Would you like to set up observability? › No
✔ Please provide your OpenAI API key (leave blank to skip): …
✔ Which data source would you like to use? › Use an example PDF
✔ Would you like to add another data source? › No
✔ Would you like to use LlamaParse (improved parser for RAG - requires API key)? … no / yes
✔ Would you like to use a vector database? › No, just store the data in the file system
✔ Would you like to build an agent using tools? If so, select the tools here, otherwise just press enter › Weather
? How would you like to proceed? › - Use arrow-keys. Return to submit.
   Just generate code (~1 sec)
❯  Start in VSCode (~1 sec)
   Generate code and install dependencies (~2 min)
   Generate code, install dependencies, and run the app (~2 min)
```
You can also pass command-line arguments to set up a new project non-interactively. See `create-llama --help`:

```shell
create-llama <project-directory> [options]

Options:
  -V, --version  output the version number
  --use-npm      Explicitly tell the CLI to bootstrap the app using npm
  --use-pnpm     Explicitly tell the CLI to bootstrap the app using pnpm
  --use-yarn     Explicitly tell the CLI to bootstrap the app using Yarn
```
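For example, a non-interactive run that pins the package manager might look like this (the project name `my-app` is a placeholder; run `create-llama --help` for the full list of supported flags):

```shell
# Scaffold a project called my-app, bootstrapping it with npm
npx create-llama@latest my-app --use-npm
```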
Inspired by and adapted from `create-next-app`.