Gemini Batch API now supports Embeddings and OpenAI Compatibility


Today we are extending the Gemini Batch API to support the newly launched Gemini Embedding model, and developers can now use the OpenAI SDK to submit and process batches.

This builds on the initial launch of the Gemini Batch API, which enables asynchronous processing at 50% lower rates for high-volume, latency-tolerant use cases.

Batch API Embedding Support

Our new Gemini Embedding model is already powering thousands of production deployments. Now you can use it with the Batch API at much higher rate limits and at half the price – $0.075 per 1M input tokens – unlocking even more cost-sensitive, latency-tolerant, or asynchronous use cases.

Get started with Batch Embeddings with only a few lines of code:

# Create a JSONL with your requests:
# {"key": "request_1", "request": {"output_dimensionality": 512, "content": {"parts": [{"text": "Explain GenAI"}]}}}
# {"key": "request_2", "request": {"output_dimensionality": 512, "content": {"parts": [{"text": "Explain quantum computing"}]}}}

from google import genai

client = genai.Client()

uploaded_batch_requests = client.files.upload(file='embedding_requests.jsonl')

batch_job = client.batches.create_embeddings(
    model="gemini-embedding-001",
    src={"file_name": uploaded_batch_requests.name}
)


print(f"Created embedding batch job: {batch_job.name}")

# Poll until the job completes; batch jobs can take up to 24 hours
import time

while batch_job.state.name not in ('JOB_STATE_SUCCEEDED', 'JOB_STATE_FAILED', 'JOB_STATE_CANCELLED'):
    time.sleep(30)
    batch_job = client.batches.get(name=batch_job.name)

if batch_job.state.name == 'JOB_STATE_SUCCEEDED':
    result_file_name = batch_job.dest.file_name
    file_content_bytes = client.files.download(file=result_file_name)
    file_content = file_content_bytes.decode('utf-8')

    for line in file_content.splitlines():
        print(line)
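The JSONL file uploaded above can be generated programmatically. Here is a minimal sketch that produces the two example requests shown in the comments at the top of the snippet:

```python
import json

# Texts to embed; the keys let you match results back to inputs.
texts = {
    "request_1": "Explain GenAI",
    "request_2": "Explain quantum computing",
}

with open("embedding_requests.jsonl", "w") as f:
    for key, text in texts.items():
        request = {
            "key": key,
            "request": {
                "output_dimensionality": 512,
                "content": {"parts": [{"text": text}]},
            },
        }
        # One JSON object per line, as the Batch API expects
        f.write(json.dumps(request) + "\n")
```

The resulting `embedding_requests.jsonl` is what the `client.files.upload` call above sends to the service.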


For more information and examples, see our documentation.

OpenAI compatibility for Batch API

Switching to Gemini Batch API is now as easy as updating a few lines of code if you use the OpenAI SDK compatibility layer:

from openai import OpenAI

openai_client = OpenAI(
    api_key="GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/"
)

# Upload a JSONL file in the OpenAI batch input format
batch_input_file = openai_client.files.create(
    file=open("batch_requests.jsonl", "rb"),
    purpose="batch"
)
batch_input_file_id = batch_input_file.id

# Create batch
batch = openai_client.batches.create(
    input_file_id=batch_input_file_id,
    endpoint="/v1/chat/completions",
    completion_window="24h"
)

# Poll for status; batches complete within the 24-hour window
import time

while True:
    batch = openai_client.batches.retrieve(batch.id)
    if batch.status in ("completed", "failed", "cancelled", "expired"):
        break
    time.sleep(30)

if batch.status == "completed":
    # Download the results file
    result = openai_client.files.content(batch.output_file_id)
    print(result.text)

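For reference, each line of the uploaded batch file follows the OpenAI batch request shape. A minimal sketch of one request line – the model name and filename here are illustrative assumptions, not values from this announcement:

```python
import json

# One request per line in the OpenAI batch input format.
request_line = {
    "custom_id": "request-1",          # your identifier for matching results
    "method": "POST",
    "url": "/v1/chat/completions",
    "body": {
        "model": "gemini-2.5-flash",   # assumed model name for illustration
        "messages": [{"role": "user", "content": "Explain GenAI"}],
    },
}

with open("batch_requests.jsonl", "w") as f:
    f.write(json.dumps(request_line) + "\n")
```

Results in the output file carry the same `custom_id`, so you can join them back to your inputs.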

You can read more about the OpenAI Compatibility layer and batch support in our documentation.

We are continuously expanding our batch offering to further optimize the cost of using the Gemini API, so keep an eye out for further updates. In the meantime, happy building!
