From 30f712779b2d405f6fb852e479162f2ac498f5e7 Mon Sep 17 00:00:00 2001 From: kqlio67 Date: Fri, 25 Oct 2024 20:25:47 +0300 Subject: docs(docs/async_client.md): update guide with Anthropic compatibility and improved chat completions example --- docs/async_client.md | 27 ++++++++++++++++++++++++++- 1 file changed, 26 insertions(+), 1 deletion(-) (limited to 'docs') diff --git a/docs/async_client.md b/docs/async_client.md index 0c296c09..05c7a0b8 100644 --- a/docs/async_client.md +++ b/docs/async_client.md @@ -3,13 +3,14 @@ The G4F async client API is a powerful asynchronous interface for interacting wi ## Compatibility Note -The G4F async client API is designed to be compatible with the OpenAI API, making it easy for developers familiar with OpenAI's interface to transition to G4F. +The G4F async client API is designed to be compatible with the OpenAI and Anthropic API, making it easy for developers familiar with OpenAI's or Anthropic's interface to transition to G4F. ## Table of Contents - [Introduction](#introduction) - [Key Features](#key-features) - [Getting Started](#getting-started) - [Initializing the Client](#initializing-the-client) + - [Creating Chat Completions](#creating-chat-completions) - [Configuration](#configuration) - [Usage Examples](#usage-examples) - [Text Completions](#text-completions) @@ -51,6 +52,30 @@ client = Client( ) ``` + +## Creating Chat Completions +**Here’s an example of creating a chat completion:** +```python +response = await async_client.chat.completions.create( + system="You are a helpful assistant.", + model="gpt-3.5-turbo", + messages=[ + { + "role": "user", + "content": "Say this is a test" + } + ] + # Add other parameters as needed +) +``` + +**This example:** + - Sets a system message to define the assistant's role + - Sends a specific prompt: `Say this is a test` + - Shows where to pass optional parameters such as temperature and max_tokens + - Returns a complete, non-streamed response + +You 
can adjust these parameters based on your specific needs. ### Configuration -- cgit v1.2.3 From 96e1efee0f31fad48dafa417551b31f636609227 Mon Sep 17 00:00:00 2001 From: kqlio67 Date: Fri, 25 Oct 2024 20:29:03 +0300 Subject: docs(docs/client.md): update G4F Client API guide --- docs/client.md | 30 ++++++++++++++++++++++++++++-- 1 file changed, 28 insertions(+), 2 deletions(-) (limited to 'docs') diff --git a/docs/client.md b/docs/client.md index 08445402..9621e3c2 100644 --- a/docs/client.md +++ b/docs/client.md @@ -7,6 +7,7 @@ - [Getting Started](#getting-started) - [Switching to G4F Client](#switching-to-g4f-client) - [Initializing the Client](#initializing-the-client) + - [Creating Chat Completions](#creating-chat-completions) - [Configuration](#configuration) - [Usage Examples](#usage-examples) - [Text Completions](#text-completions) @@ -22,7 +23,7 @@ ## Introduction -Welcome to the G4F Client API, a cutting-edge tool for seamlessly integrating advanced AI capabilities into your Python applications. This guide is designed to facilitate your transition from using the OpenAI client to the G4F Client, offering enhanced features while maintaining compatibility with the existing OpenAI API. +Welcome to the G4F Client API, a cutting-edge tool for seamlessly integrating advanced AI capabilities into your Python applications. This guide is designed to facilitate your transition from using the OpenAI or Anthropic client to the G4F Client, offering enhanced features while maintaining compatibility with the existing OpenAI and Anthropic API. ## Getting Started ### Switching to G4F Client @@ -42,7 +43,7 @@ from g4f.client import Client as OpenAI -The G4F Client preserves the same familiar API interface as OpenAI, ensuring a smooth transition process. +The G4F Client preserves the same familiar API interface as OpenAI or Anthropic, ensuring a smooth transition process. ## Initializing the Client To utilize the G4F Client, create a new instance. 
**Below is an example showcasing custom providers:** @@ -56,6 +57,30 @@ client = Client( # Add any other necessary parameters ) ``` + +## Creating Chat Completions +**Here’s an example of creating a chat completion:** +```python +response = client.chat.completions.create( + system="You are a helpful assistant.", + model="gpt-3.5-turbo", + messages=[ + { + "role": "user", + "content": "Say this is a test" + } + ] + # Add any other necessary parameters +) +``` + +**This example:** + - Sets a system message to define the assistant's role + - Sends a specific prompt: `Say this is a test` + - Shows where to pass optional parameters such as temperature and max_tokens + - Returns a complete, non-streamed response + +You can adjust these parameters based on your specific needs. ## Configuration @@ -271,6 +296,7 @@ while True: try: # Get GPT's response response = client.chat.completions.create( + system="You are a helpful assistant.", messages=messages, model=g4f.models.default, ) -- cgit v1.2.3 From 17384a111da17458a1407926f4ba7c3014e3c476 Mon Sep 17 00:00:00 2001 From: kqlio67 Date: Fri, 25 Oct 2024 20:33:37 +0300 Subject: docs(docs/interference-api.md): update Interference API usage guide --- docs/interference-api.md | 106 +++++++++++++++++++++++++++++++++++------------ 1 file changed, 79 insertions(+), 27 deletions(-) (limited to 'docs') diff --git a/docs/interference-api.md b/docs/interference-api.md index 4050f84f..617df9cd 100644 --- a/docs/interference-api.md +++ b/docs/interference-api.md @@ -1,23 +1,30 @@ - # G4F - Interference API Usage Guide - + ## Table of Contents - [Introduction](#introduction) - [Running the Interference API](#running-the-interference-api) - [From PyPI Package](#from-pypi-package) - [From Repository](#from-repository) - - [Usage with OpenAI Library](#usage-with-openai-library) - - [Usage with Requests Library](#usage-with-requests-library) + - [Using the Interference API](#using-the-interference-api) + - [Basic 
Usage](#basic-usage) + - [With OpenAI Library](#with-openai-library) + - [With Requests Library](#with-requests-library) - [Key Points](#key-points) + - [Conclusion](#conclusion) + ## Introduction -The Interference API allows you to serve other OpenAI integrations with G4F. It acts as a proxy, translating requests to the OpenAI API into requests to the G4F providers. +The G4F Interference API is a powerful tool that allows you to serve other OpenAI integrations using G4F (Gpt4free). It acts as a proxy, translating requests intended for the OpenAI and Anthropic API into requests compatible with G4F providers. This guide will walk you through the process of setting up, running, and using the Interference API effectively. + ## Running the Interference API +**You can run the Interference API in two ways:** using the PyPI package or from the repository. + ### From PyPI Package -**You can run the Interference API directly from the G4F PyPI package:** +**To run the Interference API directly from the G4F PyPI package, use the following Python code:** + ```python from g4f.api import run_api @@ -25,37 +32,80 @@ run_api() ``` - ### From Repository -Alternatively, you can run the Interference API from the cloned repository. +**If you prefer to run the Interference API from the cloned repository, you have two options:** -**Run the server with:** +1. **Using the command line:** ```bash g4f api ``` -or + +2. 
**Using Python:** ```bash python -m g4f.api.run ``` +**Once running, the API will be accessible at:** `http://localhost:1337/v1` -## Usage with OpenAI Library +## Using the Interference API - +### Basic Usage +**You can interact with the Interference API using curl commands for both text and image generation:** +**For text generation:** +```bash +curl -X POST "http://localhost:1337/v1/chat/completions" \ + -H "Content-Type: application/json" \ + -d '{ + "messages": [ + { + "role": "user", + "content": "Hello" + } + ], + "model": "gpt-3.5-turbo" + }' +``` + +**For image generation:** +1. **url:** +```bash +curl -X POST "http://localhost:1337/v1/images/generate" \ + -H "Content-Type: application/json" \ + -d '{ + "prompt": "a white siamese cat", + "model": "dall-e-3", + "response_format": "url" + }' +``` + +2. **b64_json** +```bash +curl -X POST "http://localhost:1337/v1/images/generate" \ + -H "Content-Type: application/json" \ + -d '{ + "prompt": "a white siamese cat", + "model": "dall-e-3", + "response_format": "b64_json" + }' +``` + + +### With OpenAI Library + +**You can use the Interference API with the OpenAI Python library by changing the `base_url`:** ```python from openai import OpenAI client = OpenAI( api_key="", - # Change the API base URL to the local interference API - base_url="http://localhost:1337/v1" + base_url="http://localhost:1337/v1" ) response = client.chat.completions.create( model="gpt-3.5-turbo", - messages=[{"role": "user", "content": "write a poem about a tree"}], + messages=[{"role": "user", "content": "Write a poem about a tree"}], stream=True, ) @@ -68,20 +118,20 @@ else: content = token.choices[0].delta.content if content is not None: print(content, end="", flush=True) -``` +``` -## Usage with Requests Library -You can also send requests directly to the Interference API using the requests library. 
+### With Requests Library -**Send a POST request to `/v1/chat/completions` with the request body containing the model and other parameters:** +**You can also send requests directly to the Interference API using the `requests` library:** ```python import requests url = "http://localhost:1337/v1/chat/completions" + body = { - "model": "gpt-3.5-turbo", + "model": "gpt-3.5-turbo", "stream": False, "messages": [ {"role": "assistant", "content": "What can you do?"} @@ -92,18 +142,20 @@ json_response = requests.post(url, json=body).json().get('choices', []) for choice in json_response: print(choice.get('message', {}).get('content', '')) -``` - +``` ## Key Points -- The Interference API translates OpenAI API requests into G4F provider requests -- You can run it from the PyPI package or the cloned repository -- It supports usage with the OpenAI Python library by changing the `base_url` -- Direct requests can be sent to the API endpoints using libraries like `requests` + - The Interference API translates OpenAI API requests into G4F provider requests. + - It can be run from either the PyPI package or the cloned repository. + - The API supports usage with the OpenAI Python library by changing the `base_url`. + - Direct requests can be sent to the API endpoints using libraries like `requests`. + - Both text and image generation are supported. -**_The Interference API allows easy integration of G4F with existing OpenAI-based applications and tools._** +## Conclusion +The G4F Interference API provides a seamless way to integrate G4F with existing OpenAI-based applications and tools. By following this guide, you should now be able to set up, run, and use the Interference API effectively. Whether you're using it for text generation, image creation, or as a drop-in replacement for OpenAI in your projects, the Interference API offers flexibility and power for your AI-driven applications. 
+ --- -- cgit v1.2.3 From 7fba6f59a70bf22bb312d007f218113a6090ff31 Mon Sep 17 00:00:00 2001 From: kqlio67 Date: Sat, 26 Oct 2024 18:52:15 +0300 Subject: Update (docs/providers-and-models.md) --- docs/providers-and-models.md | 2 ++ 1 file changed, 2 insertions(+) (limited to 'docs') diff --git a/docs/providers-and-models.md b/docs/providers-and-models.md index a6d7ec4b..4bb22db4 100644 --- a/docs/providers-and-models.md +++ b/docs/providers-and-models.md @@ -51,6 +51,7 @@ This document provides an overview of various AI providers and models, including |[free.netfly.top](https://free.netfly.top)|`g4f.Provider.FreeNetfly`|✔|❌|❌|?|![Cloudflare](https://img.shields.io/badge/Cloudflare-f48d37)|❌| |[gemini.google.com](https://gemini.google.com)|`g4f.Provider.Gemini`|✔|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|✔| |[ai.google.dev](https://ai.google.dev)|`g4f.Provider.GeminiPro`|✔|❌|✔|?|![Active](https://img.shields.io/badge/Active-brightgreen)|✔| +|[app.giz.ai](https://app.giz.ai/assistant/)|`g4f.Provider.GizAI`|`gemini-flash, gemini-pro, gpt-4o-mini, gpt-4o, claude-3.5-sonnet, claude-3-haiku, llama-3.1-70b, llama-3.1-8b, mistral-large`|`sdxl, sd-1.5, sd-3.5, dalle-3, flux-schnell, flux1-pro`|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌| |[developers.sber.ru](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|✔| |[gprochat.com](https://gprochat.com)|`g4f.Provider.GPROChat`|`gemini-pro`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌| |[console.groq.com/playground](https://console.groq.com/playground)|`g4f.Provider.Groq`|✔|❌|❌|?|![Active](https://img.shields.io/badge/Active-brightgreen)|✔| @@ -201,6 +202,7 @@ This document provides an overview of various AI providers and models, including |sdxl-turbo|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/stabilityai/sdxl-turbo)| |sd-1.5|Stability AI|1+ 
Providers|[huggingface.co](https://huggingface.co/runwayml/stable-diffusion-v1-5)| |sd-3|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3)| +|sd-3.5|Stability AI|1+ Providers|[stability.ai](https://stability.ai/news/introducing-stable-diffusion-3-5)| |playground-v2.5|Playground AI|1+ Providers|[huggingface.co](https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic)| |flux|Black Forest Labs|2+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)| |flux-pro|Black Forest Labs|2+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)| -- cgit v1.2.3 From 93881efecbf90b194e1dd30afe4ceb4004999809 Mon Sep 17 00:00:00 2001 From: kqlio67 Date: Sat, 26 Oct 2024 19:33:38 +0300 Subject: feat(docs/providers-and-models.md): add GizAI provider with multiple models --- docs/providers-and-models.md | 31 +++++++++++++++---------------- 1 file changed, 15 insertions(+), 16 deletions(-) (limited to 'docs') diff --git a/docs/providers-and-models.md b/docs/providers-and-models.md index 4bb22db4..18a36630 100644 --- a/docs/providers-and-models.md +++ b/docs/providers-and-models.md @@ -109,18 +109,18 @@ This document provides an overview of various AI providers and models, including |-------|---------------|-----------|---------| |gpt-3|OpenAI|1+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-base)| |gpt-3.5-turbo|OpenAI|5+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-3-5-turbo)| -|gpt-4|OpenAI|33+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)| -|gpt-4-turbo|OpenAI|2+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)| -|gpt-4o|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)| +|gpt-4|OpenAI|9+ 
Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)| +|gpt-4-turbo|OpenAI|3+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)| +|gpt-4o|OpenAI|10+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)| |gpt-4o-mini|OpenAI|14+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)| |o1|OpenAI|1+ Providers|[platform.openai.com](https://openai.com/index/introducing-openai-o1-preview/)| -|o1-mini|OpenAI|1+ Providers|[platform.openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)| +|o1-mini|OpenAI|2+ Providers|[platform.openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)| |llama-2-7b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-2-7b)| |llama-2-13b|Meta Llama|1+ Providers|[llama.com](https://www.llama.com/llama2/)| |llama-3-8b|Meta Llama|4+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)| |llama-3-70b|Meta Llama|4+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)| |llama-3.1-8b|Meta Llama|7+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)| -|llama-3.1-70b|Meta Llama|13+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)| +|llama-3.1-70b|Meta Llama|14+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)| |llama-3.1-405b|Meta Llama|5+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)| |llama-3.2-1b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-1B)| |llama-3.2-3b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/blog/llama32)| @@ -128,17 +128,17 @@ This document provides an overview of various AI providers and models, including |llama-3.2-90b|Meta Llama|2+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)| |llamaguard-7b|Meta 
Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/LlamaGuard-7b)| |llamaguard-2-8b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B)| -|mistral-7b|Mistral AI|5+ Providers|[mistral.ai](https://mistral.ai/news/announcing-mistral-7b/)| +|mistral-7b|Mistral AI|4+ Providers|[mistral.ai](https://mistral.ai/news/announcing-mistral-7b/)| |mixtral-8x7b|Mistral AI|6+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)| |mixtral-8x22b|Mistral AI|3+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-8x22b/)| -|mistral-nemo|Mistral AI|1+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)| -|mistral-large|Mistral AI|1+ Providers|[mistral.ai](https://mistral.ai/news/mistral-large-2407/)| +|mistral-nemo|Mistral AI|2+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)| +|mistral-large|Mistral AI|2+ Providers|[mistral.ai](https://mistral.ai/news/mistral-large-2407/)| |mixtral-8x7b-dpo|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)| |yi-34b|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)| -|hermes-3|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)| +|hermes-3|NousResearch|2+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)| |gemini|Google DeepMind|1+ Providers|[deepmind.google](http://deepmind.google/technologies/gemini/)| -|gemini-flash|Google DeepMind|3+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)| -|gemini-pro|Google DeepMind|9+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)| +|gemini-flash|Google DeepMind|4+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)| +|gemini-pro|Google DeepMind|10+ 
Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)| |gemma-2b|Google|5+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2b)| |gemma-2b-9b|Google|1+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2-9b)| |gemma-2b-27b|Google|2+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2-27b)| @@ -146,10 +146,10 @@ This document provides an overview of various AI providers and models, including |gemma-2|Google|2+ Providers|[huggingface.co](https://huggingface.co/blog/gemma2)| |gemma_2_27b|Google|1+ Providers|[huggingface.co](https://huggingface.co/blog/gemma2)| |claude-2.1|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-2)| -|claude-3-haiku|Anthropic|3+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)| +|claude-3-haiku|Anthropic|4+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)| |claude-3-sonnet|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)| |claude-3-opus|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)| -|claude-3.5-sonnet|Anthropic|5+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)| +|claude-3.5-sonnet|Anthropic|6+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)| |blackboxai|Blackbox AI|2+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)| |blackboxai-pro|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)| |yi-1.5-9b|01-ai|1+ Providers|[huggingface.co](https://huggingface.co/01-ai/Yi-1.5-9B)| @@ -197,7 +197,7 @@ This document provides an overview of various AI providers and models, including ### Image Models | Model | Base Provider | Providers | Website | |-------|---------------|-----------|---------| -|sdxl|Stability AI|2+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl)| +|sdxl|Stability 
AI|1+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl)| |sdxl-lora|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/blog/lcm_lora)| |sdxl-turbo|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/stabilityai/sdxl-turbo)| |sd-1.5|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/runwayml/stable-diffusion-v1-5)| @@ -212,10 +212,9 @@ This document provides an overview of various AI providers and models, including |flux-disney|Flux AI|1+ Providers|[]()| |flux-pixel|Flux AI|1+ Providers|[]()| |flux-4o|Flux AI|1+ Providers|[]()| -|flux-schnell|Black Forest Labs|1+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)| +|flux-schnell|Black Forest Labs|2+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)| |dalle|OpenAI|1+ Providers|[openai.com](https://openai.com/index/dall-e/)| |dalle-2|OpenAI|1+ Providers|[openai.com](https://openai.com/index/dall-e-2/)| -|dalle-3|OpenAI|2+ Providers|[openai.com](https://openai.com/index/dall-e-3/)| |emi||1+ Providers|[]()| |any-dark||1+ Providers|[]()| |midjourney|Midjourney|1+ Providers|[docs.midjourney.com](https://docs.midjourney.com/docs/model-versions)| -- cgit v1.2.3 From 8768a057534b91e463f428fb91f301325110415c Mon Sep 17 00:00:00 2001 From: kqlio67 Date: Sun, 27 Oct 2024 20:14:45 +0200 Subject: Update (docs/providers-and-models.md g4f/models.py g4f/Provider/nexra/) --- docs/providers-and-models.md | 7 ++----- 1 file changed, 2 insertions(+), 5 deletions(-) (limited to 'docs') diff --git a/docs/providers-and-models.md b/docs/providers-and-models.md index 18a36630..b3dbd9f1 100644 --- a/docs/providers-and-models.md +++ b/docs/providers-and-models.md @@ -64,10 +64,7 @@ This document provides an overview of various AI providers and models, including 
|[app.myshell.ai/chat](https://app.myshell.ai/chat)|`g4f.Provider.MyShell`|✔|❌|?|?|![Disabled](https://img.shields.io/badge/Disabled-red)|❌| |[nexra.aryahcr.cc/bing](https://nexra.aryahcr.cc/documentation/bing/en)|`g4f.Provider.NexraBing`|✔|❌|❌|✔|![Disabled](https://img.shields.io/badge/Disabled-red)|❌| |[nexra.aryahcr.cc/blackbox](https://nexra.aryahcr.cc/documentation/blackbox/en)|`g4f.Provider.NexraBlackbox`|`blackboxai` |❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌| -|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGPT`|`gpt-4, gpt-3.5-turbo, gpt-3` |❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌| -|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGPT4o`|`gpt-4o` |❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌| -|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGptV2`|`gpt-4` |❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌| -|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGptWeb`|`gpt-4` |❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌| +|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGPT`|`gpt-4, gpt-3.5-turbo, gpt-3, gpt-4o` |❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌| |[nexra.aryahcr.cc/dall-e](https://nexra.aryahcr.cc/documentation/dall-e/en)|`g4f.Provider.NexraDallE`|❌|`dalle`|❌|❌|![Active](https://img.shields.io/badge/Active-brightgreen)|❌| |[nexra.aryahcr.cc/dall-e](https://nexra.aryahcr.cc/documentation/dall-e/en)|`g4f.Provider.NexraDallE2`|❌|`dalle-2`|❌|❌|![Active](https://img.shields.io/badge/Active-brightgreen)|❌| 
|[nexra.aryahcr.cc/emi](https://nexra.aryahcr.cc/documentation/emi/en)|`g4f.Provider.NexraEmi`|❌|`emi`|❌|❌|![Active](https://img.shields.io/badge/Active-brightgreen)|❌| @@ -109,7 +106,7 @@ This document provides an overview of various AI providers and models, including |-------|---------------|-----------|---------| |gpt-3|OpenAI|1+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-base)| |gpt-3.5-turbo|OpenAI|5+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-3-5-turbo)| -|gpt-4|OpenAI|9+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)| +|gpt-4|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)| |gpt-4-turbo|OpenAI|3+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)| |gpt-4o|OpenAI|10+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)| |gpt-4o-mini|OpenAI|14+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)| -- cgit v1.2.3 From 61e74deb0f5cfbbeb7ebf909fe7fef7ac44baa5e Mon Sep 17 00:00:00 2001 From: kqlio67 Date: Mon, 28 Oct 2024 10:30:56 +0200 Subject: docs(docs/async_client.md): update G4F async client API guide --- docs/async_client.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) (limited to 'docs') diff --git a/docs/async_client.md b/docs/async_client.md index 05c7a0b8..357b0d86 100644 --- a/docs/async_client.md +++ b/docs/async_client.md @@ -189,7 +189,7 @@ async def main(): response = await client.images.async_generate( prompt="a white siamese cat", - model="dall-e-3" + model="flux" ) image_url = response.data[0].url @@ -210,7 +210,7 @@ async def main(): response = await client.images.async_generate( prompt="a white siamese cat", - model="dall-e-3", + model="flux", response_format="b64_json" ) @@ -242,7 +242,7 @@ async def main(): ) task2 = client.images.async_generate( - 
model="dall-e-3", + model="flux", prompt="a white siamese cat" ) -- cgit v1.2.3 From 72e8152853386bc40842c8150187b9b0a38426af Mon Sep 17 00:00:00 2001 From: kqlio67 Date: Mon, 28 Oct 2024 10:36:45 +0200 Subject: feat(docs/client.md): add base64 response format for image generation --- docs/client.md | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-) (limited to 'docs') diff --git a/docs/client.md b/docs/client.md index 9621e3c2..b4f351d3 100644 --- a/docs/client.md +++ b/docs/client.md @@ -154,7 +154,7 @@ from g4f.client import Client client = Client() response = client.images.generate( - model="dall-e-3", + model="flux", prompt="a white siamese cat" # Add any other necessary parameters ) @@ -164,6 +164,23 @@ image_url = response.data[0].url print(f"Generated image URL: {image_url}") ``` + +#### Base64 Response Format +```python +from g4f.client import Client + +client = Client() + +response = client.images.generate( + model="flux", + prompt="a white siamese cat", + response_format="b64_json" +) + +base64_text = response.data[0].b64_json +print(base64_text) +``` + ### Creating Image Variations -- cgit v1.2.3 From 50da35e5b3dcae4619642844266c50083f30e0b7 Mon Sep 17 00:00:00 2001 From: kqlio67 Date: Mon, 28 Oct 2024 10:43:04 +0200 Subject: docs(docs/interference-api.md): update image generation model in usage guide --- docs/interference-api.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'docs') diff --git a/docs/interference-api.md b/docs/interference-api.md index 617df9cd..1e51ba60 100644 --- a/docs/interference-api.md +++ b/docs/interference-api.md @@ -75,7 +75,7 @@ curl -X POST "http://localhost:1337/v1/images/generate" \ -H "Content-Type: application/json" \ -d '{ "prompt": "a white siamese cat", - "model": "dall-e-3", + "model": "flux", "response_format": "url" }' ``` @@ -86,7 +86,7 @@ curl -X POST "http://localhost:1337/v1/images/generate" \ -H "Content-Type: application/json" \ -d '{ "prompt": "a white siamese 
cat", - "model": "dall-e-3", + "model": "flux", "response_format": "b64_json" }' ``` -- cgit v1.2.3 From e79c8b01f58d21502c962f38c804bf81196f89fb Mon Sep 17 00:00:00 2001 From: kqlio67 Date: Tue, 29 Oct 2024 22:03:05 +0200 Subject: Update (docs/async_client.md docs/client.md docs/interference-api.md g4f/client/client.py) --- docs/async_client.md | 4 +--- docs/client.md | 7 ++----- docs/interference-api.md | 2 +- 3 files changed, 4 insertions(+), 9 deletions(-) (limited to 'docs') diff --git a/docs/async_client.md b/docs/async_client.md index 357b0d86..0719a463 100644 --- a/docs/async_client.md +++ b/docs/async_client.md @@ -3,7 +3,7 @@ The G4F async client API is a powerful asynchronous interface for interacting wi ## Compatibility Note -The G4F async client API is designed to be compatible with the OpenAI and Anthropic API, making it easy for developers familiar with OpenAI's or Anthropic's interface to transition to G4F. +The G4F async client API is designed to be compatible with the OpenAI API, making it easy for developers familiar with OpenAI's interface to transition to G4F. 
## Table of Contents - [Introduction](#introduction) - [Key Features](#key-features) - [Getting Started](#getting-started) - [Initializing the Client](#initializing-the-client) - [Creating Chat Completions](#creating-chat-completions) - [Configuration](#configuration) - [Usage Examples](#usage-examples) - [Text Completions](#text-completions) @@ -57,7 +57,6 @@ client = Client( **Here’s an example of creating a chat completion:** ```python response = await async_client.chat.completions.create( - system="You are a helpful assistant.", model="gpt-3.5-turbo", messages=[ { @@ -70,7 +69,6 @@ response = await async_client.chat.completions.create( ``` **This example:** - - Sets a system message to define the assistant's role - Sends a specific prompt: `Say this is a test` - Shows where to pass optional parameters such as temperature and max_tokens - Returns a complete, non-streamed response diff --git a/docs/client.md b/docs/client.md index b4f351d3..388b2e4b 100644 --- a/docs/client.md +++ b/docs/client.md @@ -23,7 +23,7 @@ ## Introduction -Welcome to the G4F Client API, a cutting-edge tool for seamlessly integrating advanced AI capabilities into your Python applications. This guide is designed to facilitate your transition from using the OpenAI or Anthropic client to the G4F Client, offering enhanced features while maintaining compatibility with the existing OpenAI and Anthropic API. +Welcome to the G4F Client API, a cutting-edge tool for seamlessly integrating advanced AI capabilities into your Python applications. This guide is designed to facilitate your transition from using the OpenAI client to the G4F Client, offering enhanced features while maintaining compatibility with the existing OpenAI API. ## Getting Started ### Switching to G4F Client @@ -43,7 +43,7 @@ from g4f.client import Client as OpenAI -The G4F Client preserves the same familiar API interface as OpenAI or Anthropic, ensuring a smooth transition process. +The G4F Client preserves the same familiar API interface as OpenAI, ensuring a smooth transition process. ## Initializing the Client To utilize the G4F Client, create a new instance. 
**Below is an example showcasing custom providers:** @@ -62,7 +62,6 @@ client = Client( **Here’s an example of creating a chat completion:** ```python response = client.chat.completions.create( - system="You are a helpful assistant.", model="gpt-3.5-turbo", messages=[ { @@ -75,7 +74,6 @@ response = client.chat.completions.create( ``` **This example:** - - Sets a system message to define the assistant's role - Sends a specific prompt: `Say this is a test` - Shows where to pass optional parameters such as temperature and max_tokens - Returns a complete, non-streamed response @@ -313,7 +311,6 @@ while True: try: # Get GPT's response response = client.chat.completions.create( - system="You are a helpful assistant.", messages=messages, model=g4f.models.default, ) diff --git a/docs/interference-api.md b/docs/interference-api.md index 1e51ba60..2e18e7b5 100644 --- a/docs/interference-api.md +++ b/docs/interference-api.md @@ -15,7 +15,7 @@ ## Introduction -The G4F Interference API is a powerful tool that allows you to serve other OpenAI integrations using G4F (Gpt4free). It acts as a proxy, translating requests intended for the OpenAI and Anthropic API into requests compatible with G4F providers. This guide will walk you through the process of setting up, running, and using the Interference API effectively. +The G4F Interference API is a powerful tool that allows you to serve other OpenAI integrations using G4F (Gpt4free). It acts as a proxy, translating requests intended for the OpenAI API into requests compatible with G4F providers. This guide will walk you through the process of setting up, running, and using the Interference API effectively. ## Running the Interference API -- cgit v1.2.3