Archon: AI Agents That Build AI!

Archon is an innovative AI meta-agent system. Its core idea is to crawl the documentation of various frameworks and use it to build AI agents that can work with those frameworks. As the world's first "Agenteer" (agent engineer), it can autonomously build, refine, and optimize other AI agents.

https://github.com/coleam00/Archon

Advantages

  • Multiple deployment options: Docker container or local Python installation
  • Vector database support: uses Supabase to store document vectors
  • Multiple LLM options: OpenAI / Anthropic / OpenRouter APIs, or a local LLM via Ollama

Key Concept: Supabase

Supabase is an open-source Backend-as-a-Service (BaaS) platform designed to simplify application development. In Archon, it is mainly used to store and retrieve vectorized framework documentation, giving the agents their knowledge base.
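
To make this concrete, here is a minimal sketch of how vectorized documentation could be retrieved from Supabase with the supabase-py client and an OpenAI embedding model. The table name "site_pages", the RPC function "match_site_pages", and the returned column names are assumptions for illustration; Archon's actual schema may differ.

import os
from openai import OpenAI
from supabase import create_client

# Connect to Supabase and OpenAI (keys are read from the environment)
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])
openai_client = OpenAI()  # uses OPENAI_API_KEY

def retrieve_docs(question: str, match_count: int = 5):
    # Embed the question with the same model used at ingestion time
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=question,
    ).data[0].embedding

    # "match_site_pages" is a hypothetical pgvector similarity-search RPC
    result = supabase.rpc(
        "match_site_pages",
        {"query_embedding": embedding, "match_count": match_count},
    ).execute()
    return result.data

for chunk in retrieve_docs("How do I define a Pydantic AI agent?"):
    print(chunk["url"], chunk["content"][:80])  # column names are illustrative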

Local Deployment

Docker method

git clone https://github.com/coleam00/archon.git
cd archon
python run_docker.py

After deployment, open http://localhost:8501

Python method

git clone https://github.com/coleam00/archon.git
cd archon
python -m venv venv
source venv/bin/activate # Windows: venv\Scripts\activate
pip install -r requirements.txt
streamlit run streamlit_ui.py

Key Concept: AI IDE

An AI IDE is a development environment with built-in AI features, such as Windsurf, Cursor, and Cline. These tools provide intelligent capabilities like code completion and code-generation suggestions.

Environment Setup

After installation, configure the following in the Streamlit UI:

  1. Environment: set your API keys and choose models
  2. Database: configure the Supabase vector database
  3. Documentation crawling: crawl the Pydantic AI docs to build the knowledge base (a minimal sketch of this ingestion step follows the list)
  4. Agent service: start the service that generates agents
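
As a rough illustration of what the crawling step involves, the sketch below fetches one docs page, embeds its text, and inserts the chunk into Supabase. The single-chunk handling, the "site_pages" table, and its column names are assumptions for illustration only; Archon's real crawler cleans and chunks pages more carefully.

import os
import requests
from openai import OpenAI
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])
openai_client = OpenAI()

def ingest_page(url: str):
    # Fetch the page (a real crawler would strip HTML and split it into chunks)
    text = requests.get(url, timeout=30).text[:4000]

    # Embed the chunk with an OpenAI embedding model
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=text,
    ).data[0].embedding

    # Store the content together with its vector ("site_pages" is illustrative)
    supabase.table("site_pages").insert({
        "url": url,
        "content": text,
        "embedding": embedding,
    }).execute()

ingest_page("https://ai.pydantic.dev/")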

Key Concept: MCP

MCP stands for "Model Context Protocol", a standardized protocol that lets AI IDEs communicate with external AI services. With an MCP configuration in place, you can:

  • keep working in your favorite AI editor
  • call Archon's features directly from the editor
  • let Archon generate agent code for you and integrate it straight into your project (a sketch of a typical MCP server entry follows this list)
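
For reference, MCP-capable editors such as Cursor and Windsurf register servers through a small JSON config. The sketch below writes one such "mcpServers" entry from Python; the config path and the "mcp_server.py" launch command are assumptions for illustration, not Archon's documented setup.

import json
from pathlib import Path

# Hypothetical MCP server entry for an MCP-capable editor
config = {
    "mcpServers": {
        "archon": {
            "command": "python",
            "args": ["mcp_server.py"],  # hypothetical launch script
        }
    }
}

# Cursor reads per-user MCP servers from ~/.cursor/mcp.json
config_path = Path.home() / ".cursor" / "mcp.json"
config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config, indent=2))
print(f"Wrote MCP config to {config_path}")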

Usage

Once everything is configured, you can:

  1. Go to the Chat tab
  2. Describe the agent you want to build
  3. Archon plans and generates the necessary code

If MCP is configured, you can also use Archon's features directly from your AI IDE, letting Archon generate the agent code and integrate it into your project.

Summary

Through its approach of crawling and vectorizing documentation, Archon demonstrates the potential of meta-learning for building AI systems. It is both a practical development tool and an educational framework that shows how intelligent agent systems can evolve. As it continues to iterate and mature, Archon could become an important part of the AI agent development ecosystem.

Local Deployment Test of the DeepSeek Multimodal Model

Janus-Pro

WeChat: adoresever

VRAM requirements for the 1B model: about 6 GB for image understanding, about 9 GB for image generation

1. Create the environment and install base dependencies

# Create a Python environment (3.8 or newer recommended)
conda create -n janus python=3.8
conda activate janus

# Clone the project code
git clone https://github.com/deepseek-ai/Janus.git
cd Janus

# Install the base dependencies
pip install -e .

# Install the Gradio UI dependencies (only needed for the web UI)
pip install -e .[gradio]

2. Run the demo (recommended: switch to the 1B model)

# Launch the demo
python demo/app_januspro.py

The full demo/app_januspro.py, modified to use the Janus-Pro-1B model, is shown below:

import gradio as gr
import torch
from transformers import AutoConfig, AutoModelForCausalLM
from janus.models import MultiModalityCausalLM, VLChatProcessor
from janus.utils.io import load_pil_images
from PIL import Image

import numpy as np
import os
# import spaces  # Import spaces for ZeroGPU compatibility


# Load model and processor
model_path = "deepseek-ai/Janus-Pro-1B"
config = AutoConfig.from_pretrained(model_path)
language_config = config.language_config
language_config._attn_implementation = 'eager'
vl_gpt = AutoModelForCausalLM.from_pretrained(model_path,
                                             language_config=language_config,
                                             trust_remote_code=True)
if torch.cuda.is_available():
    vl_gpt = vl_gpt.to(torch.bfloat16).cuda()
else:
    vl_gpt = vl_gpt.to(torch.float16)

vl_chat_processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer
cuda_device = 'cuda' if torch.cuda.is_available() else 'cpu'

@torch.inference_mode()
# @spaces.GPU(duration=120) 
# Multimodal Understanding function
def multimodal_understanding(image, question, seed, top_p, temperature):
    # Clear CUDA cache before generating
    torch.cuda.empty_cache()
    
    # set seed
    torch.manual_seed(seed)
    np.random.seed(seed)
    torch.cuda.manual_seed(seed)
    
    conversation = [
        {
            "role": "<|User|>",
            "content": f"<image_placeholder>\n{question}",
            "images": [image],
        },
        {"role": "<|Assistant|>", "content": ""},
    ]
    
    pil_images = [Image.fromarray(image)]
    prepare_inputs = vl_chat_processor(
        conversations=conversation, images=pil_images, force_batchify=True
    ).to(cuda_device, dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float16)
    
    
    inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
    
    outputs = vl_gpt.language_model.generate(
        inputs_embeds=inputs_embeds,
        attention_mask=prepare_inputs.attention_mask,
        pad_token_id=tokenizer.eos_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        max_new_tokens=512,
        do_sample=False if temperature == 0 else True,
        use_cache=True,
        temperature=temperature,
        top_p=top_p,
    )
    
    answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True)
    return answer


# Autoregressively sample image tokens with classifier-free guidance
# (even batch rows carry the conditional prompt, odd rows the unconditional one)
def generate(input_ids,
             width,
             height,
             temperature: float = 1,
             parallel_size: int = 5,
             cfg_weight: float = 5,
             image_token_num_per_image: int = 576,
             patch_size: int = 16):
    # Clear CUDA cache before generating
    torch.cuda.empty_cache()
    
    tokens = torch.zeros((parallel_size * 2, len(input_ids)), dtype=torch.int).to(cuda_device)
    for i in range(parallel_size * 2):
        tokens[i, :] = input_ids
        if i % 2 != 0:
            tokens[i, 1:-1] = vl_chat_processor.pad_id
    inputs_embeds = vl_gpt.language_model.get_input_embeddings()(tokens)
    generated_tokens = torch.zeros((parallel_size, image_token_num_per_image), dtype=torch.int).to(cuda_device)

    pkv = None
    for i in range(image_token_num_per_image):
        with torch.no_grad():
            outputs = vl_gpt.language_model.model(inputs_embeds=inputs_embeds,
                                                use_cache=True,
                                                past_key_values=pkv)
            pkv = outputs.past_key_values
            hidden_states = outputs.last_hidden_state
            logits = vl_gpt.gen_head(hidden_states[:, -1, :])
            logit_cond = logits[0::2, :]
            logit_uncond = logits[1::2, :]
            logits = logit_uncond + cfg_weight * (logit_cond - logit_uncond)
            probs = torch.softmax(logits / temperature, dim=-1)
            next_token = torch.multinomial(probs, num_samples=1)
            generated_tokens[:, i] = next_token.squeeze(dim=-1)
            next_token = torch.cat([next_token.unsqueeze(dim=1), next_token.unsqueeze(dim=1)], dim=1).view(-1)

            img_embeds = vl_gpt.prepare_gen_img_embeds(next_token)
            inputs_embeds = img_embeds.unsqueeze(dim=1)

    

    patches = vl_gpt.gen_vision_model.decode_code(generated_tokens.to(dtype=torch.int),
                                                 shape=[parallel_size, 8, width // patch_size, height // patch_size])

    return generated_tokens.to(dtype=torch.int), patches

# Map decoded image patches from [-1, 1] floats to uint8 HWC images
def unpack(dec, width, height, parallel_size=5):
    dec = dec.to(torch.float32).cpu().numpy().transpose(0, 2, 3, 1)
    dec = np.clip((dec + 1) / 2 * 255, 0, 255)

    visual_img = np.zeros((parallel_size, width, height, 3), dtype=np.uint8)
    visual_img[:, :, :] = dec

    return visual_img



@torch.inference_mode()
# @spaces.GPU(duration=120)  # Specify a duration to avoid timeout
def generate_image(prompt,
                   seed=None,
                   guidance=5,
                   t2i_temperature=1.0):
    # Clear CUDA cache and avoid tracking gradients
    torch.cuda.empty_cache()
    # Set the seed for reproducible results
    if seed is not None:
        torch.manual_seed(seed)
        torch.cuda.manual_seed(seed)
        np.random.seed(seed)
    width = 384
    height = 384
    parallel_size = 5
    
    with torch.no_grad():
        messages = [{'role': '<|User|>', 'content': prompt},
                    {'role': '<|Assistant|>', 'content': ''}]
        text = vl_chat_processor.apply_sft_template_for_multi_turn_prompts(conversations=messages,
                                                                   sft_format=vl_chat_processor.sft_format,
                                                                   system_prompt='')
        text = text + vl_chat_processor.image_start_tag
        
        input_ids = torch.LongTensor(tokenizer.encode(text))
        output, patches = generate(input_ids,
                                   width // 16 * 16,
                                   height // 16 * 16,
                                   cfg_weight=guidance,
                                   parallel_size=parallel_size,
                                   temperature=t2i_temperature)
        images = unpack(patches,
                        width // 16 * 16,
                        height // 16 * 16,
                        parallel_size=parallel_size)

        return [Image.fromarray(images[i]).resize((768, 768), Image.LANCZOS) for i in range(parallel_size)]
        

# Gradio interface
with gr.Blocks() as demo:
    gr.Markdown(value="# Multimodal Understanding")
    with gr.Row():
        image_input = gr.Image()
        with gr.Column():
            question_input = gr.Textbox(label="Question")
            und_seed_input = gr.Number(label="Seed", precision=0, value=42)
            top_p = gr.Slider(minimum=0, maximum=1, value=0.95, step=0.05, label="top_p")
            temperature = gr.Slider(minimum=0, maximum=1, value=0.1, step=0.05, label="temperature")
        
    understanding_button = gr.Button("Chat")
    understanding_output = gr.Textbox(label="Response")

    examples_inpainting = gr.Examples(
        label="Multimodal Understanding examples",
        examples=[
            [
                "explain this meme",
                "images/doge.png",
            ],
            [
                "Convert the formula into latex code.",
                "images/equation.png",
            ],
        ],
        inputs=[question_input, image_input],
    )
    
        
    gr.Markdown(value="# Text-to-Image Generation")

    
    
    with gr.Row():
        cfg_weight_input = gr.Slider(minimum=1, maximum=10, value=5, step=0.5, label="CFG Weight")
        t2i_temperature = gr.Slider(minimum=0, maximum=1, value=1.0, step=0.05, label="temperature")

    prompt_input = gr.Textbox(label="Prompt. (Prompt in more detail can help produce better images!)")
    seed_input = gr.Number(label="Seed (Optional)", precision=0, value=12345)

    generation_button = gr.Button("Generate Images")

    image_output = gr.Gallery(label="Generated Images", columns=2, rows=2, height=900)

    examples_t2i = gr.Examples(
        label="Text to image generation examples.",
        examples=[
            "Master shifu racoon wearing drip attire as a street gangster.",
            "The face of a beautiful girl",
            "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k",
            "A glass of red wine on a reflective surface.",
            "A cute and adorable baby fox with big brown eyes, autumn leaves in the background enchanting,immortal,fluffy, shiny mane,Petals,fairyism,unreal engine 5 and Octane Render,highly detailed, photorealistic, cinematic, natural colors.",
            "The image features an intricately designed eye set against a circular backdrop adorned with ornate swirl patterns that evoke both realism and surrealism. At the center of attention is a strikingly vivid blue iris surrounded by delicate veins radiating outward from the pupil to create depth and intensity. The eyelashes are long and dark, casting subtle shadows on the skin around them which appears smooth yet slightly textured as if aged or weathered over time.\n\nAbove the eye, there's a stone-like structure resembling part of classical architecture, adding layers of mystery and timeless elegance to the composition. This architectural element contrasts sharply but harmoniously with the organic curves surrounding it. Below the eye lies another decorative motif reminiscent of baroque artistry, further enhancing the overall sense of eternity encapsulated within each meticulously crafted detail. \n\nOverall, the atmosphere exudes a mysterious aura intertwined seamlessly with elements suggesting timelessness, achieved through the juxtaposition of realistic textures and surreal artistic flourishes. Each component\u2014from the intricate designs framing the eye to the ancient-looking stone piece above\u2014contributes uniquely towards creating a visually captivating tableau imbued with enigmatic allure.",
        ],
        inputs=prompt_input,
    )
    
    understanding_button.click(
        multimodal_understanding,
        inputs=[image_input, question_input, und_seed_input, top_p, temperature],
        outputs=understanding_output
    )
    
    generation_button.click(
        fn=generate_image,
        inputs=[prompt_input, seed_input, cfg_weight_input, t2i_temperature],
        outputs=image_output
    )

demo.launch(share=True)
# demo.queue(concurrency_count=1, max_size=10).launch(server_name="0.0.0.0", server_port=37906, root_path="/path")

Coagents

WeChat: adoresever

Differences for the qa example

# Fix the pyproject.toml file (if needed)
# Remove the [project] section and make sure the configuration below is used

[tool.poetry]
name = "my_agent"
version = "0.1.0"
description = "Starter"
authors = ["Ariel Weinberger <weinberger.ariel@gmail.com>"]
license = "MIT"
packages = [
    { include = "my_agent" }
]

[tool.poetry.dependencies]
python = "^3.12"
langchain-openai = "^0.2.1"
langchain-anthropic = "^0.2.1"
langchain = "^0.3.1"
openai = "^1.51.0"
langchain-community = "^0.3.1"
copilotkit = "0.1.33"
uvicorn = "^0.31.0"
python-dotenv = "^1.0.1"
langchain-core = "^0.3.25"
langgraph-cli = {extras = ["inmem"], version = "^0.1.64"}

[tool.poetry.scripts]
demo = "my_agent.demo:main"

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"

Then run:

poetry lock

poetry install

Everything else is the same!

coagents-research-canvas installation process

Clone the source

git clone https://github.com/CopilotKit/CopilotKit.git

cd into the example directory

Virtual environment

conda create -n coagents-re python=3.12

conda activate coagents-re

cd agent

poetry install

Add OPENAI_API_KEY= and TAVILY_API_KEY= to .env

Get the API keys here: https://tavily.com/ and https://platform.openai.com/api-keys

Run: poetry run demo

cd ui

pnpm add @copilotkit/react-ui@1.4.8-coagents-v0-3.1 @copilotkit/react-core@1.4.8-coagents-v0-3.1

pnpm i

Set up the .env file here as well

Run: pnpm run dev

If you hit a port error, the video shows how to fix it!

Smolagents Video Code

WeChat: adoresever

Text-to-SQL queries with Gemini

from smolagents import CodeAgent
from smolagents import tool, LiteLLMModel
from sqlalchemy import create_engine, MetaData, Table, Column, String, Integer, Float, insert, text

engine = create_engine("sqlite:///:memory:")
metadata = MetaData()

products = Table(
    "products",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(50)),
    Column("category", String(20)),
    Column("price", Float),
    Column("stock", Integer)
)

sales = Table(
    "sales",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("product_id", Integer),
    Column("quantity", Integer),
    Column("sale_date", String(10))
)

metadata.create_all(engine)

# Sample data
product_data = [
    {"id": 1, "name": "游戏本", "category": "电脑", "price": 6999.0, "stock": 100},
    {"id": 2, "name": "机械键盘", "category": "配件", "price": 299.0, "stock": 50},
    {"id": 3, "name": "游戏手柄", "category": "配件", "price": 199.0, "stock": 30},
    {"id": 4, "name": "办公本", "category": "电脑", "price": 4999.0, "stock": 80}
]

sales_data = [
    {"id": 1, "product_id": 1, "quantity": 2, "sale_date": "2024-01-01"},
    {"id": 2, "product_id": 2, "quantity": 5, "sale_date": "2024-01-02"},
    {"id": 3, "product_id": 1, "quantity": 1, "sale_date": "2024-01-03"},
    {"id": 4, "product_id": 4, "quantity": 3, "sale_date": "2024-01-03"}
]

with engine.begin() as conn:
    for item in product_data:
        conn.execute(insert(products).values(item))
    for item in sales_data:
        conn.execute(insert(sales).values(item))

@tool
def sql_engine(query: str) -> str:
    """执行SQL查询。

    Args:
        query: SQL查询语句

    Returns:
        str: 查询结果
    """
    try:
        with engine.connect() as conn:
            result = conn.execute(text(query))
            columns = result.keys()
            rows = result.fetchall()
            
            if not rows:
                return "查询没有返回任何结果"

            output = []
            output.append(" | ".join(str(col) for col in columns))
            output.append("-" * (sum(len(str(col)) for col in columns) + 3 * (len(columns) - 1)))
            
            for row in rows:
                output.append(" | ".join(str(val) for val in row))
                
            return "\n".join(output)
    except Exception as e:
        return f"SQL执行错误: {str(e)}"

model = LiteLLMModel(model_id="gemini/gemini-2.0-flash-exp")  # requires GEMINI_API_KEY in the environment
# Alternative local model: ollama/qwen2.5:14b
agent = CodeAgent(
    tools=[sql_engine],
    model=model,
    verbose=True
)

test_query = "请查找库存量最多的三种商品"
print("执行查询:", test_query)
result = agent.run(test_query)
print("查询结果:")
print(result)

Web search with an Ollama model and DuckDuckGo

from smolagents import CodeAgent, DuckDuckGoSearchTool  
from smolagents import tool, TransformersModel, LiteLLMModel
from typing import Optional

model = LiteLLMModel(
    model_id="ollama/qwen2.5:14b",  # 使用 Ollama 格式的模型 ID
    api_base="http://localhost:11434"  # Ollama 的本地地址
)

# Create the agent instance
agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],
    model=model,
    verbose=True  
)

# Run the query
print(agent.run("China's sixth-generation fighter jet"))