GEO

Latest Articles

UltraRAG: Tsinghua University's Zero-Code RAG Framework, Transforming AI Knowledge-Augmented Application Development

UltraRAG is a comprehensive RAG framework developed by Tsinghua University and partner teams, featuring a zero-code WebUI, automated knowledge-base adaptation, and a modular design that serves both research and production use. It integrates techniques such as KBAlign and DDR to optimize retrieval and generation performance across models and tasks.
Large AI Models · 2026/1/25
Read full article →
UltraRAG: A Low-Code Visual RAG Development Framework Built on the MCP Architecture

UltraRAG is a low-code RAG development framework built on the Model Context Protocol (MCP) architecture, emphasizing visual orchestration and reproducible evaluation workflows. It packages core components such as retrieval, generation, and evaluation as independent MCP servers, providing a transparent, repeatable development process through an interactive UI and pipeline builders.
Large AI Models · 2026/1/25
Read full article →
UltraRAG 2.0: An Open-Source MCP-Based Framework That Simplifies Complex RAG System Development with YAML Configuration

UltraRAG 2.0 is an open-source framework built on the Model Context Protocol (MCP) architecture that simplifies complex RAG system development through YAML configuration, enabling low-code implementation of multi-step reasoning, dynamic retrieval, and modular workflows. It addresses engineering bottlenecks in both research and production RAG applications.
Large AI Models · 2026/1/25
Read full article →
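The summary above centers on YAML-driven workflow definition. A purely illustrative sketch of what such a pipeline file might look like follows; the field names below are hypothetical placeholders, not UltraRAG 2.0's actual configuration schema.

```yaml
# Hypothetical RAG pipeline definition -- field names are illustrative,
# not UltraRAG 2.0's real schema.
pipeline:
  - retrieve:            # initial dense retrieval step
      top_k: 5
  - generate:            # draft an answer from retrieved passages
      model: my-llm
  - branch:              # conditional logic: widen retrieval if confidence is low
      condition: "confidence < 0.7"
      then:
        - retrieve:
            top_k: 20
        - generate:
            model: my-llm
```

The point of such a format is that multi-step reasoning and conditional retrieval become declarative configuration rather than orchestration code.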
A Practical RAG Guide: Mechanisms and Optimization Strategies for Accurate LLM Deployment in 2024

RAG (Retrieval-Augmented Generation) enhances large language models by integrating a retrieval step that supplies factual grounding and contextual references, mitigating hallucination and improving response accuracy and reliability. This article analyzes how RAG operates in practice and the common challenges that arise, offering guidance for deploying large models accurately.
Large AI Models · 2026/1/24
Read full article →
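The retrieve-then-generate loop described in the summary above can be sketched with a toy keyword-overlap retriever; the corpus, scoring, and prompt template here are illustrative placeholders, not taken from the article.

```python
# Minimal retrieve-then-generate sketch: score documents by word
# overlap with the query, then prepend the best match to the prompt
# so the model's answer is grounded in retrieved text.
corpus = {
    "doc1": "RAG augments a language model with retrieved passages",
    "doc2": "Transformers use self-attention over token embeddings",
}

def retrieve(query: str, docs: dict[str, str]) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs.values(), key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    """Ground the generation step in the retrieved context."""
    context = retrieve(query, corpus)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

print(build_prompt("How does RAG use retrieved passages?"))
```

Production systems replace the keyword overlap with dense-embedding similarity, but the control flow (retrieve, then condition the prompt) is the same.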
Knowledge Graphs Beyond LLM Limitations: A 2024 Guide to Graph RAG

Graph RAG (Retrieval-Augmented Generation) enhances LLM performance by combining knowledge graphs with retrieval mechanisms, addressing limitations such as domain-specific knowledge gaps and real-time information access. It combines entity extraction, subgraph retrieval, and LLM synthesis to provide accurate, context-aware responses.
LLMs · 2026/1/24
Read full article →
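The three stages named in the summary above (entity extraction, subgraph retrieval, LLM synthesis) can be sketched with a toy triple store; the graph contents, entity vocabulary, and helper names are illustrative placeholders.

```python
# Toy Graph RAG sketch: link known entities in the question, pull their
# one-hop neighborhood from a small knowledge graph, and pass the
# resulting triples to an LLM prompt for synthesis.
GRAPH = {
    ("Llama3", "developed_by", "Meta"),
    ("Llama3", "model_type", "LLM"),
    ("Meta", "headquartered_in", "Menlo Park"),
}
ENTITIES = {"Llama3", "Meta"}

def extract_entities(question: str) -> set[str]:
    """Naive entity linking: substring match against a known vocabulary."""
    return {e for e in ENTITIES if e.lower() in question.lower()}

def retrieve_subgraph(entities: set[str]) -> list[tuple[str, str, str]]:
    """One-hop retrieval: keep triples whose subject or object was mentioned."""
    return sorted(t for t in GRAPH if t[0] in entities or t[2] in entities)

def build_prompt(question: str) -> str:
    """Serialize the retrieved subgraph as facts for the LLM to synthesize."""
    triples = retrieve_subgraph(extract_entities(question))
    facts = "\n".join(f"{s} -[{p}]-> {o}" for s, p, o in triples)
    return f"Facts:\n{facts}\nQuestion: {question}"

print(build_prompt("Who developed Llama3?"))
```

Real systems use NER models for extraction and multi-hop graph traversal for retrieval, but the shape of the pipeline is the same.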
Retrieval-Augmented Generation (RAG) 2024 Guide: Principles, Modules, and Applications

RAG (Retrieval-Augmented Generation) is an AI technique that enhances large language models' performance on knowledge-intensive tasks by retrieving relevant information from external knowledge bases and injecting it into the prompt. This approach significantly improves answer accuracy, especially for tasks requiring specialized knowledge.
Large AI Models · 2026/1/24
Read full article →
Running Llama3 70B on a 4GB GPU: The AirLLM Framework Puts High-End AI Within Reach

This article demonstrates how to run the powerful Llama3 70B open-source LLM with just 4GB of GPU memory using the AirLLM framework, making cutting-edge AI accessible to users with limited hardware resources.
Large AI Models · 2026/1/24
Read full article →
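The memory trick behind frameworks like AirLLM is layered inference: instead of holding all transformer layers in GPU memory at once, load one layer's weights at a time, apply it, and discard it before loading the next. The sketch below illustrates only this idea; the stand-in matrices, function names, and shapes are not AirLLM's actual API or real Llama3 weights.

```python
# Layer-by-layer inference sketch: memory use stays at one layer's
# worth of weights regardless of how many layers the model has.
import numpy as np

def load_layer(path_stub: str, i: int) -> np.ndarray:
    """Stand-in for reading one layer's weights from disk."""
    rng = np.random.default_rng(i)        # deterministic fake weights
    return rng.standard_normal((4, 4)) * 0.1

def run_layer_by_layer(x: np.ndarray, n_layers: int) -> np.ndarray:
    """Forward pass holding only one layer in memory at a time."""
    for i in range(n_layers):
        w = load_layer("weights", i)      # load ONE layer's weights
        x = np.tanh(x @ w)                # forward through that layer
        del w                             # free it before the next load
    return x

out = run_layer_by_layer(np.ones(4), n_layers=8)
print(out.shape)
```

The trade-off is speed: every token pays the cost of streaming all layers from disk, which is why this approach targets accessibility rather than throughput.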
UltraRAG 2.0: A Low-Code, High-Performance RAG Framework on the MCP Architecture That Speeds Up Complex Reasoning System Development 20×

UltraRAG 2.0 is a novel RAG framework built on the Model Context Protocol (MCP) architecture, designed to drastically reduce the engineering overhead of implementing complex multi-stage reasoning systems. Through componentized encapsulation and YAML-based workflow definitions, developers can build advanced systems with as little as 5% of the code required by traditional frameworks, while maintaining high performance and supporting features such as dynamic retrieval and conditional logic.
Large AI Models · 2026/1/24
Read full article →
OpenBMB: How Tsinghua University's Open-Source Community Drives Efficient Computation and Parameter-Efficient Fine-Tuning for LLMs

OpenBMB is an open-source community and toolset initiated with Tsinghua University's support in 2018, focused on building efficient computation tools for large-scale pre-trained language models. Its core contributions include parameter-efficient fine-tuning methods, and it has released notable projects such as UltraRAG 2.1, UltraEval-Audio v1.1.0, and the 4-billion-parameter AgentCPM-Explore model, which performs strongly on benchmarks.
Large AI Models · 2026/1/24
Read full article →