Find beginner-friendly open source issues to contribute to
Want to see how your project or issue page will look in Google search?
Try our SERP Preview Generator to optimize your title and meta description for better visibility!
Showing 1–20 of 372,663 issues
1,403
## 🧠 Add New Trivia Question

**Category:** Community Contribution - Trivia
**Difficulty:** Easy (good first issue!)
**Estimated Time:** <1 min

---

### 🎯 Your Task

Add this trivia question to our growing quiz bank!

### The Trivia Question

**Question:** Which Japanese dish is made of raw fish over rice?

**Answers:**
1. Sushi
2. Tempura
3. Tonkatsu
4. Okonomiyaki

**Correct Answer Index:** 0

### 📝 Instructions

1. Open [`data/community-content/data/community-content/japan-trivia-easy.json`](../blob/main/data/community-content/data/community-content/japan-trivia-easy.json)
2. Add this trivia object to the end of the array (before the closing `]`)
3. Make sure to add a comma after the previous last item

```json
{
  "question": "Which Japanese dish is made of raw fish over rice?",
  "difficulty": "easy",
  "answers": [
    "Sushi",
    "Tempura",
    "Tonkatsu",
    "Okonomiyaki"
  ],
  "correctIndex": 0
}
```

4. Save the file and commit the changes
5. Submit a Pull Request with title: `content: add new trivia question`
6. Link this issue using `Closes #<issue_number>`
7. Star our repo ⭐, drink some delicious bubble tea 🍹 and wait for review!

---

**Questions?** Comment below and we'll help! 🙌
Created: 2/4/2026
1,403
## 🎌 Add Japanese Proverb (ことわざ)

**Category:** Community Contribution - Proverb
**Difficulty:** Easy (good first issue!)
**Estimated Time:** <1 min

---

### 🎯 Your Task

Add this traditional Japanese proverb to help learners understand Japanese wisdom!

### The Proverb

| Japanese | Reading | English |
|----------|---------|---------|
| **後悔先に立たず** | Koukai saki ni tatazu | Regret does not stand before |

> 💡 **Meaning:** It's too late for regrets

### 📝 Instructions

1. Open [`data/community-content/japanese-proverbs.json`](../blob/main/data/community-content/japanese-proverbs.json)
2. Add this proverb object to the end of the array (before the closing `]`)
3. Make sure to add a comma after the previous last item

```json
{
  "japanese": "後悔先に立たず",
  "romaji": "Koukai saki ni tatazu",
  "english": "Regret does not stand before",
  "meaning": "It's too late for regrets"
}
```

4. Save the file and commit the changes
5. Submit a Pull Request with title: `content: add new japanese proverb`
6. Link this issue using `Closes #<issue_number>`
7. Star our repo ⭐, drink some delicious bubble tea 🍹 and wait for review!

---

**Questions?** Comment below and we'll help! 🙌
Created: 2/4/2026
## Description

The `package.json.hbs` template file for `turbo gen` is not linked/pinned to the local packages. This is mostly an inconvenience, not an actual bug. We would prefer the versions in this templating/handlebars file to be updated when new versions are published/released.
Created: 2/4/2026
## Description

When the template is used to bootstrap a new project, the first thing consumers generally do is update/bump dependencies (using `pnpm up -r --latest`). After this is done, the code needs to be updated accordingly. To find issues with the implementation of new major versions, checks are run. The initial check (typechecking) fails with `Cannot find module @org/pkg`, because the local packages haven't been built yet. Aside from this initial issue, consumer-project typechecking should always use the latest relevant build of the local packages - so a build of the local packages should be run before typechecking.
Created: 2/4/2026
76,014
### Description

We currently log at `INFO` level whenever an index is skipped due to the `index.lifecycle.skip` setting. We should log at `DEBUG`, because this can get quite chatty with the force-merge-clone behavior.
Created: 2/4/2026 • 1 comment
1,403
## 🎋 Add New Japan Fact

**Category:** Community Contribution - Fun Fact
**Difficulty:** Easy (good first issue!)
**Estimated Time:** <1 min

---

### 🎯 Your Task

Add this interesting fact about Japan to our collection!

### The Fact

> Japan has more Michelin-starred restaurants than any other country - Tokyo alone has more stars than Paris.

### 📝 Instructions

1. Open [`data/community-content/japan-facts.json`](../blob/main/data/community-content/japan-facts.json)
2. Add this fact to the end of the array (before the closing `]`)
3. Make sure to add a comma after the previous last item
4. Save the file and commit the changes
5. Submit a Pull Request with title: `content: add new japan fact`
6. Link this issue using `Closes #<issue_number>`
7. Star our repo ⭐, drink some delicious bubble tea 🍹 and wait for review!

---

**Questions?** Comment below and we'll help! 🙌
Created: 2/4/2026
https://entra.microsoft.com/f6a9f81f-e454-46a9-8575-1a9f096917ef
Created: 2/4/2026
Shouldn't it connect and subscribe only at the beginning and not every second? The output is currently like this:

INFO:mqtt_listener:Connected to MQTT Broker!
INFO:mqtt_listener:Subscribed to itb/surya/water/#
INFO:mqtt_listener:Connected to MQTT Broker!
INFO:mqtt_listener:Subscribed to itb/surya/water/#
INFO:mqtt_listener:Connected to MQTT Broker!
INFO:mqtt_listener:Subscribed to itb/surya/water/#
[...the same two lines repeat many more times...]
INFO:ai.iot.mqtt_bridge:Stored reading for Coba_Surya at 2026-02-04 19:01:59+00:00
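The listener in this project appears to be Python (per the `mqtt_listener` log prefix), but the general fix pattern is the same with any MQTT client: create and connect the client once at startup, subscribe inside the on-connect callback, and let the client's event loop deliver messages, rather than reconnecting and resubscribing inside a periodic poll. A minimal sketch of that pattern in TypeScript with the `mqtt` npm package; the broker URL is a placeholder and the topic is taken from the logs above:

```typescript
import mqtt from "mqtt";

// Connect exactly once at startup; the client keeps the connection alive
// and reconnects on its own if the broker drops it.
const client = mqtt.connect("mqtt://broker.example.com:1883"); // placeholder URL

client.on("connect", () => {
  // Subscribing here (rather than in a timer/poll loop) means it happens
  // once per connection, including after an automatic reconnect.
  client.subscribe("itb/surya/water/#", (err) => {
    if (err) {
      console.error("Subscribe failed:", err);
    } else {
      console.info("Subscribed to itb/surya/water/#");
    }
  });
});

client.on("message", (topic, payload) => {
  // Handle each incoming reading as it arrives; no per-second reconnect needed.
  console.info(`Reading on ${topic}: ${payload.toString()}`);
});
```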
Created: 2/4/2026
https://github.com/github/haikus-for-codespaces
Created: 2/4/2026
### Problem

The GET /users/:id endpoint currently sends a response even when the requested user does not exist. It should properly handle the "user not found" case.

### Current Behavior

- Returns an empty or incorrect response
- Does not use a proper HTTP status code

### Expected Behavior

- If the user is not found, return:
  - Status code: 404
  - A JSON message explaining the error
- Ensure only one response is sent

### File to Update

controllers/user.controller.js
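A minimal sketch of how the handler in `controllers/user.controller.js` might handle the not-found case, shown in TypeScript for clarity (the actual file is plain JavaScript, and the `User` model import and method names are assumptions):

```typescript
import { Request, Response, NextFunction } from "express";
import { User } from "../models/user.model"; // hypothetical model import

export async function getUserById(req: Request, res: Response, next: NextFunction) {
  try {
    const user = await User.findById(req.params.id);

    if (!user) {
      // Return early so only one response is ever sent.
      return res.status(404).json({ error: `User ${req.params.id} not found` });
    }

    return res.status(200).json(user);
  } catch (err) {
    next(err); // Let the error-handling middleware produce the 500 response.
  }
}
```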
Created: 2/4/2026
### Problem

The POST API currently allows creating users with an empty name or email.

### Expected Behavior

- If name or email is empty, return a 400 error
- Send a meaningful error message

### File to Check

controllers/user.controller.js
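As with the issue above, a hedged sketch of the create handler rejecting blank fields, in TypeScript (the project's controller is plain JavaScript, and the `User` model import is an assumption):

```typescript
import { Request, Response, NextFunction } from "express";
import { User } from "../models/user.model"; // hypothetical model import

export async function createUser(req: Request, res: Response, next: NextFunction) {
  try {
    const { name, email } = req.body ?? {};

    // Treat missing, non-string, or whitespace-only values as empty.
    const isBlank = (v: unknown) => typeof v !== "string" || v.trim() === "";

    if (isBlank(name) || isBlank(email)) {
      return res.status(400).json({
        error: "Both 'name' and 'email' are required and must not be empty.",
      });
    }

    const user = await User.create({ name: name.trim(), email: email.trim() });
    return res.status(201).json(user);
  } catch (err) {
    next(err);
  }
}
```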
Created: 2/4/2026 • 3 comments
https://rssfeed.azure.status.microsoft/en-us/status/feed/
Created: 2/4/2026
Currently, the modulator only produces Bit Error Rate Test (BERT) frames. We need to be able to produce BERT frames at the modem level, but we also need to be able to connect to Interlocutor for Opulent Voice frames (voice, command, text, data).
Created: 2/4/2026 • 2 comments
## Feature Description

Add support for non-streaming (synchronous) responses when using the memory-enriched chat endpoint. Currently, the `/api/conversations/{cid}/messages/stream` endpoint only supports Server-Sent Events (SSE) streaming. A companion `/api/conversations/{cid}/messages` endpoint should return a complete JSON response with the same memory integration features.

## Motivation

Streaming responses are excellent for real-time UI feedback, but they introduce complexity for several common use cases:

1. **API Integrations**: External services calling the API may not support or need SSE streaming
2. **Batch Processing**: When processing multiple conversations or bulk operations, waiting for complete responses simplifies orchestration
3. **Testing**: Non-streaming endpoints are significantly easier to test and validate
4. **Simpler Clients**: Mobile apps, CLIs, and simple HTTP clients may prefer standard request/response patterns
5. **Serverless/Edge**: Some serverless environments have limitations with long-running streaming connections

The `LLMService` already supports both modes via `chat_completion()` and `chat_completion_stream()`, but the memory-integrated conversation endpoint only exposes streaming.

## Proposed Solution

Add a non-streaming POST endpoint alongside the existing streaming endpoint:

```python
@app.post("/api/conversations/{cid}/messages")
async def send_message(
    cid: str,
    body: MessageRequest,
    user: dict = Depends(get_current_user),
    llm_service: LLMService = Depends(get_llm_service_dependency),
    memory_service: BaseMemoryService = Depends(get_memory_service_dependency),
) -> JSONResponse:
    """
    Send a message with memory-enriched AI response (non-streaming).

    Returns complete JSON response:
    {
        "message_id": "...",
        "response": "Complete AI response text",
        "memories_used": 5,
        "reasoning": "Optional thinking/reasoning content",
        "metadata": {...}
    }
    """
    # 1. Search relevant memories
    memories = await memory_service.search(body.message, user_id=user["user_id"])

    # 2. Build context-enriched prompt
    messages = build_memory_enriched_messages(body.message, memories)

    # 3. Get complete response (non-streaming)
    response_text = await llm_service.chat_completion(messages=messages)

    # 4. Save to memory if configured
    if body.save_to_memory:
        await memory_service.add(messages=[...], user_id=user["user_id"])

    return JSONResponse({
        "message_id": str(uuid4()),
        "response": response_text,
        "memories_used": len(memories),
    })
```

### Response Schema

```json
{
  "message_id": "uuid-string",
  "response": "Complete AI response text",
  "memories_used": 5,
  "memories": [
    {
      "id": "memory-id-1",
      "text": "Relevant memory snippet...",
      "score": 0.92
    }
  ],
  "reasoning": "Optional: AI thinking/reasoning content if reasoning_effort was specified",
  "conversation_id": "cid",
  "created_at": "2024-01-15T10:30:00Z"
}
```

## Additional Context

The current streaming endpoint in `examples/advanced/sso-multi-app/apps/sso-app-3/web.py` shows the full memory integration flow:

- Memory search with `memory_service.search()`
- Building enriched prompts with memory context
- Streaming via `llm_service.chat_completion_stream()`
- Saving responses back to memory

The non-streaming version should maintain feature parity:

- Same memory search and context building
- Same response quality
- Same memory persistence options
- Optional reasoning/thinking content in response

### Current Architecture Reference

```
mdb_engine/
├── llm/
│   └── service.py        # LLMService with chat_completion() and chat_completion_stream()
├── memory/
│   ├── base.py           # BaseMemoryService interface
│   ├── cognitive.py      # CognitiveMemoryService implementation
│   ├── orchestrator.py   # CognitiveEngine integrating STM + LTM
│   └── service.py        # get_memory_service() factory
```

## Implementation Notes

### 1. Code Reuse

Extract the memory search and prompt building logic from the streaming endpoint into shared utility functions:

```python
# mdb_engine/memory/utils.py (new file or add to existing)

async def search_relevant_memories(
    memory_service: BaseMemoryService,
    query: str,
    user_id: str,
    limit: int = 5,
) -> list[dict]:
    """Search for memories relevant to the query."""
    return await memory_service.search(query, user_id=user_id, limit=limit)


def build_memory_enriched_messages(
    user_message: str,
    memories: list[dict],
    system_prompt: str | None = None,
    conversation_history: list[dict] | None = None,
) -> list[dict]:
    """Build LLM messages with memory context."""
    messages = []

    # System prompt with memory context
    memory_context = "\n".join([m.get("memory", m.get("text", "")) for m in memories])
    system = system_prompt or "You are a helpful assistant."
    if memory_context:
        system += f"\n\nRelevant context from memory:\n{memory_context}"
    messages.append({"role": "system", "content": system})

    # Add conversation history if provided
    if conversation_history:
        messages.extend(conversation_history)

    # Add current user message
    messages.append({"role": "user", "content": user_message})

    return messages
```

### 2. Response Format

Use a structured JSON response that includes:

| Field | Type | Description |
|-------|------|-------------|
| `response` | string | The complete AI response text |
| `message_id` | string | Unique identifier for this message |
| `memories_used` | integer | Count of memories included in context |
| `memories` | array | Optional: list of memory objects used (for debugging/transparency) |
| `reasoning` | string | Optional: thinking content if `reasoning_effort` was specified |
| `conversation_id` | string | The conversation this message belongs to |
| `created_at` | string | ISO 8601 timestamp |

### 3. Error Handling

Return proper HTTP status codes and error messages:

```python
# Success
200 OK                    - Response generated successfully

# Client Errors
400 Bad Request           - Invalid request body
401 Unauthorized          - Authentication required
403 Forbidden             - User doesn't have access to conversation
404 Not Found             - Conversation not found

# Server Errors
500 Internal Server Error - LLM or memory service failure
503 Service Unavailable   - Temporary service issue
```

### 4. Configuration

Support the same parameters as the streaming endpoint:

```python
class MessageRequest(BaseModel):
    message: str
    save_to_memory: bool = True
    reasoning_effort: str | None = None         # "none", "low", "medium", "high"
    include_memories_in_response: bool = False  # Include memory details in response
    memory_limit: int = 5                       # Max memories to include in context
```

### 5. No Changes Required

- **Memory service**: `BaseMemoryService` and `CognitiveMemoryService` already support the required operations
- **LLM service**: `LLMService.chat_completion()` already supports non-streaming

### 6. Testing Considerations

Non-streaming endpoints are easier to test:

```python
# tests/integration/test_memory_chat_endpoint.py

async def test_send_message_with_memory():
    """Test non-streaming message endpoint with memory integration."""
    # Create conversation
    response = await client.post("/api/conversations", json={"title": "Test"})
    cid = response.json()["id"]

    # Add some memories first
    await memory_service.add("User prefers Python", user_id=user_id)

    # Send message (non-streaming)
    response = await client.post(
        f"/api/conversations/{cid}/messages",
        json={"message": "What programming language should I use?"}
    )

    assert response.status_code == 200
    data = response.json()
    assert "response" in data
    assert data["memories_used"] >= 1
    assert "Python" in data["response"]  # Memory was used
```
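Since one of the stated motivations is simpler clients, here is a hedged sketch of how a plain HTTP client could call the proposed endpoint once it exists, written in TypeScript with `fetch`. The base URL, auth header, and field names follow the proposed response schema above and are assumptions until the endpoint is actually implemented:

```typescript
// Hypothetical client call against the proposed non-streaming endpoint.
// Base URL, token handling, and exact field names are assumptions based on
// the proposed response schema in this issue.
interface SendMessageResponse {
  message_id: string;
  response: string;
  memories_used: number;
  conversation_id?: string;
  reasoning?: string;
}

async function sendMessage(cid: string, message: string, token: string): Promise<SendMessageResponse> {
  const res = await fetch(`https://api.example.com/api/conversations/${cid}/messages`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ message, save_to_memory: true }),
  });

  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }

  // A single complete JSON body: no SSE parsing required.
  return (await res.json()) as SendMessageResponse;
}
```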
Created: 2/4/2026
https://rssfeed.azure.status.microsoft/en-us/status/feed/
Created: 2/4/2026
Complete and test coherent demodulation. This improves performance by 3 dB and will then match the performance baseline of the HDL implementation.
Created: 2/4/2026
## Summary

PSScriptAnalyzer reports a compatibility error for ternary expression syntax in PowerShell linting scripts.

## Error Details

```
❌ PSUseCompatibleSyntax: The ternary expression syntax '<test> ? <exp1> : <exp2>' is not available by default in PowerShell versions 3,4,5,6
```

## Affected Files

One or more scripts in `scripts/linting/` use PowerShell 7+ ternary syntax that is incompatible with earlier PowerShell versions.

## Acceptance Criteria

- [ ] Replace ternary expressions with compatible `if/else` statements or `switch` expressions
- [ ] `npm run lint:ps` passes with 0 errors
- [ ] Maintain script functionality

## Additional Context

This issue was identified during CI validation for PR #119. The error is pre-existing and unrelated to the contributing documentation changes.
Created: 2/4/2026
## Problem Description

`apps/backend/lib/mcp/manager.ts` (MCPServiceManager) uses `console.*` methods directly for logging, even though the project already has a full-featured Logger system (`apps/backend/Logger.ts`).

Other files in the same directory (such as `cache.ts` and `custom.ts`) correctly use the shared Logger:

```typescript
// ✅ Correct usage (cache.ts, custom.ts)
import { logger } from "@/Logger.js";

logger.info("...");
logger.error("...", error);
```

But `manager.ts` calls console directly:

```typescript
// ❌ What manager.ts does
console.info("...");
console.error("...", error);
```

## Problem Location

`apps/backend/lib/mcp/manager.ts` - entire file

There are **93** direct uses of `console.*` methods:

- Lines 132, 141, 144, 156, 167, 169, 184...
- Involving `console.debug`, `console.info`, `console.warn`, and `console.error`

## Severity

Medium

## Impact

1. **Inconsistent coding standards**: Violates the project's unified logging conventions
2. **Missing functionality**: Cannot take advantage of the Logger's advanced features:
   - Structured logging
   - Dynamic log-level control
   - Daemon-mode adaptation
   - Log file rotation
   - Asynchronous, high-performance writes
3. **Harder maintenance**: Log output format is inconsistent with other modules

## Fix Plan

### Step 1: Import the Logger

Add the import at the top of `apps/backend/lib/mcp/manager.ts`:

```typescript
import { logger } from "@/Logger.js";
```

### Step 2: Replace all console calls

Replace every `console.*` call with the corresponding `logger.*` call:

| Original call | Replace with |
|---------------|--------------|
| `console.debug(...)` | `logger.debug(...)` |
| `console.info(...)` | `logger.info(...)` |
| `console.warn(...)` | `logger.warn(...)` |
| `console.error(...)` | `logger.error(...)` |

### Example

**Before**:

```typescript
console.debug(`服务 ${data.serviceName} 连接成功,开始刷新工具缓存`);
console.info(`服务 ${data.serviceName} 工具缓存刷新完成`);
console.error(`刷新服务 ${data.serviceName} 工具缓存失败:`, error);
```

**After**:

```typescript
logger.debug(`服务 ${data.serviceName} 连接成功,开始刷新工具缓存`);
logger.info(`服务 ${data.serviceName} 工具缓存刷新完成`);
logger.error(`刷新服务 ${data.serviceName} 工具缓存失败:`, error);
```

### Notes

1. The Logger API is similar to console; keep the same argument format when replacing calls
2. Pass the error object as the second argument to `logger.error()`
3. After replacing, run the tests to make sure everything still works: `pnpm test`

## Related Code

```typescript
// Near manager.ts:132
console.debug(`服务 ${data.serviceName} 连接成功,开始刷新工具缓存`);
// Should become:
logger.debug(`服务 ${data.serviceName} 连接成功,开始刷新工具缓存`);

// Near manager.ts:144
console.error(`刷新服务 ${data.serviceName} 工具缓存失败:`, error);
// Should become:
logger.error(`刷新服务 ${data.serviceName} 工具缓存失败:`, error);
```

## Reference Implementation

`cache.ts` and `custom.ts` in the same directory already use the Logger correctly and can serve as references.
Created: 2/4/2026 • 2 comments
Browse beginner-friendly issues across thousands of open source projects.
Make meaningful contributions to projects that interest you.
Improve your coding skills by working on real-world problems.
Showcase your contributions and build your professional portfolio.