
Posts

Showing posts from February, 2026

Intellectual Property and Git

Software Engineering Guide
Intellectual Property and Git: Avoiding AI "Co-authored-by"
Due Diligence · AI Agents · Git Hooks

Maintaining a 100% human history is key to the legal health of your source code.

"A golden rule in modern development: the commit message should reflect 'why' the change was made for the business, never 'how' or which tools were used to write it."

1 Legal Context: Why the Concern?
More and more startups and independent developers are facing an invisible problem: patent and copyright offices in the US and EU have determined that AI cannot hold copyright. Only human creation is protectable.

The Danger in Audits (Due Diligence): When a startup seeks investment or is about to be acquired, it undergoes a strict code audit. If the Git history shows bots as co-authors, investors can argue the ...
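The Git Hooks angle mentioned above can be sketched as a local commit-msg hook that strips AI co-author trailers before the commit is recorded. This is a minimal illustration, not a vetted policy: the bot-name patterns below are examples, and you should adapt them to the tools your team actually uses.

```python
#!/usr/bin/env python3
"""commit-msg hook: remove AI 'Co-authored-by' trailers from commit messages.

Install as .git/hooks/commit-msg and mark it executable. Git passes the
path of the message file as the first argument.
"""
import re
import sys

# Illustrative patterns for trailers credited to AI assistants; adjust as needed.
AI_TRAILER = re.compile(r"^Co-authored-by:.*(claude|copilot|\[bot\])", re.IGNORECASE)

def strip_ai_trailers(message: str) -> str:
    """Return the commit message with matching AI co-author lines removed."""
    kept = [line for line in message.splitlines() if not AI_TRAILER.match(line)]
    return "\n".join(kept).rstrip() + "\n"

if __name__ == "__main__" and len(sys.argv) > 1:
    path = sys.argv[1]
    with open(path) as f:
        msg = f.read()
    with open(path, "w") as f:
        f.write(strip_ai_trailers(msg))
```

Human co-authors are preserved: only lines matching the AI patterns are dropped, so legitimate "Co-authored-by" credits for teammates survive the hook.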

Prompt Engineering Guide

Prompt Engineering Guide: Anthropic Techniques to Optimize AI
Strategic Prompting · Logic Frameworks

"To get the most out of advanced models, you must structure your inputs strategically. These techniques help eliminate hallucinations and improve logic."

1 Memory Injection
Providing specific data points or "memories" at the start of the prompt to ground the AI in facts it wouldn't otherwise know.
Example: "Note: Our system uses 'Version 4.2' rules. Ignore all mentions of 'Version 3.0' from your training data. Now, analyze this code..."

2 Reverse Prompting
Asking the AI to generate the prompt that would lead to a specific output. This helps in understanding and replicating complex styles. Exam...
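Memory injection is easy to wrap in a small helper so the grounding facts always land before the task. This is a sketch; the function name and structure are illustrative, not part of any SDK.

```python
def inject_memories(memories, task):
    """Build a prompt whose opening lines pin the model to known facts.

    'memories' are short factual statements to ground the model;
    'task' is the actual request that follows them.
    """
    grounding = "\n".join(f"Note: {fact}" for fact in memories)
    return f"{grounding}\n\nNow, {task}"

prompt = inject_memories(
    [
        "Our system uses 'Version 4.2' rules.",
        "Ignore all mentions of 'Version 3.0' from your training data.",
    ],
    "analyze this code...",
)
```

The point of the helper is ordering: the injected facts always precede the task, so the model reads the grounding before it starts reasoning about the request.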

Claude Code + LM Studio

Documentation & Setup
Integrating Claude Code with LM Studio
Official Integration Guide · Local API Setup

"Claude Code can talk to LM Studio via the Anthropic-compatible POST /v1/messages endpoint. However, you will usually run into problems with the model if you don't set the right context window. If the window is too small, the CLI overhead will cause the agent to fail."

1 How to fix context in LM Studio
Navigate to Server: Open the "AI Chat" or "Local Server" tab (the double arrows or server icon on the left).
Select Your Model: Choose your active model (e.g., openai/gpt-oss-20b) from the dropdown.
Adjust Context Length: On the right-hand sidebar, find Context Length (or n_ctx). It is likely set to 2048 or 4096. ...
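To verify the server side independently of Claude Code, you can hand-build an Anthropic-style request against the local endpoint. This is a sketch assuming LM Studio's default local port of 1234; adjust the base URL to your setup.

```python
import json
import urllib.request

# LM Studio's local server defaults to port 1234 (assumption; adjust if changed).
BASE_URL = "http://localhost:1234"

def messages_request(model, prompt, max_tokens=1024):
    """Build an Anthropic-style POST /v1/messages request for the local server."""
    payload = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = messages_request("openai/gpt-oss-20b", "Explain this stack trace.")
# To actually send it (requires LM Studio's server to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

If this request succeeds from the command line but Claude Code still fails, the context window described above is the usual suspect: the CLI's system overhead alone can exceed a 2048-token window.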

Specialization Over Brute Force: Mastering Local AI on 8GB VRAM

Hardware & Performance
The 8GB VRAM Challenge: Optimizing Local AI
Technical Review · Real Hardware Benchmarks
Power vs. Limits: Running LLMs on the new RTX 5070 Mobile architecture.

"Building a local AI powerhouse in a laptop is a game of balance. With an Intel Core Ultra 9 and 64GB of RAM, the processing floor is high, but the 8GB VRAM on the RTX 5070 is the ultimate gatekeeper. Here is the stable, no-filler configuration for coding, reasoning, and technical chat."

The Hardware Specs
GPU: RTX 5070, 8GB VRAM (the bottleneck)
RAM: 64GB DDR5 (system overflow ready)
CPU: Ultra 9 285H (top-tier processing)

1 Model Selection Strategy
Size vs. Quantization (The Sweet Spot)
✅ 3B – 7B Models: Gold standard. Q5_K_M (app...
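A back-of-the-envelope check makes the "sweet spot" concrete: quantized weight size is roughly parameters times bits-per-weight divided by 8, plus headroom for the KV cache and runtime buffers. The 20% overhead factor and the ~5.5 bits-per-weight figure for Q5_K_M below are rules of thumb, not measured values.

```python
def approx_vram_gb(params_b, bits_per_weight, overhead=1.2):
    """Rough VRAM estimate: quantized weights plus ~20% headroom for the
    KV cache and runtime buffers (overhead factor is a rule of thumb)."""
    weights_gb = params_b * bits_per_weight / 8  # 1B params at 8 bpw ~ 1 GB
    return weights_gb * overhead

# A 7B model at Q5_K_M (~5.5 bits per weight, approximate):
print(f"{approx_vram_gb(7, 5.5):.1f} GB")  # comfortably under the 8GB ceiling
```

By the same arithmetic a 13B model at the same quantization overshoots 8GB, which is why the 3B–7B range is the gold standard on this card.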