Crossplane's Graduation Announcement (5 minute read)
Crossplane, a cloud native platform engineering tool that brings declarative APIs to cloud infrastructure, applications, and services, has graduated from the Cloud Native Computing Foundation. With over 3,000 contributors from 450+ organizations, Crossplane has been adopted by organizations such as Nike, NASA Science Cloud, and IBM to build secure and scalable internal platforms.
|
Build better software to build software better (12 minute read)
Combining Bazel with classic software engineering principles cut the Quip and Slack Canvas backend build pipeline from 60 minutes down to as little as 25 minutes when the frontend was cached, and made it up to six times faster overall. Key to the improvement was severing dependencies between the frontend and backend, rewriting Python build orchestration code in Starlark, and composing granular units of work. Analysis of the build graph also revealed flaws, such as intertwined backend and build code, which were addressed by separating concerns and rewriting the build code, leading to a higher cache hit rate.
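A rough sketch of the granular-target idea, written in Starlark (Bazel's Python-like configuration language): the target and file names below are invented for illustration and are not the actual Quip or Slack Canvas build files.

    # BUILD.bazel -- illustrative only; names are made up.
    load("@rules_python//python:defs.bzl", "py_binary", "py_library")

    # Small, focused targets let Bazel cache and rebuild only what actually changed.
    py_library(
        name = "api_models",                  # shared data models with no heavy backend deps
        srcs = ["api_models.py"],
    )

    py_library(
        name = "backend_core",
        srcs = ["backend_core.py"],
        deps = [":api_models"],               # backend depends on the models, never on frontend code
    )

    py_binary(
        name = "backend_server",
        srcs = ["server.py"],
        main = "server.py",
        deps = [":backend_core"],
    )

With targets cut this fine and frontend code depending only on shared models, a backend change invalidates only the targets downstream of it, which is what lifts the cache hit rate.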
|
Code research projects with async coding agents like Claude Code and Codex (8 minute read)
A new workflow called "code research" uses asynchronous coding agents such as Claude Code, Codex, and Gemini Jules to autonomously run experiments that answer programming questions through executable proof-of-concepts. By assigning these agents dedicated GitHub repositories with full network access, developers can launch multiple research tasks daily that independently return verifiable results with minimal manual involvement.
|
Let's Build an MCP Server (4 minute read)
This tutorial explains how to build a functional MCP (Model Context Protocol) server with surprisingly little code, using Semaphore's public API to query build data and manage CI/CD workflows. It walks through creating a Python project, installing dependencies like "mcp[cli]" and "httpx," defining a "list_projects" tool that retrieves project names, testing the server with the MCP Inspector, and connecting it to Codex. Semaphore is developing an official MCP server, initially read-only and offered to interested organizations, with general availability coming soon.
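A minimal sketch of the pattern the tutorial describes, using the FastMCP helper from the Python MCP SDK and httpx; the Semaphore endpoint, auth header, and response fields shown here are assumptions to be checked against Semaphore's API docs.

    # server.py -- minimal MCP server sketch; Semaphore API details are assumed.
    import os
    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("semaphore")

    SEMAPHORE_ORG = os.environ.get("SEMAPHORE_ORG", "my-org")   # hypothetical org slug
    SEMAPHORE_TOKEN = os.environ["SEMAPHORE_API_TOKEN"]         # personal API token

    @mcp.tool()
    async def list_projects() -> list[str]:
        """Return the names of Semaphore projects visible to this token."""
        url = f"https://{SEMAPHORE_ORG}.semaphoreci.com/api/v1alpha/projects"
        async with httpx.AsyncClient() as client:
            resp = await client.get(url, headers={"Authorization": f"Token {SEMAPHORE_TOKEN}"})
            resp.raise_for_status()
            # Assumes each project object exposes its name under metadata.name.
            return [p["metadata"]["name"] for p in resp.json()]

    if __name__ == "__main__":
        mcp.run()   # stdio transport by default, which the MCP Inspector can attach to

With the "mcp[cli]" extra installed, running "mcp dev server.py" should launch the MCP Inspector against this server for interactive testing before wiring it up to a client like Codex.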
|
Deepnote (GitHub Repo)
Deepnote, used by over 500,000 data professionals, is a successor to Jupyter that adds an AI agent, sleek UI, new block types, and native data integrations. The Deepnote open-source platform allows users to edit and run notebooks in AI-native code editors and offers free cloud access to students and educators.
|
TOON (GitHub Repo)
Token-Oriented Object Notation (TOON) is a compact data serialization format for LLM prompts that potentially halves token usage compared to JSON. It uses YAML's indentation and CSV's tabular format with optimizations to reduce token cost when passing structured data to Large Language Models. Benchmarks show that TOON achieves 70.1% accuracy (vs JSON's 65.4%) while using 46.3% fewer tokens.
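To make the shape of the format concrete, here is a hand-rolled Python approximation of TOON's tabular encoding for a uniform array of objects, based on the examples in the repo rather than the official library, so treat the exact syntax as indicative only.

    # toon_sketch.py -- approximates TOON's tabular layout for uniform object arrays.
    import json

    users = [
        {"id": 1, "name": "Ada", "role": "admin"},
        {"id": 2, "name": "Bob", "role": "viewer"},
    ]

    def toon_like(key, rows):
        """Render a uniform list of dicts as a TOON-style header plus CSV-like rows."""
        fields = list(rows[0].keys())
        lines = [f"{key}[{len(rows)}]{{{','.join(fields)}}}:"]
        for row in rows:
            lines.append("  " + ",".join(str(row[f]) for f in fields))
        return "\n".join(lines)

    as_json = json.dumps({"users": users}, indent=2)
    as_toon = toon_like("users", users)
    print(as_toon)
    # users[2]{id,name,role}:
    #   1,Ada,admin
    #   2,Bob,viewer
    print(len(as_json), "chars as JSON vs", len(as_toon), "chars TOON-style")

Character counts are only a rough proxy for tokens; the actual savings depend on the tokenizer and on how tabular the data really is.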
|
Handling sensitive log data using Amazon CloudWatch (7 minute read)
Amazon CloudWatch and AWS Identity and Access Management enable secure handling of sensitive log data through automated detection, masking, and access control mechanisms. These capabilities help protect personally identifiable information while maintaining operational efficiency and minimizing mean time to respond during application troubleshooting.
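As a sketch of the detection-and-masking half of this (the log group name is hypothetical, and the policy document schema should be verified against the CloudWatch Logs data protection documentation), a boto3 call can attach a data protection policy that audits and masks email addresses in a log group:

    # data_protection_sketch.py -- attach a CloudWatch Logs data protection policy (illustrative).
    import json
    import boto3

    logs = boto3.client("logs")

    policy = {
        "Name": "mask-pii-policy",
        "Version": "2021-06-01",
        "Statement": [
            {
                "Sid": "audit",
                "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
                "Operation": {"Audit": {"FindingsDestination": {}}},
            },
            {
                "Sid": "masking",
                "DataIdentifier": ["arn:aws:dataprotection::aws:data-identifier/EmailAddress"],
                "Operation": {"Deidentify": {"MaskConfig": {}}},
            },
        ],
    }

    logs.put_data_protection_policy(
        logGroupIdentifier="/app/checkout-service",   # hypothetical log group
        policyDocument=json.dumps(policy),
    )

The IAM side then comes from withholding the logs:Unmask permission, so most principals only ever see masked values when querying the log group.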
|
How agentic AI is changing cloud security (5 minute read)
Agentic AI transforms cloud security by evolving from passive copilots to proactive teammates capable of reasoning, learning, and acting autonomously. Sysdig's approach uses autonomous agents that analyze environments, assess business risk, and take action to strengthen defenses, marking a shift toward smarter, faster, and more adaptive cloud protection.
|
Accelerating LLM inference with speculative decoding: Lessons from LinkedIn's Hiring Assistant (5 minute read)
LinkedIn's Hiring Assistant, the company's first AI agent for recruiters, now uses n-gram speculative decoding to address latency challenges and improve the responsiveness of the user experience. The technique accelerates text generation without sacrificing quality, delivering nearly 4x higher throughput at the same queries per second and a 66% reduction in P90 end-to-end latency. N-gram speculative decoding shines in workloads where outputs naturally repeat phrases or follow structured patterns.
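For intuition about how the n-gram (prompt-lookup) flavor of speculative decoding works, here is a toy Python sketch of the draft-and-verify loop, not LinkedIn's implementation: the drafter looks for an earlier occurrence of the current token suffix and proposes whatever followed it, and the target model then accepts only the verified prefix.

    # ngram_speculation_sketch.py -- toy illustration of n-gram draft proposal and verification.

    def propose_ngram_draft(tokens, ngram_size=2, max_draft=5):
        """Find the latest earlier occurrence of the trailing n-gram and copy what followed it."""
        if len(tokens) <= ngram_size:
            return []
        suffix = tokens[-ngram_size:]
        for start in range(len(tokens) - ngram_size - 1, -1, -1):
            if tokens[start:start + ngram_size] == suffix:
                return tokens[start + ngram_size:start + ngram_size + max_draft]
        return []

    def accept_verified_prefix(draft, target_tokens):
        """Keep draft tokens only while they match what the target model actually emits."""
        accepted = []
        for d, t in zip(draft, target_tokens):
            if d != t:
                break
            accepted.append(d)
        return accepted

    # Structured, repetitive output (checklists, templates, JSON) drafts especially well.
    history = ["-", "[", "]", "task1", "\n", "-", "[", "]", "task2", "\n", "-"]
    draft = propose_ngram_draft(history)                   # ["[", "]", "task2", "\n", "-"]
    target_continuation = ["[", "]", "task3", "\n", "-"]   # pretend output of the verification pass
    print(accept_verified_prefix(draft, target_continuation))   # ["[", "]"] accepted for free

In a real serving stack the target model scores the whole draft in one batched forward pass, so every accepted token saves a sequential decoding step, which is where the throughput and latency gains come from.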
|
Love TLDR? Tell your friends and get rewards!
|
Share your referral link below with friends to get free TLDR swag!
|
Track your referrals here.
|
Want to advertise in TLDR? 💰
If your company is interested in reaching an audience of devops professionals and decision makers, you may want to advertise with us.
Want to work at TLDR? 💼
Apply here or send a friend's resume to jobs@tldr.tech and get $1k if we hire them!
If you have any comments or feedback, just respond to this email!
Thanks for reading,
Kunal Desai & Martin Hauskrecht