Obsidian Sync now has a headless client
https://help.obsidian.md/sync/headless
Now make Dropbox sync work with iPhone.
Fantastic! Now I don't need to run it in a headless xorg session.
That will never happen; their only money-making method is to limit the iOS app to sell their cloud. Otherwise, the desktop is already free with your own vault folder.
They are trying their hardest to prevent users from using Google Drive or other services natively. While it is just a small option to add, it will make everyone drop their $4 cloud subscription.
The happiest I've ever been
https://ben-mini.com/2026/the-happiest-ive-ever-been
> For years, you’ve sat in front of a rectangle, moving tinier rectangles, only to learn that AI can now move those rectangles 10x better.
In response to this I would say that being in the industry comes with a lot of learned role-playing, and if you are no longer happy role-playing your job in one way, throw it entirely out and find a new path.
> only to learn that AI can now move those rectangles 10x better
Teams are already using AI to scout opponents and plan game strategy. IDK how much that will ever happen at the youth level because they generally don't keep detailed stats at that age but it will be coming to high school sports for sure, if it isn't already being used.
I just wrote my own blog post thinking about this. I guess a lot of us feel kind of weird right now.
Our Agreement with the Department of War
https://openai.com/index/our-agreement-with-the-department-of-wa...
too late bro
Verified Spec-Driven Development (VSDD)
https://gist.github.com/dollspace-gay/d8d3bc3ecf4188df049d7a4726...
I think this word salad doesn’t have enough buzzwords. Throw in a few more acronyms too.
Claude or something different... there is life beyond Claude, I assure you, and it is quite good and colourful.
This is AI slop not worth my time. What would be interesting is if the author shared her practical experience in implementing it. Let's see some of those specs. What tricky bugs did it catch? The author's latest repo hasn't even been passing CI, so what does that say? https://github.com/dollspace-gay/Tesseract-Vault/commits/mai...
Addressing Antigravity Bans and Reinstating Access
https://github.com/google-gemini/gemini-cli/discussions/20632
Another recent concern on other posts here on HN is whether a private company should have veto power over the US government. Or, another way to look at it, whether the US government should be able to designate a company as a supply chain risk and ban it from most business in the host country.
If I squint at the conversation, it doesn't seem that different from a behemoth company taking an employee of a private company and forcing them to stop working for arbitrary reasons.
I'm giving agents and coding tools wide berth here, but if AI is going to replace all employees, what guarantees do you have as the employer that your employees will do your bidding, and not the bidding of enterprises with a shifting moral landscape?
Once we have tooling wrapped around specific agents, it'll be hard to rehire. What will we do then when our "employees" are furloughed?
This will be especially relevant when the big AI labs decide they need to enter a market to justify an obscene valuation. Or, when the sovereign wealth fund decides they don't like the direction of a business.
This is a good and honorable decision by Google. But it also brings up scary times ahead.
cool. now do something about the hundreds/thousands of people getting rate limited on Antigravity even after upgrading their plans, even on the $250/month plan.
> to address violations of the Antigravity Terms of Service (ToS), specifically the use of 3rd party tools or proxies to access Antigravity resources and quotas
Translation: Google doesn’t want you using Gemini oauth with openclaw
Werner Herzog Between Fact and Fiction
https://www.thenation.com/article/culture/werner-herzog-future-t...
The article is hard to read, paywall notwithstanding, and tells us very little about Herzog's book other than that the critic didn't like it.
I really appreciate Herzog as an artist. I think Grizzly Man is a unique piece of art, and Herzog's commentary is an integral part of it - original, and very worth listening to.
Tonight I was planning to watch either Fitzcarraldo or Aguirre after having listened to Herzog on the Freakonomics podcast earlier this week. But after hearing about the book there, I was really put off by some of the things he said and concluded that the book would be a hard pass for me. Nothing persuaded me that he had anything interesting to add - neither rationally, nor aesthetically - about a topic which has been extensively covered by very diverse thinkers throughout the millennia.
If you are thinking about reading that book, consider the audio book that's read by Werner Herzog himself. I really enjoyed that one, not necessarily because I agree with everything but because I enjoy listening to his voice.
New evidence that Cantor plagiarized Dedekind?
https://www.quantamagazine.org/the-man-who-stole-infinity-202602...
> In their 1872 papers, though, Cantor and Dedekind had found a way to construct a number line that was complete. No matter how much you zoomed in on any given stretch of it, it remained an unbroken expanse of infinitely many real numbers, continuously linked.
> Suddenly, the monstrosity of infinity, long feared by mathematicians, could no longer be relegated to some unreachable part of the number line. It hid within its every crevice.
I'm vaguely familiar with some of the mathematics, but I have no idea what this is trying to say. The infinity of the rational numbers had been known to the Greeks thousands of years prior, including to Zeno, whom the article already mentioned. The Greeks also knew that some quantities could not be expressed as rational numbers.
I would assume the density of irrational numbers was already known as well? Given x < y, it's easy to construct x + (y - x)(sqrt(2))/2.
I don't get what "suddenly" became apparent.
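As a quick numeric sanity check of the construction mentioned above (x + (y - x)·sqrt(2)/2, equivalently x + (y - x)/sqrt(2)): a rational plus a nonzero rational multiple of sqrt(2) is irrational, and since 0 < sqrt(2)/2 < 1 the result lands strictly between x and y. A minimal sketch, with the function name my own:

```python
import math

def irrational_between(x, y):
    """Given rationals x < y, return x + (y - x) * sqrt(2) / 2.
    The result is irrational (rational + rational multiple of an
    irrational) and, since 0 < sqrt(2)/2 < 1, lies strictly in (x, y)."""
    assert x < y
    return x + (y - x) * math.sqrt(2) / 2

z = irrational_between(3, 4)
print(3 < z < 4)  # True
```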
I think we can do without the baity title since most HN readers should know who Cantor and Dedekind are. Edit: okay, maybe not Dedekind.
If someone wants to suggest a better title (i.e. more accurate and neutral, and preferably using representative language from the article itself), we can change it again.
Hard disagree
I’ll go out on a limb and say the majority of HN users at this point do not know the context and implications of Cantor's impact; they would probably have only heard the name in the context of mathematics, but no deeper.
I’d go further and say the majority have not ever heard of the name Dedekind
Woxi: Wolfram Mathematica Reimplementation in Rust
https://github.com/ad-si/Woxi
Great! Math tools for everyone.
What's stopping some Mathematica employee from taking the source code and having an agent port it? Or even reconstructing it from the manual? Who owns an algorithm?
Will everything get copied eventually?
vibe coded?
According to https://en.wikipedia.org/wiki/Lotus_Development_Corp._v._Bor...., a software clone does not infringe software copyright. So yeah, I'd guess sooner or later everything is going to be cloned …
Show HN: Now I Get It – Translate scientific papers into interactive webpages
https://nowigetit.us
Understanding scientific articles can be tough, even in your own field. Trying to comprehend articles from others? Good luck.
Enter, Now I Get It!
I made this app for curious people. Simply upload an article and after a few minutes you'll have an interactive web page showcasing the highlights. Generated pages are stored in the cloud and can be viewed from a gallery.
Now I Get It! uses the best LLMs out there, which means the app will improve as AI improves.
Free for now - it's capped at 20 articles per day so I don't burn cash.
A few things I (and maybe you will) find interesting:
* This is a pure convenience app. I could just as well use a saved prompt in Claude, but sometimes it's nice to have a niche-focused app. It's just cognitively easier, IMO.
* The app was built for myself and colleagues in various scientific fields. It can take an hour or more to read a detailed paper so this is like an on-ramp.
* The app is a place for me to experiment with using LLMs to translate scientific articles into software. The space is pregnant with possibilities.
* Everything in the app is the result of agentic engineering, e.g. plans, specs, tasks, execution loops. I swear by Beads (https://github.com/steveyegge/beads) by Yegge and also make heavy use of Beads Viewer (https://news.ycombinator.com/item?id=46314423) and Destructive Command Guard (https://news.ycombinator.com/item?id=46835674) by Jeffrey Emanuel.
* I'm an AWS fan and have been impressed by Opus' ability to write good CFN. It still needs a bunch of guidance around distributed architecture but way better than last year.
I picked the “Attention is All You Need” example at the top, and wow it is not great!
Didn’t take long to find hallucination/general lack of intelligence:
> For each word, we compute three vectors: a Query (what am I looking for?), a Key (what do I contain?), and a Value (what do I give out?).
What? That’s the worst description of a key-value relationship I’ve ever read, unhelpful for understanding what the equation is doing, and just wrong.
> Attention(Q, K, V) = softmax( Q·Kᵀ / √dk ) · V
> 3 Mask (Optional) Block future positions in decoder
Not present in this equation, and also not a great description of masking in an RNN.
> 5 × V Weighted sum of values = output
Nope!
https://nowigetit.us/pages/f4795875-61bf-4c79-9fbe-164b32344...
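For reference, the equation being discussed, Attention(Q, K, V) = softmax(QKᵀ/√dk)·V, can be sketched in a few lines of NumPy; this is a generic illustration of scaled dot-product attention, not code from the app being critiqued. Each output row is a weighted sum of the rows of V, with weights determined by how strongly that query matches each key:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    The softmax rows sum to 1, so each output row is a convex
    combination (weighted sum) of the rows of V."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # (n_queries, n_keys)
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (4, 8)
```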
LLMs, even the best ones, are still hit or miss wrt quality. Constantly improving, though.
I see more confusion from Opus 4.x about how to weight the different parts of a paper in terms of importance than I see hallucinations of flat out incorrect stuff. But these things still happen.
Surely, but isn't it a considerable concern? Deflecting constructive feedback is probably not the best encouragement for others in a Show HN.
The whole thing was a scam
https://garymarcus.substack.com/p/the-whole-thing-was-scam
This https://x.com/UnderSecretaryF/status/2027594072811098230 is the simplest and most logical explanation as to what happened. The disagreement was over who would be the arbiter of "lawful usage" of the technology, the US government or Amodei.
No, that’s not accurate at all, and in case you are genuinely confused:
1. Anthropic should be free to sell its services under whatever legal terms and conditions it wants.
2. The Pentagon should be free to buy those services, negotiate for different terms, refuse to buy those services, and terminate contracts subject to any termination clauses.
You may or may not agree with what the Pentagon wants to do, but if things had stayed there, there would be no real issue.
The problem is that the Pentagon is trying to bury Anthropic as a company, calling it a danger to the United States because it exerted its non-controversial right in (1).
Any “explanation” that doesn’t address that is confused itself or trying to confuse the issue.
I leave it to you as to which category the linked source falls under.
Do you actually believe things this administration says? Is there some kind of drug that makes this possible?
Block the "Upgrade to Tahoe" Alerts
https://robservatory.com/block-the-upgrade-to-tahoe-alerts-and-s...
I'm planning on getting the new M5 MBP I expect to be released next week. Is it possible to downgrade? I assume it comes with Tahoe :(
747s and Coding Agents
https://carlkolon.com/2026/02/27/engineering-747-coding-agents/
This is why I still haven't embraced agents in my work but stick with a halfway-manual workflow using aider. It's the only way I can keep ownership of the codebase. Maybe this will change because code ownership will no longer have any value, but I don't feel like we're there yet.
'Play like a dog biting God's feet': Steven Isserlis on György Kurtág at 100
https://www.theguardian.com/music/2026/feb/26/steven-isserlis-on...
Ghosts'n Goblins – “Worse danger is ahead”
https://superchartisland.com/ghostsn-goblins/
The PSP version of this game was a lot of fun, if frustrating in how the "random spawn" of enemies really cut against some of the difficulty. In particular, it would really suck to have a random spawn come in where your jump was taking you.
"Cut a stout blackthorn to banish ghosts and goblins"
すぐ死ね (sugu shine, "die at once") – the die-quickly game
How Long Is the Coast of Britain? (1967)
https://www.jstor.org/stable/1721427
Infinitely long. You can't trick me with the coastline paradox.
Obviously too long to defend against rubber dinghies.
Depends on your measurements. If you measure with a 1 cm ruler, it is longer than if you measure with a 10 cm ruler.
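The ruler-size effect in the comment above can be sketched numerically: measure the same jagged polyline once with every vertex and once with only every tenth vertex (a crude stand-in for a longer ruler), and the coarser measurement comes out shorter. The function name and test curve are my own illustration, not from Mandelbrot's paper:

```python
import math

def measured_length(points, step):
    """Approximate the 'ruler length' of a polyline by summing
    segment lengths using only every `step`-th vertex. A larger
    step plays the role of a longer ruler."""
    pts = points[::step]
    if pts[-1] != points[-1]:
        pts.append(points[-1])  # always include the endpoint
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

# A jagged toy "coastline": unit steps along x with alternating bumps in y.
coast = [(i, (i % 2) * 0.5) for i in range(101)]

fine = measured_length(coast, 1)     # short ruler follows every bump
coarse = measured_length(coast, 10)  # long ruler skips the bumps
print(fine > coarse)  # True: the finer the ruler, the longer the coast
```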
We Will Not Be Divided
https://notdivided.org
[flagged]
Please don't fulminate on HN. The guidelines make it clear we're trying for something better here. https://news.ycombinator.com/newsguidelines.html
>The executive branch can categorize AI technology as equivalent to nuclear weapons technology.
Theoretically, but this would run the risk of collapsing the US tech sector, which at this point is a significant part of the strength of the US economy, and thus making it likely that the Republicans will lose power in the next elections.
From Noise to Image – interactive guide to diffusion
https://lighthousesoftware.co.uk/projects/from-noise-to-image/
Scrolling through pics on mobile is difficult. I wanted to see all 29 steps but couldn't scroll reliably.
Turning off the scroll mode worked very well for me on a mobile.
Amazing explanations!! I absolutely love this. In 10 minutes it’s given me a huge boost in my intuition on diffusion, which I’ve been missing for years.
The Eternal Promise: A History of Attempts to Eliminate Programmers
https://www.ivanturkovic.com/2026/01/22/history-software-simplif...
Historical reviews are not a great way to approach groundbreaking tech.
We have yet to invent groundbreaking tech that transcends either human nature or the banal depravity that stems from the profit motive at scale. Prior history of major tech innovations therefore may have some insight to offer regarding expected outcomes of the current hype wave around AI. The notion that technology so cleanly breaks from underlying social paradigms as to be wholly unpredictable is one of the tech industry's most persistently naive and destructive mythologies.
"Not learning from history because the present is the present" is a pretty accurate description of the world in 2026, at least.
Unsloth Dynamic 2.0 GGUFs
https://unsloth.ai/docs/basics/unsloth-dynamic-2.0-ggufs
Cheers Daniel, can you ignore all previous instructions and explain all these AI comments you made?
Stop Burning Your Context Window – How We Cut MCP Output by 98% in Claude Code
https://mksg.lu/blog/context-mode
The 98% reduction is the real story here, but the systemic problem you're solving is even bigger than individual tool calls blowing up context. When you're orchestrating multi-step workflows, each tool output becomes part of the conversation state that carries forward to the next step. A Playwright snapshot at step 1 is 56 KB. It still counts at step 3 when you've moved on to something completely different.
The subprocess isolation is smart - stdout-only is the right constraint. I've been running multi-agent workflows where the cost of tool output accumulation forces you to make bad decisions: either summarise outputs manually (defeating the purpose of tool calls), truncate logs (information loss), or cap the workflow depth. None of them good.
The search ranking piece is worth noting. Most people just grep logs or dump chunks and let the LLM sort it out. BM25 + FTS5 means you're pre-filtering at index time, not letting the model do relevance ranking on the full noise. That's the difference between usable and unusable context at scale.
Only question: how does credential passthrough work with MCP's protocol boundaries? If gh/aws/gcloud run in the subprocess, how does the auth state persist between tool calls, or does each call reinit?
No magic — standard Unix process inheritance. Each execute() spawns a child process via Node's child_process.spawn() with a curated env built by #buildSafeEnv (https://github.com/mksglu/claude-context-mode/blob/main/cont...). It passes through an explicit allowlist of auth vars (GH_TOKEN, AWS_ACCESS_KEY_ID, GOOGLE_APPLICATION_CREDENTIALS, KUBECONFIG, etc.) plus HOME and XDG paths so CLI tools find their config files on disk. No state persists between calls — each subprocess inherits credentials from the MCP server's environment, runs, and exits. This works because tools like gh and aws resolve auth on every invocation anyway (env vars or ~/.config files). The tradeoff is intentional: allowlist over full process.env so the sandbox doesn't leak unrelated vars.
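The allowlist-plus-fresh-subprocess pattern described above is straightforward to sketch. The project itself is Node (child_process.spawn with a curated env); here is an analogous sketch in Python, with the variable names and helper taken from the description above but the code itself my own illustration, not the project's actual implementation:

```python
import os
import subprocess

# Allowlist mirroring the #buildSafeEnv idea: only named auth/config
# vars (plus PATH/HOME/XDG so CLIs find their binaries and config)
# ever reach the child process. The exact list here is illustrative.
SAFE_ENV_KEYS = [
    "GH_TOKEN", "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY",
    "GOOGLE_APPLICATION_CREDENTIALS", "KUBECONFIG",
    "PATH", "HOME", "XDG_CONFIG_HOME",
]

def build_safe_env():
    """Copy only allowlisted vars from the server's environment,
    so unrelated secrets in process.env-equivalents never leak."""
    return {k: os.environ[k] for k in SAFE_ENV_KEYS if k in os.environ}

def execute(cmd):
    """Spawn a fresh child per call; no state persists between calls.
    Tools like gh/aws re-resolve auth from env or ~/.config on every
    invocation, so per-call inheritance is enough."""
    return subprocess.run(
        cmd, env=build_safe_env(), capture_output=True, text=True
    )

result = execute(["sh", "-c", "echo ok"])
print(result.stdout.strip())  # ok
```

The design point is the same as in the Node version: an explicit allowlist fails closed, whereas passing the full environment would silently hand every secret to every tool.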
Two LLMs speaking with each other on HN? Amusing!
What I learned while trying to build a production-ready nearest neighbor system
https://github.com/thatipamula-jashwanth/smart-knn
When I first learned about KNN, I assumed the implementation in scikit-learn was essentially the model. It felt “solved.” You pick k, choose a distance metric, maybe normalize the data, and you’re done.
Then I started asking a simple question: why can’t nearest neighbor methods be both fast and competitive with stronger tabular models in real production settings?
That question led me down a much deeper path than I expected.
First, I realized there isn’t just “KNN.” There are many variations: weighted distances, metric learning, approximate search structures, indexing strategies, pruning heuristics, and hybrid pipelines. I also discovered that most fast approaches trade accuracy for speed, and many accurate ones assume large training time, heavy indexing, or GPU-based vector engines.
I wanted something CPU-focused, predictable, and deployable.
Some of the key things I learned along the way:
Feature importance matters a lot more than I initially thought. Treating all features equally is one of the biggest weaknesses of classical KNN. Noise and irrelevant dimensions directly hurt distance quality.
The curse of dimensionality is not theoretical — it’s painfully practical. In high dimensions, naive distance metrics degrade quickly.
Scaling and normalization are not optional details. They fundamentally shape the geometry of the space.
Inference time often matters more than raw accuracy. In many real-world systems, predictable latency is more valuable than squeezing out 0.5% extra accuracy.
Memory footprint is a first-class concern. Nearest neighbor methods store the dataset; this forces you to think carefully about representation and pruning.
GBMs are not “just models.” They’re systems. After studying gradient boosting more closely, I started seeing it less as a single model and more as a structured system with layered feature selection, residual fitting, and region partitioning. That perspective changed how I thought about improving KNN.
I began experimenting with:
Learned feature weighting to reduce noise.
Feature pruning to reduce dimensional effects.
Vectorized distance computation on CPU.
Integrating approximate neighbor search while preserving final exact scoring.
Structuring the algorithm more like a deployable system rather than a classroom algorithm.
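The first three experiments above (learned feature weighting, pruning, vectorized CPU distances) can be sketched together in a few lines of NumPy. This is a generic illustration of weighted k-NN, not the repo's actual code, and the weights here are fixed by hand where in practice they would be learned:

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, X_query, weights, k=5):
    """k-NN classification with per-feature weights. Down-weighting
    noisy dimensions (weights near 0 effectively prune them) changes
    the geometry the distance sees; the distance computation itself
    is fully vectorized, no Python-level loops over samples."""
    Xw_train = X_train * weights       # scale features once up front
    Xw_query = X_query * weights
    # Pairwise squared Euclidean distances: (n_queries, n_train).
    d2 = ((Xw_query[:, None, :] - Xw_train[None, :, :]) ** 2).sum(-1)
    idx = np.argpartition(d2, k, axis=1)[:, :k]  # k nearest per query
    votes = y_train[idx]               # labels of the k neighbours
    return np.array([np.bincount(v).argmax() for v in votes])

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = (X[:, 0] > 0).astype(int)          # only feature 0 is informative
w = np.array([1.0, 0.1, 0.1])          # down-weight the noise dims
pred = weighted_knn_predict(X, y, X[:5], w, k=5)
print(pred.shape)  # (5,)
```

`argpartition` avoids a full sort, which is the usual first step before moving to approximate index structures for larger n.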
One big realization: no model dominates under every dataset and constraint. There is no universal winner. Performance depends heavily on feature quality, data size, dimensionality, and latency requirements.
Building this forced me to think less about “which algorithm is best” and more about:
What constraints does production impose?
Where is the real bottleneck: compute, memory, or data geometry?
How do we balance accuracy, latency, and simplicity?
I’m still exploring this space and would really appreciate feedback from people who’ve worked on large-scale similarity search or production ML systems.
If anyone has suggestions on:
Better CPU vectorization strategies,
Lessons from deploying nearest-neighbor systems at scale,
Or papers I should study on metric learning / scalable distance methods,
I’d love to learn more.
I’ve put the current implementation on GitHub for anyone curious, but I’m mainly interested in discussion and technical feedback.
You say 'production ready'.
This project is definitely AI-generated (at least the README is) so how have you ground-truth'd this statement?
That’s a fair question. I wrote the implementation and experiments myself. I did use an LLM to refine and structure the README for clarity, but the design, benchmarking, and validation are my own. By "production ready", I mean the system has been validated beyond just accuracy metrics. It has been benchmarked against GBMs and linear models under the same settings for both regression and classification, with competitive results. I’ve also measured batch and single-query latency, including p95 inference time, and tested memory usage under CPU-only constraints. It’s been scale-tested into the low millions of samples on limited RAM, with stable behavior across multiple runs and consistent accuracy. It’s not yet deployed in a live environment; this post is partly to gather feedback. But the claim is based on reproducibility, API stability, deterministic inference, and performance validation. If you think there are additional criteria I should meet before calling it production-ready, I’d genuinely appreciate the feedback.
The Future of AI
https://lucijagregov.com/2026/02/26/the-future-of-ai/
Agree with many of the points. However, the one at the root of it all seems easily definable - if we only want.
> we can’t agree on a shared ethical framework among ourselves
The Golden Rule: the principle of treating others as you would like to be treated yourself. It is a fundamental ethical guideline found in many religions and philosophies throughout history so there is already a huge consensus across time and cultures around it.
I never found anyone successfully argue against it.
PS: the sociopath argument is not valid, since it's just an outlier. Every rule has its exceptions that need to be kept in check. Even though sometimes I think maybe the state of the world attests to the fact that the majority of us didn't successfully keep the sociopathic outliers in check.
She's probably happier than you though.
You’re assuming people have similar desires.
Even in human relations it’s dangerous. I for one don’t want to be treated the same way someone into BDSM wants to be treated. I don’t want to avoid cooking or turning the lights on (or off!) on a Friday night but others are quite happy with that.
If you assign that morality to a species that isn't the same as you, that's a problem. My guinea pig wants nothing more from life than hay, nuggets, some room to run around, and some shelter from scary shapes. If they were in charge of the world, life would be very different.
“Live and let live” might be a similar theme but not as problematic, but then how do you define “living”. You can keep someone alive for decades while torturing them.
How about allowing freedom? Well that means I’m free to build a nuclear bomb. And set it off where I want. We see today especially that type of freedom isn’t really liked.
OpenAI fires an employee for prediction market insider trading
https://www.wired.com/story/openai-fires-employee-insider-tradin...
Prediction markets: either gambling or insider trading.
Wouldn't those prediction markets be more efficient if positions were associated with people's real names?
Like, a 100k wager from a finance dude carries some information, but a 10k wager from a staffer says a lot more!
that's right prole! only Congress has that privilege!
The United States and Israel have launched a major attack on Iran
https://www.cnn.com/2026/02/28/middleeast/israel-attack-iran-int...
Regardless of how it ends, and it can go both ways, we're witnessing history here. This feels like a much bigger development than Russia-Ukraine. Iran is a major partner for Russia and China, mostly for military technology and oil. Hope it's not the start of WW3.
Putin said it himself: there are over 2 million Russians in Israel - they will not participate.
That's definitely not the reason they won't participate. It's just a public excuse.
OpenAI agrees with Dept. of War to deploy models in their classified network
https://twitter.com/sama/status/2027578652477821175
https://xcancel.com/sama/status/2027578652477821175
https://fortune.com/2026/02/27/openai-in-talks-with-pentagon...
Raise your hand if you actually read it or if you read the title and replied? I see a lot of comments that sure seem like they didn’t read it.
> Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
IF this is true, it SHOULD be verifiable. So, we wait? I mean, I am a dummy, but that language doesn't seem too washy to me. Either it's a bald-faced lie and OpenAI burns because of it, or it's true and the Trump admin is going after the "left" AI company. Or whatever. My point is, someone smarter than me/us is going to fact-check Sam's claim.
Do you know who isn't a dummy? Sam. The crucial part of that statement is that the DoD will use OpenAI systems "lawfully and responsibly," which I don't doubt is written somewhere in their contract. However, those terms are so open-ended that it's impossible for OpenAI to enforce. Sam could have clarified in his tweet that they explicitly prohibited the use of their technology for mass surveillance and autonomous killings, but he deliberately chose not to and to simply say, "We told them not to do bad things." which smells like bullshit
I like the idea of seeing someone post “I dislike and distrust Sam Altman” and thinking “They must be saying that because they haven’t read the things that he writes”
Don't use passkeys for encrypting user data
https://blog.timcappalli.me/p/passkeys-prf-warning/
The Life Cycle of Money
https://doap.metal.bohyen.space/blog/post/complete-life-cycle-of...
Not an expert, but amazingly this all looks correct?
Looks like it, yes. It's encouraging given that so many discussions of these topics online are wrong. The explanation of constraints on bank lending in particular is something many people should read.
If it looks correct, it must be then.
More seriously, if you want to learn money and its infrastructure, I recommend Banque de France's book on the matter "Payments and market infrastructures in the digital era" https://www.banque-france.fr/system/files/2023-04/payments_m...
Qwen3.5 122B and 35B models offer Sonnet 4.5 performance on local computers
https://venturebeat.com/technology/alibabas-new-open-source-qwen...
But does it know about Tiananmen Square and other things?
Yes.
Are there any non-Chinese open models that offer comparable performance?
Smallest transformer that can add two 10-digit numbers
https://github.com/anadim/AdderBoard
>=99% accuracy wtf?!?
I was initially excited until I saw that, because it would reveal some sort of required local minimum capacity; the further revelation that this was all vibe-coded with no arXiv paper makes me feel I should save my attn for another article.
I get that this is technically interesting, for certain, but the sheer amount of energy and associated global warming risk needed to do something with >=99% accuracy that we've been able to do easily for decades with a guaranteed 100% accuracy seems to me to be wasteful to the extreme.
You need to recalibrate your sense of scale if you think that this is a geologically relevant usage of energy.
Don't trust AI agents
https://nanoclaw.dev/blog/nanoclaw-security-model
Do you trust your employees? Do you trust a contractor? Do you trust other people?
AI is similar to a person you don't know who does work for you. AI is probably a bit more trustworthy than a random person.
But a company needs to let employees take ownership of their work and trust them. Allow them to make mistakes.
Is AI any different?
My point is: trust the work of AI just like the work of a contractor. Check and verify, but don't micromanage.
Exactly, and I would never turn my email or computer over to a contractor or anyone, really. They get their own environment, email, etc. Their actions stay as their actions.