

Why play chess with Moriarty when you can just bash him in the head with a chessboard?
I keep hearing good things; however, I have not yet seen any meaningful results for the kind of work I would use such a tool for.
I’ve been working on network function optimization at hundreds of gigabits per second for the past couple of years. Even with MTU-sized packets you only get approximately 200 ns of processing time per packet (assuming no batching). Optimizations generally involve manual prefetching and using/abusing NIC offload features to minimize atomic instructions (this is also running on ARM, where an atomic fetch-and-add in GCC is compiled into a function call that does a load-linked/store-conditional sequence and takes approximately 8 times the regular memory access time of a write). Current AI-assisted agents cannot generate efficient code that runs at line rate. There are no textbooks or blogs that give a detailed explanation of how these things work. There are no resources for them to be trained on.
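To make the atomic-counter point concrete, here is a minimal sketch (hedged: the exact codegen depends on the GCC version, -march flags, and whether outline atomics are enabled; the per-core counter split is just the common workaround, not a specific production design):

```c
#include <stdint.h>

/* A shared packet counter touched by every core on the hot path. */
static uint64_t rx_packets;

void count_packet_shared(void)
{
    /* On aarch64 this can be emitted as an out-of-line helper call that
     * falls back to a load-linked/store-conditional loop when LSE
     * atomics are unavailable; that is several times the cost of a
     * plain store on the hot path. */
    __atomic_fetch_add(&rx_packets, 1, __ATOMIC_RELAXED);
}

/* The usual workaround: one counter per core, no atomics on the hot
 * path, and the stats reader sums them up when asked. Each counter is
 * padded to its own cache line to avoid false sharing. */
#define MAX_CORES 64
struct percore_ctr {
    uint64_t packets;
    uint8_t  pad[64 - sizeof(uint64_t)];
};
static struct percore_ctr rx_per_core[MAX_CORES];

void count_packet_percore(unsigned core_id)
{
    rx_per_core[core_id].packets++;   /* plain load/add/store */
}
```

On x86-64 the same builtin becomes a single lock xadd, which is part of why code that looks fine on one architecture falls over on another.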
You’ll find a similar problem if you try to prompt them to generate good RDMA code. At best you’ll get something that barely works, and almost always the code cannot efficiently utilize the latency reduction RDMA provides over traditional transport protocols. The generated code usually looks like how a graduate CS student may think RDMA works, but is usually completely unusable, either requiring additional PCIe round trips or having severe thrashing issues with main memory.
My guess is that these tools are ridiculously good at stuff they can find examples of online. However, for stuff that has no examples, they are woefully underprepared and you still need a programmer to do the work manually, line by line.
As much as I hate the concept, it works. However:
It only works with generalized programming (e.g. write a Python script that parses CSV files). For any specialized field this would NOT work (e.g. write a DPDK program that identifies RoCEv2 packets and rewrites the IP addresses; a rough sketch of what that involves is below).
It requires the human supervising the AI agent to know how to write the expected code themselves, so they can prompt the agent to use specific techniques (e.g. use Python’s csv library instead of string.split). This is not a problem now, since even programmers fresh out of college generally know what they are doing.
If companies try to use this to avoid hiring/training skilled programmers, they will have a very bad time in the future when the skilled talent pool runs dry and nobody knows how to tell correctly written code from incorrect code.
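For the curious, here is a rough plain-C sketch of what “identify RoCEv2 packets and rewrite the IP address” even involves. To be clear, this is my illustration, not real DPDK code: a real implementation would work on rte_mbuf bursts, handle VLAN tags and IP options properly, and (if I remember the spec right) also deal with the RoCEv2 ICRC, which is exactly the part generic generated code tends to miss.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define ROCEV2_UDP_PORT 4791   /* IANA-assigned UDP port for RoCEv2 */

/* Recompute an IPv4 header checksum from scratch (RFC 1071 style). */
static uint16_t ipv4_checksum(const uint8_t *hdr, size_t ihl_bytes)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < ihl_bytes; i += 2) {
        if (i == 10)                       /* skip the checksum field itself */
            continue;
        sum += (uint16_t)((hdr[i] << 8) | hdr[i + 1]);
    }
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

/* Returns 1 if the frame was a RoCEv2 packet and its destination IP was
 * rewritten, 0 otherwise. new_dst_be is in network byte order. */
int rewrite_rocev2_dst_ip(uint8_t *pkt, size_t len, uint32_t new_dst_be)
{
    if (len < 14 + 20 + 8)                 /* Ethernet + IPv4 + UDP minimum */
        return 0;

    /* Untagged IPv4 only, for the sake of the sketch. */
    uint16_t ethertype = (uint16_t)((pkt[12] << 8) | pkt[13]);
    if (ethertype != 0x0800)
        return 0;

    uint8_t *ip = pkt + 14;
    size_t ihl = (size_t)(ip[0] & 0x0f) * 4;
    if (ihl < 20 || len < 14 + ihl + 8 || ip[9] != 17)   /* must be UDP */
        return 0;

    uint8_t *udp = ip + ihl;
    uint16_t dport = (uint16_t)((udp[2] << 8) | udp[3]);
    if (dport != ROCEV2_UDP_PORT)          /* RoCEv2 is just UDP/4791 */
        return 0;

    /* Rewrite the destination address and fix the IPv4 header checksum.
     * Left out: the UDP checksum (if non-zero) and the RoCEv2 ICRC at
     * the tail of the payload may also need updating. */
    memcpy(ip + 16, &new_dst_be, 4);
    uint16_t csum = ipv4_checksum(ip, ihl);
    ip[10] = (uint8_t)(csum >> 8);
    ip[11] = (uint8_t)(csum & 0xff);
    return 1;
}
```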
There’s also the change from circuit switching to packet switching, which drastically changes how the handover process works.
tl;dr - handover in 5G is buggy and barely works. The whole business of switching from one service area to another in the middle of a call is held together by hopes and dreams.
Somehow I disagree with both the premise and the conclusion here.
I dislike getting a direct answer to things, as it discourages understanding. What is the default memory allocation mechanism in glibc malloc? I could get the answer “sbrk() and mmap()” and call it a day, but I find understanding when it uses mmap instead of sbrk (since sbrk isn’t NUMA-aware but mmap is) way more useful for future questions.
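For instance, glibc exposes the mmap threshold as a tunable, which makes the behavior easy to poke at. A small sketch (exact defaults and the dynamic threshold adjustment vary by glibc version):

```c
#include <malloc.h>    /* mallopt, M_MMAP_THRESHOLD (glibc-specific) */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Ask glibc to serve allocations above 64 KiB with mmap() instead of
     * carving them out of the brk heap. The default threshold starts at
     * 128 KiB and normally adjusts itself as large blocks are freed. */
    mallopt(M_MMAP_THRESHOLD, 64 * 1024);

    void *small = malloc(4 * 1024);       /* typically from the brk heap */
    void *large = malloc(1024 * 1024);    /* typically a private mmap    */

    /* The large block usually lands far from the heap; compare the
     * addresses here or look at /proc/self/maps while it runs. */
    printf("small = %p\nlarge = %p\n", small, large);

    free(small);
    free(large);
    return 0;
}
```

Which path a given allocation takes is exactly the kind of detail the one-line answer glosses over.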
Meanwhile, Google adding a tab for AI search is helpful for people who want to use just AI search. It doesn’t take much away from people doing traditional web searches. Why be mad about this instead of the other, truly questionable decisions Google is making?
Personally I just want another RtwP CRPG.
I loved PoE1, didn’t care much about PoE2, and will probably care less about Avowed. There’s something magical about a map full of tiles that aren’t revealed immediately, compared to a world map where you can immediately tell how much has been explored.
Same thing for BG3. I love Larian (been a Kickstarter backer since the original D:OS days, been playing almost every one of their games on release day since Dragon Commander) and BG3’s a great RPG, but it doesn’t feel like a good BG game. BG2 gave an immediate sense of “I have no idea where to go so I can do whatever I want”. BG3 is always nudging you to uncover the map and clear all the quests.
Nope. Plenty of people want this.
In the last few years I’ve seen plenty of cases where CS undergrad students get stumped if ChatGPT is unable to debug/explain a question to them. I’ve literally heard “idk because ChatGPT can’t explain this lisp code” as an excuse during office hours.
Before LLMs, there was also a significant number of people who used GitHub issues/Discord to ask simple application-usage questions instead of Googling. There seems to be a significant decrease in people’s willingness to search for an answer, regardless of AI tools existing.
I wonder if it has to do with weaker reading comprehension skills?
Because it’s in a genre that has no good alternatives?
EVE is a spreadsheet simulator, Elite Dangerous is a space-truck simulator, NMS is all planets and no space, Starfield is Starfield.
The only viable alternative I found was X4. Even that is slightly different from what Star Citizen promises (it’s more empire management than solo flying in the endgame, and the vanilla balance is questionable: you can “Luke Skywalker” a destroyer with a scout on pure dogfighting skill).
Agreed. Personally I think this whole thing is bs.
A routine that just returns “yes” will also detect all AI. It would just have an abnormally high false positive rate.
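In other words, detection rate alone is meaningless without the false positive rate; the degenerate “detector” below catches 100% of AI text and is still useless:

```c
#include <stdio.h>

/* Flags every input as AI-generated: perfect recall, useless precision. */
static int is_ai_generated(const char *text)
{
    (void)text;      /* never even looks at the input */
    return 1;
}

int main(void)
{
    printf("%d\n", is_ai_generated("I wrote this myself, I swear."));
    return 0;
}
```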
Not sure about GreaseMonkey, but V8 compiles JS to an IL (bytecode).
Node.js has a debugging flag to dump the emitted IL so you can inspect it.
How much of that is cached state based on the percentage of ram available?
An alternative definition: a real-time system is a system where the correctness of the computation depends on a deadline. For example, if I have a drone checking “with my current location + velocity, will I crash into the wall in 5 seconds?”, the answer is worthless if the system responds 10 seconds later.
A real-time kernel is an operating system that makes it easier to build such systems. The main difference is that it offers lower latency than a usual OS for your one critical program. The OS will try to give that program as much priority as it wants (to the detriment of everything else) and handle all signals ASAP (instead of coalescing/combining them to reduce overhead).
Linux has real-time priority scheduling as an optional feature; lowering latency does not always result in reduced overhead or higher throughput, so it isn’t something you want everywhere. This allows system builders designing RT systems (such as audio processing systems, robots, drones, etc.) to utilize these features without annoying the hell out of everyone else.
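On Linux that opt-in looks roughly like this (a minimal sketch, not a complete RT setup: it needs CAP_SYS_NICE or root, usually a PREEMPT_RT or low-latency kernel to really pay off, and the priority value 80 is an arbitrary choice):

```c
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Lock current and future pages into RAM so a page fault can't
     * blow the deadline at the worst possible moment. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* SCHED_FIFO: this thread preempts every normal SCHED_OTHER task
     * and keeps the CPU until it blocks or yields. */
    struct sched_param sp;
    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = 80;                /* valid range is 1..99 */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler (needs CAP_SYS_NICE/root)");
        return 1;
    }

    /* ... the latency-critical loop (audio callback, control loop,
     * packet poller, ...) would run here ... */
    puts("running under SCHED_FIFO");
    return 0;
}
```

This is also exactly why it can annoy everyone else: a runaway SCHED_FIFO task can starve the rest of the system unless RT throttling steps in.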
Base it off of total sqft?
I’m struggling to see how someone would need a combined 40000 sqft of residential living space either…
Yeah, I completely forgot about the consumer side of things. I was expecting Cisco IOS/FRR router configs, not a full web dashboard.
As someone who works with 100Gbps networking:
Nah, just grab the domain and redirect it to X. Watch him explode.
How good are the RISC-V vector instruction implementations IRL? I’ve never heard much about them. My experience with ARM is that even on certain data center chips the performance gains are abysmal (when using highly optimized libraries such as DPDK).
Tail latency with a swallow tail.