• 3 Posts
  • 126 Comments
Joined 3 years ago
Cake day: June 14th, 2023


  • This kind of explanation comes up more and more often, but I think it's jumping the gun a bit. Not wrong as such, of course, but too fixated on a single aspect.

    First, the Carnot efficiency, as is usual in physics, applies to heavily idealized systems that do nothing else whatsoever, meaning no interaction with the environment and so on. Under those conditions 80-90% would indeed be conceivable, but in reality a cow is not a massless point in a vacuum, and real engines accordingly reach about 40%. So they are not limited by the second law of thermodynamics but, long before that, by entirely different things.
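
    As a rough back-of-the-envelope sketch of those numbers (the ~2300 K combustion temperature and ~300 K ambient temperature are my own illustrative assumptions, not figures for any specific engine):

    ```python
    # Ideal Carnot efficiency: eta = 1 - T_cold / T_hot (temperatures in kelvin)
    T_hot = 2300.0   # assumed peak combustion temperature
    T_cold = 300.0   # assumed ambient temperature

    eta_carnot = 1.0 - T_cold / T_hot
    print(f"Carnot limit: {eta_carnot:.0%}")  # ~87%, i.e. in the 80-90% ballpark
    # Real engines land around 40%, limited first by friction, heat losses, etc.
    ```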

    Second, I don't know why “efficient” makes us think of the engine's efficiency first in the first place. Sure, you can make cars more efficient, cheaper, and more environmentally friendly by raising that efficiency and thereby doing the same work with less fuel. In reality, though, we made cars more efficient to a large extent through better aerodynamics. And that has nothing to do with the efficiency of the engine, or with the type of engine as such.

    More energy-dense fuels would also be conceivable, or lighter materials, and so on. In reality those just didn't materialize in time before electric cars pulled ahead, but it certainly never hinged on the engine's efficiency alone.

    If the engine's efficiency decided everything, we could give up on electric cars right now. Their efficiency is already high, and more than 100% is impossible, so there wouldn't be much left to gain anyway. That's obviously nonsense, because in reality efficiency is just one factor among many, and not even really the most important one.

    Where efficiency really is a knockout argument, on the other hand, is e-fuels. There we have a direct comparison: the work an electric car gets out of X amount of cleanly generated electricity versus a combustion car running on fuel produced with that same amount of electricity. Because the efficiency is poor not only in the engine but also in the production of the fuel, the comparison naturally comes out catastrophically against e-fuels.
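
    A minimal sketch of that comparison, using round-number stage efficiencies that I'm assuming purely for illustration (published well-to-wheel figures vary by study):

    ```python
    # Work at the wheels per 100 kWh of clean electricity, with assumed efficiencies.

    ELECTRICITY_KWH = 100.0

    # Battery-electric path: charging + battery + motor/drivetrain losses combined
    EV_GRID_TO_WHEEL = 0.75          # assumption

    # E-fuel path: each stage multiplies the losses
    ELECTROLYSIS = 0.70              # assumption: electricity -> hydrogen
    FUEL_SYNTHESIS = 0.60            # assumption: hydrogen + CO2 -> liquid fuel
    COMBUSTION_ENGINE = 0.30         # assumption: tank -> wheels
    EFUEL_GRID_TO_WHEEL = ELECTROLYSIS * FUEL_SYNTHESIS * COMBUSTION_ENGINE  # ~0.13

    print(f"EV:     {ELECTRICITY_KWH * EV_GRID_TO_WHEEL:.0f} kWh at the wheels")     # ~75
    print(f"E-fuel: {ELECTRICITY_KWH * EFUEL_GRID_TO_WHEEL:.0f} kWh at the wheels")  # ~13
    ```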


  • I had some video glasses ages ago that could do that too. Like 15 years ago. I can’t recall a single game without problems. UI was the biggest issue. Often UI elements were at nonsensical 3D positions, and while you wouldn’t notice this on a normal screen, the glasses tried to render them in the center of my brain…

    And before that, in the late '90s, I had an nVidia graphics card that came with shutter glasses. The driver could do stereo for “everything” too; in my case, however, “everything” turned out to be the one game where I could get it to work.


  • I don’t really see a problem with it either. I pay more in some other countries too as a tourist. Here it’s framed as making tourists pay more, but it could also be framed as keeping the museum accessible for the local population, which doesn’t necessarily have the same budget for museums as an international tourist on the trip of a lifetime.

    But: tourists absolutely do pay taxes. There are accommodation taxes on hotel stays (in France these can run up to ~15 euros per person per night), they pay consumption taxes like VAT, and there are arrival, departure, and airport taxes.


  • > I don’t understand how people can look at the insane progress GPT has made in the last 3 years and just arbitrarily decide that this is its maximum capability.

    So it’s not entirely arbitrary. Part of it is probably that they’re not just looking at the progress, but also at systemic issues.

    For example, we know that larger models trained on more material are more powerful. That’s probably the biggest contributing factor to the insane pace at which they’ve developed. But we’re also at a point where AI companies are saying they’re running out of data. The models we have now are already trained on basically the entire open internet, plus a lot of non-public data. So we can’t expect their capabilities to keep scaling with more data unless we find ways to get humans to generate more of it. At the same time, the quality of data on the open internet is decreasing because more and more of it is generated by AI.
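
    For reference, the usual formalization of this is the “Chinchilla” scaling-law fit (Hoffmann et al., 2022), which models loss as a function of parameter count $N$ and training tokens $D$ roughly as:

    $$L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

    The constants $E$, $A$, $B$, $\alpha$, $\beta$ are fitted values from that paper, so I'm leaving them symbolic here. Both fraction terms shrink as you scale up, but if $D$ stops growing, the $B/D^{\beta}$ term puts a floor under the loss that no amount of extra parameters can remove.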

    On the other hand, making them larger also has physical requirements, above all power. We are already at a point where AI companies are buying nuclear power plants for their data centers, so scaling in this direction is close to its limit too. Building new nuclear power plants takes ages.

    A separate issue is that LLMs can’t learn: once trained, their weights are fixed, and new information only reaches them through the context window. They don’t have to be able to learn to be useful; obviously we can use the current ones just fine, at least for some tasks. But it is nonetheless something that limits the progress that’s possible for them.

    And then there is the whole AI bubble thing, the economic side, where we have an entire circular economy built on the idea that companies like OpenAI can spend billions on data centers. But they are losing money. Pretty much none of the AI companies are profitable other than the ones that only provide the infrastructure. Right now investors are scared enough of missing out on AGI to keep investing, but if they stopped, it would be over.

    And all of this is super fragile. The current big players are all using the same approach. If one company takes that next step and finds a better approach than transformer LLMs, the others are toast. Or if some Chinese company makes another breakthrough on energy usage. Or if there is a hardware breakthrough and the incentive to pay for hosted LLMs goes away. Basically, even progress can pop the bubble: if we can all run AI that does a good-enough job at home, the AI companies will never hit their revenue targets. Then the investment stops, and companies that bleed billions every quarter without investors backing them can die very quickly.

    Personally, I don’t think they will stop getting better right now. And even if the models themselves do stop, I’m not convinced we understand them well enough to have already exhausted the ways we can improve how we use them. But when people say this is the peak, they’re looking at the bigger picture: they argue that LLMs can’t get closer to human intelligence because, fundamentally, we have no way to make them learn, that the development model isn’t sustainable, and other reasons along those lines.