  • The thing is - Wayland does kind of prevent it, by forcing the GPU into the rendering pipeline far harder than Xorg does. The GPU assumptions throughout the code base(s) make latency shoot through the roof when running software rendered. If you want decent latency, you need a GPU, and if you want to run multiuser, you are going to pay Nvidia a shitton of money.

    I can also imagine it’s hard (impossible?) to do performant damage tracking in a VNC server without implementing at least parts of the VNC server inside the compositor. This means the compositor and VNC server get tightly coupled by necessity. Choice will be limited. Would you like the bad DE with the good VNC server, or the good DE with the bad VNC server? Bad damage tracking means shit latency and high bandwidth usage, or other tradeoffs. So even if someone managed to implement what I want on Wayland, it would most likely be limited to a single compositor and not a general solution allowing a free choice of compositor.
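    To make that concrete, here is a toy sketch (my own illustration, not code from any actual compositor or VNC server) of the dirty-rectangle bookkeeping a compositor has to expose before a remote-display encoder can be efficient:

    ```c
    /* Toy damage tracker -- illustrative only, not from a real compositor.
     * The compositor records which rectangles changed since the last
     * encoded frame; the remote side then encodes only that region
     * instead of diffing or resending whole frames. */
    #include <stdio.h>

    typedef struct { int x0, y0, x1, y1; } rect;  /* corners, x0<=x1, y0<=y1 */

    /* Merge new damage into a single bounding box. Real trackers keep a
     * list of disjoint rects, so a blinking cursor in one corner doesn't
     * force re-encoding the whole screen. */
    static rect merge(rect a, rect b)
    {
        rect r = {
            a.x0 < b.x0 ? a.x0 : b.x0,
            a.y0 < b.y0 ? a.y0 : b.y0,
            a.x1 > b.x1 ? a.x1 : b.x1,
            a.y1 > b.y1 ? a.y1 : b.y1,
        };
        return r;
    }

    int main(void)
    {
        rect damage = {100, 100, 120, 120};                /* cursor moved */
        damage = merge(damage, (rect){300, 50, 400, 80});  /* titlebar redraw */
        printf("encode region: (%d,%d)-(%d,%d)\n",
               damage.x0, damage.y0, damage.x1, damage.y1);
        return 0;
    }
    ```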

    The best software suite I know of for this is Cendio ThinLinc, built on top of TigerVNC. Free for up to 5 users. There are some others in the same niche. My recommendation would be to try ThinLinc on Rocky 9 or Ubuntu 24, and configure it to use XFCE. MATE, KDE, or Cinnamon all work fine too. Turn off compositing! Over a good WAN link it feels mostly local, unless you’re playing fullscreen video. On a LAN link, the only thing giving it away is extra tearing and compression artifacts when playing YouTube videos fullscreen. Compared to many other solutions I have tried, the latency and “immersion” are incredible.

    As for me, I’ll try to never manage Linux desktop fleets or remote desktops again.


  • enumerator4829@sh.itjust.works to Linux@lemmy.ml · *Permanently Deleted* · 16 days ago

    What I’ve seen of RustDesk so far is that it’s absolutely not even close to the options available for X. It replaces TeamViewer, not thin clients.

    You would need the following to get viability in my eyes:

    • Multiple users per server (~50 users)
    • Enterprise SSO authentication, working Kerberos on the desktop
    • Good and easily deployable native clients for Windows, Linux and Mac, plus an HTML5 client
    • Performant headless software rendered desktops
    • GPU acceleration possible but not required
    • Clustering, HA control plane, load balancing
    • Configuration management available

    This isn’t even an edge case. Current and upcoming regulations on information security drag the entire industry this way. Medical, research, defence, banking - basically every regulated field gets easier to work in when you go down this route. Close to zero worries about endpoint security. Microsoft is working hard on this. It’s easy to do with X. And the best thing on Wayland is RustDesk? As stated earlier, these issues were brought up and discarded as FUD back in 2008, and here we are.

    Wayland isn’t a better replacement; after 15 years it’s still not a replacement at all. The Wayland implementations certainly haven’t been rushed, but the architecture was. At this point, fucking Arcan will be viable before Wayland.


  • Exactly my point. The issues people consider “solved” with Wayland today will be solved in production in 3-5 years.

    People are still running RHEL 7, and Wayland in RHEL 9 isn’t that polished. In 4-5 years, when RHEL 10 lands, it might start to be usable. Oh right, then we need another few years for vendors to port garbage software that’s absolutely mission critical, barely works on Xorg, and sure as fuck won’t work in XWayland. I’m betting several large RHEL clients will either remain on RHEL 8 far past EOL or just switch to alternative distros.

    Basically, Xorg might be dead, but in some (paying commercial) contexts, Wayland won’t be a viable option within the next 5-10 years.



  • Please note that the nominal FLOP/s figures from both Nvidia and Huawei are kinda bullshit. The precision you run at greatly affects that number. Nvidia’s marketing nowadays refers to fp4 tensor operations, while traditionally FLOP/s were measured with fp64 matrix-matrix multiplication. That’s a lot more bits per FLOP.

    Also, that GPU-GPU bandwidth is kinda shit compared to Nvidia’s marketing numbers, if I’m parsing it correctly (NVLink is 18x 10 GB/s links per GPU - big ’B’ in GB). I might be reading the numbers incorrectly, but anyway. How (and whether) they manage multi-GPU cache coherency will be interesting to see. Both Nvidia and AMD have cache coherency, to varying degrees, in those settings. Developer experience matters…
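    The same “read the units carefully” caveat applies to both the FLOP/s and the bandwidth numbers. As a back-of-envelope check - taking the per-link figure above at face value, which I’m explicitly not vouching for:

    ```c
    /* Back-of-envelope interconnect math. The link count and per-link
     * rate below just restate the reading in the comment above -- treat
     * them as placeholders, not datasheet values. The point: the same
     * fabric quoted in Gb/s (bits) looks 8x bigger than in GB/s (bytes). */
    #include <stdio.h>

    int main(void)
    {
        int    links       = 18;    /* links per GPU, as parsed above */
        double gb_per_link = 10.0;  /* GB/s per link, as parsed above */

        double total_GBps = links * gb_per_link;
        double total_Gbps = total_GBps * 8.0;   /* bytes -> bits */

        printf("aggregate per GPU: %.0f GB/s = %.0f Gb/s\n",
               total_GBps, total_Gbps);
        return 0;
    }
    ```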

    Now, the really interesting things are power draw, density, and price. Power draw and price obviously influence TCO. On 7nm, I’d guess the power bill won’t be fun to read, but that’s just a guess. Density influences the network options - are DAC cables viable at all, or is it (more expensive) optics all the way?


  • There is actually less to ‘xkill’ than you might think. It nukes the X window from orbit in a very violent manner. The owning process(-tree) will usually just instantly curl up and die.

    The main benefit is that it doesn’t actually kill the process, it only nukes the window. As such, you can get rid of windows belonging to otherwise unkillable processes (zombies, etc).

    Also, it’s fun. Just don’t miss the window and accidentally kill your WM. (Beat that, Wayland.)
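    For the curious, the core of what xkill does boils down to a single Xlib call. A minimal sketch - with the window ID passed by hand instead of xkill’s interactive picker:

    ```c
    /* Minimal xkill-alike: ask the X server to kill the client that owns
     * a given window. This nukes the client's X connection, not the
     * process itself -- which is exactly why it works on otherwise
     * unkillable processes. Build: cc xkill_min.c -lX11 */
    #include <X11/Xlib.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <window-id>\n", argv[0]);
            return 1;
        }
        Display *dpy = XOpenDisplay(NULL);   /* connect to $DISPLAY */
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }
        /* Window ID e.g. from xwininfo; base 0 accepts 0x... hex form. */
        Window w = (Window)strtoul(argv[1], NULL, 0);
        XKillClient(dpy, w);                 /* the whole trick */
        XCloseDisplay(dpy);
        return 0;
    }
    ```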




  • enumerator4829@sh.itjust.works to Linux@lemmy.ml · *Permanently Deleted* · 17 days ago

    Now consider that most enterprises are about five years behind that. It takes a few years before what’s available in Fedora trickles down to RHEL, and a few more years before it’s rolled out to clients. Ubuntu is on a similar timeline.

    The fixes you got two years ago might not be rolled out in these places for another three years. Oh, and these are the people forking up much of the money for the Wayland development efforts. The current state of Wayland, if you pay for it, is kinda meh.


  • enumerator4829@sh.itjust.works to Linux@lemmy.ml · *Permanently Deleted* · 17 days ago

    I’ll bite. It’s getting better, but there’s still a long way to go.

    • No commercially viable remote desktop or thin client solutions. I’m not talking about just VNC - take a look at, for example, ThinLinc to see what I’m looking for: a complete solution. (Also, it took like ten rough years before even basic unencrypted single-user VNC was available at all.) Free multimillion-dollar business idea right here, folks!
    • Related to the above point - software-rendered Wayland is painful. To experience this yourself, install any distro in VirtualBox or VMware or whatever, and compare the usability of an Xorg DE (with compositing turned off) against the same DE on Wayland. Just look at the click-to-photon latency and weep. I’ve seen X11 perform better over VNC across a WAN.
    • “We don’t need network transparency, VNC will save us.” See the points above.
    • “Every frame is perfect” went just about as well as could be expected - there’s a reason VSYNC is an option in games and professional graphics applications. Thanks, Valve.
    • I’m assuming wlroots still doesn’t work on Nvidia, that the GNOME/KDE implementations are still a hodgepodge, and that Nvidia will still ask me to install the supported Xorg drivers. If I’m wrong, it only took a decade or so to get a desktop working on hardware from the dominant GPU vendor. (Tangentially related: historically the only vendor with product lines specifically for serving GPU-accelerated desktops to thin clients.)
    • After over a decade of struggles, we can finally (mostly) share our screens in Zoom. Or so I’m told.

    But what do I know - I’ve only deployed and managed desktop Linux for a few thousand people. People were screaming about these design flaws back in 2008 when this all started. The criticisms above were known and dismissed as FUD, and here we are. A few architectural changes back then, and we could have finished this migration a decade sooner. Just imagine: screen sharing during the pandemic!

    As an example, see Arcan, a small research project with an impressively large subset of features from both X11 and Wayland (including working screen sharing, network transparency, and a functioning security model). I wouldn’t use it in production, but if it were more than one guy in a basement working on it, it would probably become very usable fairly fast - compared to the decade and a half that Red Hat and friends have poured into Wayland thus far. Using a good architecture from the start would have done wonders. And Wayland isn’t even close to a good architecture; it’s just what we have to work with now.

    Hopefully Xorg can die at some point, a decade or so from now. I’m just glad I don’t work with desktops anymore; the swap to Wayland will be painful for a lot of organisations.






  • Here be dragons. But basically:

    • Run a VM from the contents of a physical disk: use ‘dd’ to create a disk image (see the sketch below). If it’s Linux, try to boot it and fix all the errors - hopefully there are few.

    • Run a VM as a physical machine: the same procedure, the other way around.

    You won’t find this in a tutorial. You need to understand the concepts, read the manuals, fit everything together, execute, fail, and retry until it works.
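    As a sketch of the first bullet - illustrative C standing in for the usual ‘dd’ invocation; the device path and image name are placeholders, and the source disk should not be mounted while you copy it:

    ```c
    /* Illustrative stand-in for the 'dd' step above: stream a raw block
     * device into a flat image file that a VM can boot. /dev/sdX and
     * disk.img are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *src = "/dev/sdX";   /* placeholder: the physical disk */
        const char *dst = "disk.img";   /* raw image, usable by QEMU etc. */

        int in  = open(src, O_RDONLY);
        int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) { perror("open"); return 1; }

        static char buf[1 << 20];       /* 1 MiB chunks, like dd bs=1M */
        ssize_t n;
        while ((n = read(in, buf, sizeof buf)) > 0)
            if (write(out, buf, (size_t)n) != n) { perror("write"); return 1; }
        if (n < 0) { perror("read"); return 1; }

        close(in);
        close(out);
        return 0;
    }
    ```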

    For Windows, I have no idea. Conceptually, I figure it’s similar.






  • You assume a uniform distribution. I’m guessing it’s not. The question isn’t “does the model contain compressed representations of all works it was trained on”. Retaining enough information about any single image is enough to be a copyright issue.

    Besides, the situation isn’t as obviously flawed with image models as it is with LLMs. LLMs are just broken in this regard, because it only takes a handful of bytes being retained to violate copyright.

    I think there will be a “find out” stage fairly soon. Currently, the US projects a lot of soft power on the rest of the world to enforce copyright terms favourable to Disney and friends. Accepting copyright violations for AI will erode that power internationally over time.

    Personally, I do think we need to rework copyright anyway, so I’m not complaining that much. Change the law - go ahead and make the high seas legal. But under current copyright law, most large datasets and most models constitute copyright violations. Just imagine the shitshow if OpenAI were a European company training on material from Disney.