Asahi is awesome!
But this also proves that laptops outside the MacBook realm really need to improve. I wish there were a Linux machine with the hardware quality of a MacBook.
* x86 chips can surpass the M-series CPUs in multi-threaded performance, but are still lagging in single-threaded performance and power efficiency
* Qualcomm kinda fumbled the Snapdragon X Elite launch with nonexistent Linux support and shoddy Windows stability, but here's hoping that they "turn over a new leaf" with the X2.
Actually, some Snapdragon X Elite laptops do run Linux now, but performance is not great, as there were some weird regressions, and newer chips have caught up anyway [1].
On the build quality side, basically all PCs are still lagging behind Apple; e.g., yesterday's rant post about the Framework laptop [2] touched on a lot of important points.
Of course, there are the ThinkPads, which are still built decently but are quite expensive. Some of the Chinese laptops like the Honor MagicBooks could be attractive, and some Reddit threads confirm getting Linux working on them, but they are hard to get in the US. That said, at least many non-Apple laptops have decent trackpads and really nice screens nowadays.
[1] https://www.phoronix.com/review/snapdragon-x-elite-linux-eoy...
[2] https://news.ycombinator.com/item?id=46375174
I am giving my MacBook Air M2 15” to my wife and bought a Lenovo E16 with a 120 Hz screen to run Kubuntu last night. She needed a new laptop, and I have had enough of macOS and just need some stuff to work that will be easier on Intel and Linux. Also, I do bookwork online, so the bigger screen and dedicated numpad will be nice.
It reviews well and seems like good value for money with current holiday sales, but I don't expect the same hardware quality or portability, just a little more freedom. I hope I'm not too disappointed.
https://www.notebookcheck.net/Lenovo-ThinkPad-E16-G3-Review-...
I outfitted our 10-person team with the E16 G2 and it's been great.
Two minor issues: it's HEAVY compared to the T models.
Because of the weight, try not to walk around with the lid up while holding it by one of the front corners. I've noticed one of ours is kind of warped from being carried around the office that way.
That's great news, thanks. I got the Gen 3, so maybe there have been some improvements. Weight is OK as I really just move it around the house. I buy used Panasonics for the workshop.
Are you running Windows?
For those curious about the Alkeria line-scan camera, he wrote a blog post about 3D printing a lens mount, etc.: https://daniel.lawrence.lu/blog/2024-08-31-customizing-my-li...
Seems like a crazy hobby to me, though! Photography is inconvenient enough without having to make your own mounts and use an SDK to do it! History is filled with inconvenient hobbies, though.
I would agree with the sentiment about the lack of good bright screens for Lenovo's hacker laptops like the X1 Carbon.
>I am very impressed with how smooth and problem-free Asahi Linux is. It is incredibly responsive and feels even smoother than my Arch Linux desktop with a 16 core AMD Ryzen 7945HX and 64GB of RAM.
Hmmm, I still have an issue with the battery in sleep mode on the M1. It drains a lot of battery in sleep mode compared to macOS sleep.
You know what OS doesn't handle the notch? macOS. It happily throws the system tray icons right back there, with an obscure workaround to bring them back. Software quality at Apple these days…
A new Wayland protocol is in the works that should support screen cutout information out of the box: https://phosh.mobi/posts/xdg-cutouts/ Hopefully this will be extended to include color information whenever applicable, so that "hiding" the screen cutout (by coloring the surrounding area deep black) can also be a standard feature and maybe even be active by default.
You can't be serious. Wayland is the opposite of modular, and the concept of an extensible protocol only creates fragmentation.
Every compositor needs to implement the giant core spec, or, realistically, rely on a shared library to implement it for them. Then every compositor can propose and implement arbitrary protocols of their own, which should also be supported by all client applications.
It's insanity. This thing is nearly two decades old, and I still have basic clipboard issues [1]. This esoteric cutouts feature has no chance of seeing stable real-world use for at least a decade.
[1]: https://bugs.kde.org/show_bug.cgi?id=466041
Shh... you're not supposed to mention these things, lest you be downvoted to death.
I also have tremendous issues with Plasma: things such as graphics glitching in the alt+tab task switcher, or Firefox choking the whole system when opening a single 4K PNG image. This is pre-alpha software... So back to X11 it is. Try again in another decade or two.
YMMV and all, but my experience is that Wayland smoothness varies considerably depending on hardware. On modernish Intel and AMD iGPUs for example I’ve not had much trouble with Wayland whereas my tower with an Nvidia 3000 series card was considerably more troublesome with it.
Generally true, though this particular case is due to a single company deciding to not play ball and generally act in a manner that's hostile to the FOSS world for self-serving reasons (Nvidia).
I don't think it's even that. These bugs seem like bog-standard bugs related to correctly sharing graphics resources between processes and accessing them with proper mutual exclusion. Blaming NV is likely just a convenient excuse.
If my Ferrari has an issue with the brakes and I go to my dealer, I don't care if the brakes were made by Brembo.
Blaming the vendor and their drivers is just trying to shift the blame.
The thing is that I'm not experiencing this clipboard issue on Plasma, but on a fresh installation of Void Linux with niri. There are reports of this issue all over [1][2][3], so it's clearly not an isolated problem. The frustrating thing is that I wouldn't even know which project to report it to. What a clusterfuck.
I can't go back to X11 since the community is deliberately killing it. And relying on a fork maintained by a single person is insane to me.
[1]: https://old.reddit.com/r/hyprland/comments/1d4s9bw/ctrlc_ctr...
[2]: https://old.reddit.com/r/tuxedocomputers/comments/1i9v0n7/co...
[3]: https://old.reddit.com/r/kde/comments/1jl6zv7/why_does_copyp...
You can see the same problem in the XMPP world, with a lot of the extensions implemented only by a few applications. But at least most XMPP extensions are designed to be backwards-compatible with clients that don't support them.
Each controller and subcomponent on the motherboard needs a driver that correctly puts it into low power and sleep states to get battery savings.
Most of those components are proprietary and don't use the standard drivers available in the Linux kernel.
So someone needs to go and reverse engineer them, upstream the drivers, and pray that Apple doesn't change them in the next revision (which they did), or the whole process needs to start again.
In other words: get an actually Linux-supported laptop for Linux.
40% battery for 4 hrs of real work is better than pretty much any Linux-supported laptop I've ever used.
One of my favorite machines was the MacBook Air 11 (2012). This was a pure Intel machine, except for a mediocre Broadcom wireless card. With a few udev rules, I squeezed out the same battery performance from Linux I got from OS X, down to a few minutes of advantage in favor of Linux. And all this despite Safari being a marvel of energy efficiency.
The problem with Linux performance on laptops boils down to i) no energy tweaks by default and ii) poor device drivers due to the lack of manufacturer cooperation. If you pick a machine with well supported hardware and you are diligent with some udev rules, which are quite trivial to write thanks to powertop suggestions, performance can be very good.
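For example, a minimal sketch of the kind of rules powertop ends up suggesting (the device matches here are generic and may need adjusting for a given machine):

  # /etc/udev/rules.d/50-powersave.rules (illustrative)
  # Enable USB autosuspend wherever the kernel exposes runtime PM
  ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{power/control}="auto"
  # Enable runtime power management for PCI devices
  ACTION=="add", SUBSYSTEM=="pci", TEST=="power/control", ATTR{power/control}="auto"
  # Use a power-saving SATA link policy
  ACTION=="add", SUBSYSTEM=="scsi_host", KERNEL=="host*", ATTR{link_power_management_policy}="med_power_with_dipm"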
I am getting a bit more than 10 hours from a cheap ThinkPad E14 Gen 7 with a 64 Wh battery and light coding use. That's less than a MacBook Air, where I would be getting around 13-14 hours, but it's not bad at all. The difference comes mainly from the cheap screen, which consumes more power, and ARM's superior efficiency when idling.
But I prefer not to trade the convenience and openness of x86_64 plus NixOS for a bit more battery range. IMHO, the gap is not sufficiently wide to make a big difference in most usage scenarios.
The need to tweak that deeply just to get “baseline” performance really stings, though, particularly if you’re not already accustomed to having to do that kind of thing.
It'd be a gargantuan project, but there should probably be some kind of centralized, cross-distro repository for power configuration profiles that allows users to rate them with their hardware. Once a profile has been sufficiently user-verified and is well rated, distro installers could then automatically fetch and install the profile as a post-install step, making for a much more seamless and less fiddly experience for users.
It's generally the most optimized system, down to the fact that Apple controls everything about its platform.
If that's considered baseline, then nothing but full vertical integration can compete.
I see how the GP comment could be provocative, but on this site we want responses that dampen provocation, not amplify it.
Yet. Plenty of people have with Intel ones - I'm one of them. My first experience with Linux was on a 2016 MacBook Pro. And inevitably people will do the same with the Apple silicon Macs, likely using Asahi, it seems.
That's an admirable goal, but, depending on the hardware, it can run into that pesky thing called reality.
For a lot of people the point is to extend the life of their already-purchased hardware. Why are some of y'all so hostile to this idea?
If your vendor is hostile like Apple, it will be hard to keep it working.
It's getting very tiresome to hear complaints about things that don't work on Linux, only to find that they're trying to run it on hardware that's poorly supported, and that's something they could have figured out by doing a little research beforehand.
Sometimes old hardware just isn't going to be well-supported by any OS. (Though, of course, with Linux, older hardware is more likely to be supported than bleeding-edge kit.)
This is very true. I've been asked by lots of people "how do I start with Linux" and, despite being a 99.9% Linux user for everything every day, my advice was always:
1. Use VirtualBox. Seriously, it won't look cool, but it will 100% work after maybe 5 mins mucking around with installing guest additions. Also snapshots. Also no messing with WiFi drivers or graphics card drivers or such.
2. Get a used, beaten-down old ThinkPad that people on Reddit confirm works with Linux without any extra drivers. Then play there. If it breaks, reinstall.
3. If the above hasn't made you lose interest yet, THEN dual boot.
Also, if you don't care about the GUI, then use the best blessing Microsoft ever created - WSL - and look no further.
I've never gotten along too well with virtualization, but would second the ThinkPad idea, or something similar. Old/cheap machine for tinkering is a good way to ease in, and I think bare metal feels more friendly.
I'd probably recommend against dual booting, but I understand it's controversial. I like to equate it to having two computers, but having to fully power one off to do anything* on the other one. Torrents stop, music collection may be inaccessible depending on how you stored it, familiar programs may not be around anymore. I dual booted for a few years in the past and I found it miserable. People who expected me to reboot to play a game with them didn't seem to understand how big of an ask that really was. Eventually things boiled over and I took the Windows HDD out of that PC entirely. Much more peaceful. (Proton solves that particular issue these days also)
That being said, I've had at least two friends who had a dual boot due to my influence (pushing GNU/Linux) who ended up with some sort of broken Windows install later on and were happy to already have Ubuntu as an emergency backup to keep the machine usable.
*Too old might be a problem these days with major distros not having 32-bit ISOs anymore
2. If your priority is system lifespan, you are already using OEM macOS.
In every thread about Linux, someone inevitably says “it gave new life to my [older computer model].” We've all seen it countless times.
I've tried this once for IntelliJ to work around slow WSL access for Git repos. Was greeted by missing fonts and broken scaling on the intro screen. Oops. But probably I was just unlucky, it might work well for most.
For optimal battery life you need to tweak the whole OS stack for the hardware. You need to make sure all the peripherals are set up right to go into the right idle states without causing user-visible latency on wake-up. (Note that often just one peripheral being out of tune here can mess up the whole system's power performance. Also the correct settings here depend on your software stack). You need to make sure that cpufreq and cpuidle governors work nicely with the particular foibles of your platform's CPUs. Ditto for the task scheduler. Then, ditto for a bunch of random userspace code (audio + rendering pipeline for example). The list goes on and on. This work gets done in Android and ChromeOS.
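To make that concrete, here is roughly where those knobs live on a typical Linux box (standard sysfs paths; what is actually available varies by platform):

  # Current frequency-scaling governor for one core
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  # Which idle states exist and how often they are actually entered
  grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name
  grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/usage
  # Switch every core to the power-saving governor (if the driver supports it)
  echo powersave | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor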
Eh it's pretty awful. I get 8 hours, yes, but in Linux, those 8 hours are ticking whether my laptop is sleeping in my bag or on my desk with the lid closed or I'm actively using it. 8 hours of active use is pretty good, but 8 hours in sleep is absolutely dreadful.
Exactly. This myth keeps being perpetuated, for some reason.
I'm typing this from a ThinkPad X1 Carbon Gen 13 running Void Linux, and UPower is reporting 99% battery with ~15h left. I do have TLP [1] installed and running, which is supposed to help. Realistically, I won't get 15h with my usage patterns, but I do get around 10-12 hours. It's a new laptop with a fresh battery, so that plays a big role as well.
This might not be as good as the battery life on a MacBook, but it's pretty acceptable to me. The upcoming Intel chips also promise to be more power efficient, which should help even more.
[1] https://wiki.archlinux.org/title/TLP
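For anyone who wants to pull the same numbers on their own machine, both tools have simple CLIs (the battery device name may differ):

  # What UPower reports for charge level and time remaining
  upower -i /org/freedesktop/UPower/devices/battery_BAT0
  # TLP's battery report, including wear and charge thresholds
  sudo tlp-stat -b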
Apple does tons of optimizations for every component to improve battery life.
Asahi Linux, which is reverse engineered, doesn't have the resources to figure out each of those tricks, especially for undocumented proprietary hardware, so it's a "death by a thousand cuts" as each of the various components is always drawing a couple of milliwatts more than on macOS.
This doesn't match my experience. My previous three laptops (two AMD Lenovo Thinkpads, one Intel Sony VAIO) had essentially the same battery life running Linux as running Windows.
Why? Lots of people more or less use their computer as a glorified web browser, with some Zoom calls and document editing thrown in for good measure. 256GB seems overkill. My girlfriend is somehow still rocking a 2011 MacBook Air. She mostly just uses it for internet banking and managing her finances. Why would she want more than 256GB?
I think they mean that in 2025, 256GB is unreasonably small. Which is true; Apple wants to up-charge hundreds of dollars just to get to the otherwise-standard 1TB drive.
Realistically, it is reasonable to expect 2TB drives, based on normal progression: https://blocksandfiles.com/2024/05/13/coughlin-associates-hd...
From a supply perspective, 256GB seems ridiculous because you can get way more capacity for not very much money, and because 256GB is now nowhere close to enough flash chips operating in parallel to reach what is now considered high performance.
But from a demand perspective, there are a lot of PC users for whom 256GB is plenty of capacity and performance. Most computers sold aren't gaming PCs or professional workstations; mainstream consumer storage requirements (aside from gaming) have been nearly stagnant for years due to the popularity of cloud computing and streaming video.
Asahi is all reverse engineering. It's nothing short of a miracle what has already been accomplished, despite, not because of, Apple.
That said, some of the prominent developers have left the project. As long as Apple keeps hoarding their designs, it's going to be a struggle, even more so now.
If you care about FOSS operating systems or freedom over your own hardware, there isn't a reason to choose Apple.
To be clear, the work the Asahi folks are doing is incredible. I’m ashamed to say sometimes their documentation is better than the internal stuff.
I've heard it's mostly because there wasn't an M3 Mac mini, which is a much easier target for CI since it isn't a portable. Also, there have been a ton of hardware changes internally between M2 and M3. M4 is a similar leap: more coprocessors, more security features, etc.
For example, PPL was replaced by SPTM, plus all the exclave magic: https://randomaugustine.medium.com/on-apple-exclaves-d683a2c...
As always, opinions are my own.
This is what ruffles my jimmies about this whole thing:
> I’m ashamed to say sometimes their documentation is better than the internal stuff.
The reverse engineering is a monumental effort, this Sisyphean task of trying to keep up with never-ending changes to the hardware. Meanwhile, the documentation is just sitting there in Cupertino. An enormous waste of time and effort from some of the most skilled people in the industry. Well, maybe not so much anymore since a bunch of them left.
I really hope this ends up biting Apple in the ass instead of protecting whatever market share they are guarding here.
I strongly support a project's stance that you shouldn't ask when it will be done. But the time between the M1 launch and a good experience was less than the time that has passed since the M3 launch; I would love to know what is involved.
https://lore.kernel.org/asahi/20251215-macsmc-subdevs-v6-4-0...
That's an email from James Calligeros. All this patch says is that the author is Hector Martin (and Sven Peter). The code could have been written a long time ago.
The new project leadership team has prioritized upstreaming the existing work over reverse engineering on newer systems.
> Our priority is kernel upstreaming. Our downstream Linux tree contains over 1000 patches required for Apple Silicon that are not yet in upstream Linux. The upstream kernel moves fast, requiring us to constantly rebase our changes on top of upstream while battling merge conflicts and regressions. Janne, Neal, and marcan have rebased our tree for years, but it is laborious with so many patches. Before adding more, we need to reduce our patch stack to remain sustainable long-term.
https://asahilinux.org/2025/02/passing-the-torch/
For instance, in this month's progress report:
> Last time, we announced that the core SMC driver had finally been merged upstream after three long years. Following that success, we have started the process of merging the SMC’s subdevice drivers which integrate all of the SMC’s functionality into the various kernel subsystems. The hwmon driver has already been accepted for 6.19, meaning that the myriad voltage, current, temperature and power sensors controlled by the SMC will be readable using the standard hwmon interfaces. The SMC is also responsible for reading and setting the RTC, and the driver for this function has also been merged for 6.19! The only SMC subdevices left to merge is the driver for the power button and lid switch, which is still on the mailing list, and the battery/power supply management driver, which currently needs some tweaking to deal with changes in the SMC firmware in macOS 26.
> Also finally making it upstream are the changes required to support USB3 via the USB-C ports. This too has been a long process, with our approach needing to change significantly from what we had originally developed downstream.
https://asahilinux.org/2025/12/progress-report-6-18/
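For those unfamiliar, "standard hwmon interfaces" means those SMC sensors show up under sysfs like on any other supported machine, e.g. (sensor indices are illustrative):

  # List hwmon devices and the drivers behind them
  grep . /sys/class/hwmon/hwmon*/name
  # Read a temperature (millidegrees C) and a voltage (millivolts)
  cat /sys/class/hwmon/hwmon0/temp1_input
  cat /sys/class/hwmon/hwmon0/in0_input
  # Or use lm-sensors, which reads the same interface
  sensors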
Very little progress was made this year after high-profile departures (Hector Martin, the project lead, plus Asahi Lina and Alyssa Rosenzweig, the GPU gurus). Alyssa's departure isn't reflected on Asahi's website yet, but it is in her blog (https://rosenzweig.io/blog/asahi-gpu-part-n.html). I believe she also left Valve, which I think was sponsoring some aspects of the Asahi project. So when people say "Asahi hasn't seen any setbacks", be sure to ask them who has stepped in to make up for these losses in both talent and sponsorship.
Asahi Lina, who also did tons of work on the Asahi Linux GPU development, also quit, as she doesn't feel safe doing Linux GPU work anymore [1].
[0] https://marcan.st/2025/02/resigning-as-asahi-linux-project-l...
[1] https://asahilina.net/luna-abuse/
They are more common than you would think. There just are not many willing to work on a shoestring salary.
I have no insight into the Asahi project, but the LKML link goes to an email from James Calligeros containing code written by Hector Martin and Sven Peter. The code may have been written a long time ago.
Without official support, the Asahi team needs to RE a lot of stuff. I'd expect it to lag behind by a couple of generations at least.
Stop buying Apple laptops to run Linux.
I blame Apple for pushing out new models every year. I don't get why it does that. An M1 is perfectly fine after a few years, but Apple treats it like an iPhone. I think one new model every 2-3 years is good enough.
The M1 is indeed quite adequate for most, but each generation has brought substantial performance boosts in single-threaded, multi-threaded, and, with the M5 generation in particular, GPU-bound tasks. These advancements are required to keep pace with the industry and, in a few respects, stay ahead of competitors; plus, there are high-end users whose workloads greatly benefit from these performance improvements.
I agree. But Apple doesn't sell new M1 laptops anymore, AFAIK. There are some refurbished ones, but most likely I'd need to go into a random store to find one. I only saw M4 and M5 laptops online.
That's why I don't like it as a consumer. If they kept producing the M1 and M2, I'd assume we could get better prices because the total quantity would be much larger. Sure, it is probably better for Apple to move forward quickly, though.
In the US, Walmart is still selling the M1 MacBook Air new, for $599 (and has been discounted to $549 or better at times, such as Black Friday).
In general, I don't think it's reasonable to worry that Apple's products aren't thoroughly achieving economies of scale. The less expensive consumer-oriented products are extremely popular, various components are shared across product lines (eg. the same chip being used in Macs and iPads) and across multiple generations (except for the SoC itself, obviously), and Apple rather famously has a well-run supply chain.
From a strategic perspective, it seems likely that Apple's long history of annual iteration on their processors in the iPhone, and their now well-established pattern of updating the Mac chips less often but still frequently, is part of how Apple's chips have been so successful. Annual(ish) chip updates with small incremental improvements compound over the years. Compare Apple's past decade of chip progress against Intel's troubled past decade of infrequent technology updates (when you look past the incrementing of the branding), uneven improvements, and some outright regressions in important performance metrics.
> That's why I don't like it as a consumer. If they kept producing the M1 and M2, I'd assume we could get better prices because the total quantity would be much larger.
Why would this be true? An M5 MacBook Air today costs the same as an M1 MacBook Air cost in 2020 or whenever they released it, and is substantially more performant. Your dollar per performance is already better.
If they kept selling the same old stuff, then you'd spread production across multiple different nodes and the pricing would be inherently worse.
If you want the latest and greatest you can get it. If an M1 is fine you can get a great deal on one and they’re still great machines and supported by Apple.
The author mentions he paid $750 for a MacBook Air M2 with 16GB, while on Amazon an M4 Air with 16GB is usually $750-800. I get that the M4/M3 aren't supported to boot Asahi yet, but still.
I really wanted this to work, and it WAS remarkably good, but palm rejection on the (ginormous) Apple trackpad didn't work at all, rendering the whole thing unusable if you ever typed anything.
That was a month ago, this article is a year old. I'd love to be wrong, but I don't think this problem has been solved.
Yeah, what is up with that? When I've tried to look into it, I've just been met with statements that palm rejection should pretty much just work, but it absolutely doesn't, and accidental inputs are so bad it's unusable without a disable/enable trackpad hotkey.
All Firefox users should switch to LibreWolf. In the short term it's for telling Mozilla to go f**, in the long term it's a browser fork with really good anti-fingerprinting.
Note that LibreWolf relies on Mozilla's tech infra for account synchronization and plugin distribution. If you are truly hostile to this organization, is there another browser you can recommend?
I've got a few ideas