This is an outstanding blog post. Initially, the title did little to captivate me, but the blog post was so well written that I got nerd-sniped. Who knew this little adapter was so fascinating! I wonder if the manufacturer is buying the Mellanox cards used from data center tear-downs. The author claims they can be had for only 20 USD online. That seems too good to be true!
Small thing: I just checked Amazon.com: https://www.amazon.com/s?k=thunderbolt+25G&crid=2RHL4ZJL96Z9...
I cannot find anything for less than 285 USD. The blog post gave a price of 174 USD. I have no reason to disbelieve the author, but it's a bummer to see the current price is 110 USD more!
Thank you!
I think, tragically, the blog post has caused this price increase.
The offers on Amazon are most likely all drop shippers trying to gauge a price that works for them.
You might have better luck ordering directly from China for a fraction of the price: https://detail.1688.com/offer/836680468489.html
I saw the blog post last week and immediately bought the last one on that Amazon listing for the original price... hopefully they restock soon!
I'm going to try a couple other fan assisted cooling options, as I'd like to keep the setup reasonably compact.
I just ran fiber to my desk and I have a more expensive QNAP unit that does 10G SFP+, but this will let me max out the connection to my NAS.
Be sure to test this adapter on iPad Pro, just for kicks (yes it works!)
Although I managed to panic the kernel a couple of times without the extra heatsinks on...
I believe the author is talking about the OCP (2.0) network card itself, which these adapters use internally. The OCP NICs are quite cheap compared to PCIe cards - here's 100GbE for $100! https://ebay.us/m/HMQAph
This 100GbE card is an OCP 2.0 type 2 adapter, which will _probably_ not work with the PX PCB, since that NIC has two of these mezzanine connectors and the PX has only one.
What also may not work are Dell rNDC cards. They look like they have OCP 2.0 type 1 connectors, but may not quite fit (please correct me if I'm wrong). They do however have a nice cooling solution, which could be retrofitted to one of the OCP 2.0 cards.
I've also ordered a Chelsio T6225-OCP card out of curiosity. It should fit in the PX adapter but requires a third-party driver on macOS (which then supports jumbo frames, etc.)
What also fits physically is a Broadcom BCM957304M3040C, but there are no drivers on macOS, and I couldn't get the firmware updated on Linux either.
That's a good point! I think the stacking height would matter, but in theory the single connector is still PCIe x8 and should link up without the upper x8 lanes connected.
Spec for reference, I’m not 100% sure. https://docs.nvidia.com/nvidia-connectx-5-ethernet-adapter-c...
You can get a normal 100Gb PCIe card, like an MCX416A, for less than $100 if you're willing to flash them.
$285 is still an AMAZING price for 25GbE over TB4. I paid $200 for the Sonnet TB4 10GbE adapter.
Note that you can do point-to-point network links directly with thunderbolt (and usb4).
https://support.apple.com/guide/mac-help/ip-thunderbolt-conn... etc
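On the Linux side, this is a hedged sketch of what that looks like with the thunderbolt-net driver (the interface name and addresses are illustrative; check `ip link` for the real name):

```
# load the Thunderbolt networking driver (often auto-loaded)
sudo modprobe thunderbolt_net

# the link usually shows up as thunderbolt0 once both ends are connected
ip link

# assign a point-to-point subnet; the peer gets 10.42.0.2/30
sudo ip addr add 10.42.0.1/30 dev thunderbolt0
sudo ip link set thunderbolt0 up

# then benchmark, e.g. with iperf3 against the peer
iperf3 -c 10.42.0.2
```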
Yes! However, I got around 15 Gbps with a Thunderbolt-only setup (TB3/TB4), only 75% of the Ethernet setup.
You'd also mostly be limited to short cables (1-2m) and a ring topology.
Any idea why that would be the case?
The Linux implementation is quite poor. Among other issues - and this is your answer - Linux treats it as a plain TCP/IP link, so packets don't get any checksum offloading, etc. It's also incomplete (e.g., you have to physically unplug and replug the cable every time one side loses link - even after a reboot).
This is my firsthand experience from trying to get some tablet motherboards to link up and work as a Proxmox cluster with TB3 as the link between nodes.
I have a 3-node Proxmox setup on MS-01s using a 25G Thunderbolt ring for Ceph, and indeed it took jumping through a lot of hoops to get it working correctly and reliably. I did manage to get it such that nodes can go up and down without needing to unplug anything, and the dynamic routing works if a node disappears. Performance is pretty good, at a more realistic 20-ish Gbit/s.
Ha! Been running these for years on both Linux and Windows (on Lenovo X1 laptops), using cheap Chinese Thunderbolt-to-NVMe adapters + NVMe-to-PCIe boards + Mellanox CX4 cards (recently got a CX5 and a Solarflare X2).
Pic of a previous cx3 (10 gig on tb3) setup: https://habrastorage.org/r/w780/getpro/habr/upload_files/d3c...
10gig can saturate the link at full speed; 25G in my experience rarely reaches the same ~20G the author observed.
If you don’t mind me asking, what are you using these for? Saturating these seems like it would have reasonably few workloads outside of like cdn or multi-tenant scenarios. Curious what my lack of imagination is hiding here.
Officially: to access the NAS and grab raw market-data files (tens to hundreds of gigabytes a day) - not needed on the laptop every day, only once in a while to fix or analyze something.
Really: because I can, and it is fun. I upgraded my home LAN to 10G because used 10G hardware is cheap (and now 25G is entering the same price range).
"because I can, and it is fun." The best answer! I am most of the way done with upgrading most of my homelab to 100G from 10G, but there really isn't a practical reason for it. 100G has dropped in price so much as datacenters are all about 400/800G now.
Nice! Cool to hear from a fellow admirer of overkill-lan setups ;)
Which cards do you prefer for 100G, and what is the situation with dacs/optics?
I do media production, and sometimes move giant files (like GGUFs) around my network, so 25 Gbps is more useful than 10 Gbps, if it's not too expensive.
I'm surprised you are only getting 20 Gbit/s. I did not expect PCIe to be the limiting factor here. I've got a 100gbit CX4 card currently in a PCIe3 x4 slot (for reasons, don't judge) and it easily maxes that out. I would have expected the 25G CX4 cards to at least be able to get everything out of it. RDMA is required to achieve that in a useful way, though.
Edit: forgot it isn't "true" PCIe but tunneled.
The limitation is Thunderbolt (32 Gbps theoretical limit for PCIe 3 tunneling).
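Back-of-the-envelope, for anyone checking: PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding, so x4 gives 4 × 8 × 128/130 ≈ 31.5 Gbit/s raw; subtract PCIe packet framing and Thunderbolt tunneling overhead and the ~20 Gbit/s people measure is roughly what's left.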
Thunderbolt is basically external PCIe, so this is not so surprising. High speed NICs do consume a relatively large amount of power. I have a feeling I've seen that logo on the board before.
I don't know how to measure the direct power impact on a MacBook Pro (since it's got a battery), but the typical power consumption of these cards is 9 W, not much more than Aquantia 10 GBit cards.
Also, if you remember where you saw that logo, please let me know!
JFYI, for measuring power draw, you might be able to use `macmon`[0] to see the total system power consumption. The values reported by the internal current sensor seem to be quite accurate.
[0] https://github.com/vladkens/macmon
Speaking of hardware, the RTL8127 (10Gbps) hit the market late last year and is said to consume only about 2–3 W. It apparently runs very cool compared to older chips. (Though it would need to be bonded to reach 25Gbps ;-)
I got me one of these adapters (RTL8127AF TXA403, with SFP+ cage); I haven't properly benchmarked it yet.
There's no driver support on macOS, and for Linux you'd need a bleeding-edge kernel. Just physically connecting it (along with a connected SFP28 transceiver) to my Mac's Thunderbolt port using an external PCIe-to-TB adapter, macmon reports a power draw of around 4.3 W, so it's not significantly less for half the bandwidth, but the card doesn't get hot at all.
Very nice tip, thank you!
I measure around +11W idle. While running a speed test, I read ca. +15W.
Thanks for the measurements! 15W under load definitely justifies those massive heatsinks.
I’m looking forward to your writeup on the RTL8127AF as well. Your blog is awesome!
Plus 1-2.5 W per active cable. You need the heatsinks, as the CX4 cards expect active airflow, and so do active transceivers.
I have a 10gbit dual-port card in a Lenovo mini PC. There is no normal way to get any heat out of there, so I put a small 12 V radial fan in there to help. It works great at 5 V: silent and cool. It is a fan, though, so it might not suit your purpose.
Do you mean active Thunderbolt cable? Short Thunderbolt cables (0.8m) are passive.
https://www.reddit.com/r/UsbCHardware/comments/y5uokj/commen...
The PCI-E logo or the “octopus in a chip” logo? I’m more interested in the latter.
I used to have an SFP28 Mellanox card in my home server, but went back to a simple 2.5G Ethernet port for the LAN side. The Mellanox card ran hot and needed an extra fan near it to dissipate the heat. It was cool but there was no real benefit other than occasionally when transferring some large files.
Until motherboards include SFP ports, it's probably not worth the effort at all in a home setting; external adapters like the one presented here are unreliable and add several ms of latency.
> Until motherboards include SFP ports […]
A micro-ATX motherboard with on-board 2xSFP28 (Intel E810):
* https://download-2.msi.com/archive/mnu_exe/server/D3052-data...
* https://www.techradar.com/pro/this-amd-motherboard-has-a-uni...
Yep, these cards need a fan (or any kind of directed air flow).
Where did you get the "several ms of latency" figure from? I have not measured an external card, but maybe I should... because the cards themselves have latency in the range of microseconds, not milliseconds.
I haven't tested this particular Thunderbolt SFP adapter, but my experience with a TP-Link 1Gbps USB adapter is that it adds about 4 ms of latency. Far from unusable, and perhaps similar to WiFi, but worse than PCIe cards, which should be <1 ms.
For reference, I'm seeing pings from my Mac to my Linux boxes (Lenovo Tiny5) at well under 1ms, not much worse than between them directly. But yeah, your mileage may vary.
Any idea why ethernet stagnated in terms of speed? There was a time it was so much faster compared to usb. Now even wifi seems to be faster.
Sure one can buy nice ethernet cards and cables, but the reality is that if you grab a random laptop/desktop from best buy and a cable, you are looking at best at a 2.5Gb/s speed.
The new low-power Realtek chipsets will definitely push 10 GbE forward because the chipset won't be much more expensive to integrate and run than the 2.5Gbps packages.
It all comes down to performance per Watt, the availability of cheap switching gear, and the actual utility in an office / home environment.
For 10 Gbps, cabling can be an issue. Existing "RJ45"-style Cat 6 cables could still work, but maybe not all of them.
Higher speeds will most likely demand a switch to fiber (for anything longer than a few meters) or Twinax DAC (for inter-device connects). Since Wifi already provides higher speeds, one may be inclined to upgrade just for that (because at some point, Wireless becomes Wired, too).
That change comes with the complexity of running new cabling, fiber splicing, worrying about different connectors (SFP+, SFP28, SFP56, QSFP28, ...), incompatible transceiver certifications, vendor lock-in, etc. Not a problem in the datacenter, but try to explain this to a layman.
Lastly, without a faster pipe to the Internet, what can you do other than NAS and AI? The computers will still get faster chips but most folks won't be able to make use of the bandwidth because they're still stuck on 1Gbps Internet or less.
But that will change. Swiss Init7 has shown that 25 Gbps Internet at home is not only feasible but also affordable, and China seems to be adding lots of 10G, and fiber in general.
Fun times ahead.
We have 400GbE, which is certainly faster than USB... but:
On consumer devices, I think part of the issue is that we’re still wedded to four-pair twisted copper as the physical medium. That worked well for Gigabit Ethernet, but once you push to 5 or 10 Gb/s it becomes inherently expensive. Twisted pair is simply a poor medium at those data rates, so you end up needing a large amount of complex silicon to compensate for attenuation, crosstalk, and noise.
That's doable but the double whammy is that most people use the network for 'internet' and 1G is simply more than enough, 10G therefore becomes quite niche so there's no enormous volume to overcome the inherent issues at low cost.
Wireless happened, I'd think. People started using wifi and cellular data for everything, so applications had to adapt to this lowest common denominator, and consumer broadband demand for faster-than-wifi speeds isn't there. Plus operators put all their money into cellular infra leaving no money to update broadband infra.
> Any idea why ethernet stagnated in terms of speed? There was a time it was so much faster compared to usb. Now even wifi seems to be faster.
Practically speaking, a lot of the transfer speed advertised by wifi is marketing hogwash barely backed by reality, especially in congested environments.
> Sure one can buy nice ethernet cards and cables, but the reality is that if you grab a random laptop/desktop from best buy and a cable, you are looking at best at a 2.5Gb/s speed.
For both laptops and desktops: PCIe lanes. Intel doesn't provide many lanes, so manufacturers don't want to permanently waste valuable lanes on capabilities most people don't ever need.
For laptops in particular: power draw. The faster you push copper, the more power you need. And laptops have even fewer PCIe lanes available to waste.
For desktops, it's a question of market demand. Again - most applications don't need ultra high transfer rate, most household connectivity is DSL and (G)PON so 1 GBit/s is enough to max out the uplink. And those few users that do need higher transfer rates can always install a PCIe card, especially as there is a multitude of different options to provide high bandwidth connectivity.
That is really cool to read. And here I am, still running my home network on a measly 1Gbit Ethernet. I considered upgrading, but the equipment power consumption even when idle makes it an expensive proposition to consider just for fun.
Neat, but the thermal design is absolutely terrible. Sticking that heatsink inside the aluminum case without any air circulation is awful.
Yeah, it's because the network card adapter's heatsink is sandwiched between two PCBs. Not great, not terrible, works for me.
The placement is mostly determined by the design of the OCP 2.0 connector. OCP 3.0 has a connector at the short edge of the card, which allows exposing/extending the heat sink directly to the outer case.
If somebody has the talent, designing a Thunderbolt 5 adapter for OCP 3.0 cards could be a worthwhile project.
A flex PCB connecting to the OCP2 connector would make it possible to put the converter board behind the NIC board, exposing the NIC board to the aluminum case so the case itself can act as a heatsink (this would need a split case, so the NIC board can be screwed to one side of the case, pressing the main chip against it via a thermal pad).
As a stop-gap, I'd see if there was any way to get airflow into the case - I'd expect even a tiny fan would do much more than those two large heatsinks stuck onto the case (since the case itself has no thermal connection to the chip heatsink).
My goal was to get a fanless setup (for a quiet office).
If that's not a requirement just get the Raiden Digit Light One, which does have a fan (and otherwise the same network card).
If I could design an adapter PCB myself, I would go straight to OCP 3.0, which allows for a much simpler construction, and TB5 speeds.
Alternatively, there are DELL CX422A rNDC cards (R887V) that appear to have an OCP 2.0 connector but a better heatsink design.
I'd be more worried about cooling the transceivers properly.
My optical transceiver gets to around 52 °C (measured via IR camera), well below its design limit, so that's not bad.
If truly concerned, one could use SFP28 to SFP28 cage adapters to have the heat outside the case, and slap on some extra heatsinks there.
Does this manufacturer's pattern of repackaging data center components (e.g., Mellanox cards) suggest any up-and-coming product opportunities?
nitpicking but why would someone type `sudo su` vs `sudo -i`
Muscle memory for folks who have been doing it since before -i was an option. I still instinctively type `sudo su -` because it worked consistently on older deployments. When you have to operate a fleet of varying ages and distributions, you tend to quickly learn [if only out of frustration] what works everywhere vs only on the newer stuff.
`sudo su - <user>` also seems easier for me to type than `sudo -i -u <user>`
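For anyone keeping score, the rough equivalences (modulo distro-specific `su` behavior):

```
sudo su           # root shell, non-login environment
sudo su -         # root login shell (fresh environment)
sudo -i           # modern equivalent of `sudo su -`
sudo su - alice   # login shell as user alice
sudo -i -u alice  # modern equivalent
```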
I've mostly only ever seen `sudo su` in tutorials, so someone who's only familiar with the command through those is one possible reason why.
I still have issues under Linux (kernel 6.14) with Thunderbolt 4 docking stations. They simply don't get recognized.
But this is a cool solution.
Thanks! Have you tried the boltctl/rescan setup I mentioned in the post? It should get you going, as long as your Thunderbolt/USB4 setup is correct.
If you're using an adapter card to add Thunderbolt functionality, then your mainboard needs to support that, and the card must be connected to a PCIe bus that's wired to the Intel PCH, not to the CPU.
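For reference, the sequence I mean is roughly this (the UUID placeholder is illustrative; yours comes from the list output):

```
# list Thunderbolt devices and their authorization status
boltctl list

# persistently authorize the adapter
sudo boltctl enroll <uuid>

# force a PCIe rescan so the NIC behind the tunnel shows up
echo 1 | sudo tee /sys/bus/pci/rescan
```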
Yes, rescan, and re-enroll too. But it still shows as disconnected. I don't know if the firmware is completely incompatible, but it is weird that it works under Windows and not in Linux.
Disconnected as in "network"? What PCIe card do you use? Can you update the firmware (maybe from Windows)?
Also check the BIOS settings (try setting TB security to "No Security" or "User Authorization")
Some OEM Mellanox cards can be cross-flashed to NVIDIA's stock firmware, maybe that's also relevant.
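If you want to try the cross-flash, the usual tool is mstflint; a hedged sketch (the PCI address and firmware filename are illustrative, and flashing a different PSID can brick the card):

```
# find the card's PCI address
lspci | grep -i mellanox

# back up the current firmware first
sudo mstflint -d 04:00.0 ri oem-backup.bin

# burn the stock image; --allow_psid_change permits replacing the OEM PSID
sudo mstflint -d 04:00.0 -i fw-ConnectX4-stock.bin --allow_psid_change burn
```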
It's embedded in the laptop (an HP ZBook from work). Disconnected as in network: the laptop charges, but the signal doesn't work. With Thunderbolt 3 devices, it works. (The card itself is T4.)
Now I just have to contrive the circumstances where this is useful to me. :)
I don't know about the Ethernet part but it bothers me that even wifi has become faster than the wired USB port on our phones.
All I want to do is copy over all the photos and videos from my phone to my computer, but I have to babysit the process and think about whether I want to skip or retry a failed copy. And it is so slow. USB 2.0 slow. I guess everybody has given up on the idea of saving their photos and videos over USB?
> USB 2.0 slow
Many phones indeed only support USB 2.0. For example the base iPhone 17. The Pro does support USB 3.2, however.
> I guess everybody has given up on the idea of saving their photos and videos over USB?
Correct.
Wifi is fast but the latency is terrible and the reliability is even worse. It can go up and down like a yo-yo. USB is far more predictable even if it is a bit slower.
I have a cluster of 4 RPi Zero Ws and network reliability is not great. Since it is for the chaos, it’s fine, but it’s very common to have a node be offline at any given time.
Even worse, the control plane is exposed, but for something that runs 3 Hercules mainframe emulators and two Altairs with MP/M, it's fine.
I have a bunch of HA wifi-connected sensors; I see them drop off and reconnect all the time, and it is most annoying.
If the photos on the phone are visible as files on a mounted filesystem, you can use rsync to copy them. If the connection drops but recovers by itself, you can put rsync inside a retry loop until it has nothing left to copy (see the sketch below).
I'm using Dropbox for syncing photos from phone to Linux laptop, and mounting the SD card locally for cameras, so this is a guess.
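Something like this, assuming the phone shows up at a mount point (paths are illustrative):

```
# retry until rsync completes cleanly; --partial resumes interrupted files
while ! rsync -av --partial /mnt/phone/DCIM/ ~/Pictures/phone-backup/; do
    echo "transfer interrupted, retrying..."
    sleep 5
done
```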
Why don't you get a phone with 3.0+ USB?
My last two phones in the last 4 years had at least USB 3.1
I feel like this is an artifact of the late 2010s, when the talk was of removing the port from phones completely; it was being touted alongside swapping speakers for haptic screen audio as a way to make them completely waterproof.
As wireless charging never quite reached the level hoped for – see AirPower – and Google/Apple seemingly bought and never did anything with a bunch of haptic audio startups, I figure that idea died... but they never cared enough to make sure the USB port remained top end.
I'd usually be against losing ports and user-serviceable stuff, but if the device could actually be properly sealed up (i.e., no speakers, mics, charge ports, etc.) that would be legitimately useful.
> given up on the idea of saving their photos and videos over USB?
Until USB has a monthly-service business to compete with cloud storage revenue.
> but I have to baby sit the process and think whether I want to skip or retry a failed copy
Do you import originals or do you have the "most compatible" setting turned on?
I always assumed Apple simply hated people who use Windows/Linux desktops, so the occasional broken file was caused by the driver being only sort-of working; and if people complain, well, they can fuck off and pay for iCloud or a Mac. After upgrading to a 15 Pro, which has 10 Gbps USB-C, it still took forever to import photos, and the occasional broken photo kept happening. After some research, it turns out the speed was limited by the phone converting the .heic originals into .jpg when transferring to a desktop. Not only does that limit the speed, it also degrades the quality of the photos and deletes a bunch of metadata.
After changing the setting to export original files the transfer is much faster and I haven’t had a single broken file / video. The files are also higher quality and lower filesize, although .heic is fairly computationally-demanding.
Idk about Android but I suspect it might have a similar behavior
Wouldn’t this be useful for clustering Macs over TB5? Wasn’t the maximum bandwidth over USB-cables 5Gbps? With a switch, you could cluster more than just 4 Mac Studios and have a couple terabytes for very large models to work with.
I was hoping somebody would suggest that (and eventually try it out).
With TB5, and deep pockets, you might probably also benchmark it against a setup with dedicated TB5 enclosures (e.g., Mercury Helios 5S).
TB5 has PCIe 4.0 x4 instead of PCIe 3.0 x4 -- that should give you 50 GbE half-duplex instead of 25 GbE. You would need a different network card though (ConnectX-5, for example).
Pragmatically though, you could also aggregate (bond) multiple 25 GbE network card ports (with Mac Studio, you have up to 6 Thunderbolt buses, so more than enough to saturate a 100GbE connection).
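Untested sketch of the bonding side on macOS, which supports 802.3ad-style bond interfaces via ifconfig (interface names are illustrative, and the switch side needs a matching LACP aggregate):

```
# create a bond and add two 25GbE members (check `ifconfig -a` for real names)
sudo ifconfig bond0 create
sudo ifconfig bond0 bonddev en7
sudo ifconfig bond0 bonddev en8

# verify member and LACP status
ifconfig bond0
```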
Too bad Jeff Geerling returned his Mac Studios to Apple. Would be lovely to see how 5x faster RDMA impacts the performance.
25 Gbps isn't all that much. It would be good, but would be below the 40+ Gbps I was getting on the TB5 ring network.
I think where it would show more significant speed up is on the AMD Strix Halo cluster.
Except I haven't been able to get RDMA over Thunderbolt on there to work, so it'd be apples to oranges comparing ConnectX to Thunderbolt on Linux.
I recently did a complete disk backup/clone which only took 15 minutes instead of hours. Maxed the SSD which was being backed up at about 2.5GB/s.
rsync...grsync...a solution for broken partial batch transfers since forever
Pretty much anywhere you have networked storage? Gigabit is about on par with pre-SATA ATA-133.
Would be useful if I had to debug my internet link and I only had a laptop.
It also works on iPad Pro :)
Remote Time Machine backups are snappier than ever before :)
Porn?
What kind of porn requires 25 gigabits ?
As a guess, large-scale volumetric or photogrammetric "datasets" could be difficult to stream over lesser interconnects.
A lot of porn.
> reduces temperatures by at least 15 Kelvin, bringing the ambient enclosure temperature below 40 °C,
I had to do a double-take when it mentioned Kelvin, since that is physically impossible.
Isn't "reduces temperatures by 15 Kelvin" the same as "reduces temperatures by 15 Celsius"?
Yes, Kelvin is only a linear offset from Celsius. (273.15 for anyone who doesn’t already know).
It’s a little bit funny/coy to use it mixed with Celsius.
reduces temperatures by at least 15 Kelvin == the same as reduces temperatures by at least 15 Celsius.
It 'reduces it by' ... not reduces it TO