P2P still works great. Unfortunately, the well has been poisoned for a lot of people, as it's widely assumed that "torrenting" of any sort must be illegal, even when it's for perfectly legitimate purposes.
nirui 58 days ago [-]
Sort of reminds me of the whole social media saga. The platforms were supposed to connect people around the globe and usher in the age of understanding, love, and rainbows. But instead, it turns out people really love rants and shit.
Yeah, you might argue that some bad actors took advantage of the tech and turned it into endless darkness. But I think the fundamental reason here is simply a lack of creativity.
You give people a protocol that is really good at distributing (large) files, and guess how they're going to use it? The cheapest use cases come out on top, of course. Making something both productive and beneficial is hard, man!
IggleSniggle 58 days ago [-]
Shout out to Resilio Sync, which is “just BitTorrent”, but has been a great alternative to other sync platforms for my geographically distributed self-hosting.
tartoran 58 days ago [-]
Just wanted to say this. Still using p2p in 2022. Yes, it's less frequent than before due to ubiquitous online content, but it's still very reliable.
DoingIsLearning 58 days ago [-]
Where does one go to search for magnet URIs in 2022?
(Not sure if this is a copyright-sensitive question, but curious)
https://en.wikipedia.org/wiki/BTDigg
> it's widely assumed that "torrenting" of any sort must be illegal, even if it's for perfectly legitimate purposes.
My understanding is that many people consider piracy illegal in the "corporations paid to make it illegal because it threatened their business model" sense.
throwawaaarrgh 58 days ago [-]
ISPs killed P2P. Even if you have the upstream to seed all day, they'll just rate-limit you. There is no monetary incentive for them to allow P2P anyway.
That said, Gnutella/Gnutella2, eDonkey2000, and others were objectively crappy designs, but they still worked very well for one use case: distributing rare files. The thing is, if you have a stable high-speed mirror, the files aren't rare anymore. Hence the only files that stay rare are the illegal ones, or the ones that can't find a mirror to host them. There's just not much point to P2P. Sometimes there's a good reason to distribute illegal files, like getting past state censorship, but the whole world isn't going to adopt a wonky solution for a rare use case. So P2P has not, and will not, take off.
Reventlov 58 days ago [-]
> ISPs killed P2P. Even if you have the upstream to seed all day, they'll just rate-limit you. There is no monetary incentive for them to allow P2P anyway.
I don't know where you live, but in the EU we tend to take net neutrality seriously, and that does not happen.
5e92cb50239222b 58 days ago [-]
Same here, but for a different reason. An interesting situation happened about ten years ago.
The largest BitTorrent tracker in the country was shut down by authorities for obvious reasons: Linux ISOs were far from the most popular uploads shared there.
Our ISP monopoly (which was owned by someone very close to the Supreme Leader's family) immediately saw a significant drop in traffic, and (according to hearsay; I had no way to check this) customers started leaving in droves, as they had no use for a fast internet connection anymore. 'Foreign' torrent trackers and video streaming were not very practical because of bad connectivity back then.
So the tracker was restored less than a week later and worked fine for about a decade after that.
Corruption finally worked for the good of the community.
Karrot_Kream 58 days ago [-]
In the US, ISPs offer lower uplink speeds than downlink speeds as a cost-cutting measure. More modem channels are dedicated to downlink than uplink. With fiber ISPs this is beginning to change, but ISP quality in the US is highly variable.
m4rtink 58 days ago [-]
Sadly this is not unheard of in Europe either. For example, here in the Czech Republic I had to get the highest-tier 1 Gbit/s download cable connection (including a modem exchange) to get a measly 50 Mbit/s upload. And all the lower tiers have much less.
Thankfully not all local cable ISPs are like this, but that's what was available. And I don't think they can do this for much longer due to the proliferation of cloud backups, teleconferencing, desktop/game streaming, etc.
dale_glass 58 days ago [-]
It's not so much a cost-cutting measure as a competition measure.
A given channel has only so much bandwidth. In DSL and the like, you can pick which part is dedicated to up and which to down, because you only have one pair of wires to work with.
So your link gives you, say, 100 Mbps. You can split that 50/50, but then your competition can go with a 90/10 split, and look, their downloads are much faster!
Karrot_Kream 57 days ago [-]
Right, it allows them to offer smaller pipes of 100 Mbit to users, which is a form of cost cutting. A true internet connection, the kind you'd get with your box in a colo, peering directly with an ISP, or at an IX, has symmetric uplink and downlink. The ISP is only willing to sell, say, 100 Mbit rather than the 180 Mbit equivalent a symmetric connection would provide.
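To make that arithmetic concrete, a back-of-the-envelope sketch using the example figures from these comments (illustrative numbers only, not real plant measurements):

```python
# Compare an asymmetric 90/10 split with a symmetric link that has the
# same downstream capacity, using the example figures from the thread.
down_mbps, up_mbps = 90, 10

advertised_asymmetric = down_mbps + up_mbps   # what gets sold: "100 Mbit"
symmetric_equivalent = down_mbps * 2          # 90 down + 90 up = 180 Mbit

print(f"asymmetric plan: {down_mbps}/{up_mbps} -> {advertised_asymmetric} Mbit total")
print(f"symmetric plan, same downstream: {down_mbps}/{down_mbps} -> {symmetric_equivalent} Mbit total")
```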
tete 58 days ago [-]
I'd say vendor lock-in and mobile killed P2P.
With people replacing more and more laptop and desktop usage, the dominant platforms are Android and iOS. On both of these you essentially have to pay to release software. On top of that, people more frequently have to pay for traffic, and are often behind some form of ISP NAT, which might mean no incoming connections, or only certain ones.
I think these platforms are the biggest inhibitor of P2P, and of new technologies in general, because they are extremely constrained in what you can do compared to a typical desktop OS/environment.
musicale 58 days ago [-]
I think part of what has happened to decentralized internet systems is technological, and part of it is terminology.
It seems like we still have decentralized systems but now call them fediverse, distributed, decentralized, federated, ipfs, web 3.0, etc.
It might have been interesting if he'd looked at searches for those terms as well.
ilyt 58 days ago [-]
They aren't really decentralized in the old meaning. It's more a bunch of fiefdoms than something actually decentralized.
Because at some point you need to deal with the trolls/spammers at the gates.
CarlosBaquero 58 days ago [-]
What happened to peer-to-peer as a technological concept? Actually, we still use a lot of that technology.
sliken 58 days ago [-]
Years ago, most computing devices were desktops. They often had a routable IP address, unlimited power, and would happily sit passing packets all day. This made things like a DHT practical, so you could find your other peers. It enabled things like the early days of Skype where, apart from auth, chat and file sharing were p2p. After being online for long enough with a routable IP, you could become a supernode to help less fortunate nodes talk to each other.
These days a much larger fraction of computing devices are on battery or on expensive networks like cellular, and can't really tolerate being part of a DHT. Increasing use of NAT/masquerading makes it harder (and a support nightmare) to accept incoming packets from new peers.
One solution to this is to add a "superpeer" to a router distribution like OpenWRT, or sell the "plug"/wall-wart to help. That way a cheap (under $100) computer could build reputation with its peers, accept incoming packets from new peers, provide some storage, and keep up with DHT maintenance. Then low-power and/or expensive-network peers could just check their "home" superpeer and get what they need quickly with minimal bandwidth and power.
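For readers who haven't looked inside a DHT: the lookups such a superpeer would keep up with boil down to an XOR distance metric, as in Kademlia (the design behind BitTorrent's Mainline DHT). A minimal sketch, with made-up peer names:

```python
import hashlib

def node_id(name: str) -> int:
    # Kademlia-style DHTs give every node and every key a fixed-size ID;
    # SHA-1 is used here only to generate 160-bit example IDs.
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    # The DHT's notion of "closeness" is simply the bitwise XOR of two IDs.
    return a ^ b

# A lookup repeatedly asks the closest peers it knows about for peers even
# closer to the target key, until it reaches the nodes storing the value.
peers = {name: node_id(name) for name in ("laptop", "phone", "superpeer-1", "superpeer-2")}
target = node_id("some-content-key")

closest = sorted(peers, key=lambda name: xor_distance(peers[name], target))
print("peers to query first:", closest[:2])
```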
ilyt 58 days ago [-]
NAT has been the problem since near the beginning of P2P, though.
> One solution to this is to add a "superpeer" to a router distribution like OpenWRT, or sell the "plug"/wall-wart to help. That way a cheap (under $100) computer could build reputation with its peers, accept incoming packets from new peers, provide some storage, and keep up with DHT maintenance.
...and do what exactly? It doesn't have the CPU power to do much, and doesn't have the storage to serve anything.
It also has the same "how exactly do I connect through NAT" problem as any home router; some of them might have IPv6 directly, but most are still behind some carrier-grade NAT, just like the phones are.
But I do like the idea of evolving the router a bit. Stuff like home automation should ideally just talk to an MQTT broker on the router, and then the user is free to either install automation using it somewhere on the network, connect directly from a phone, install a container on the router running HomeAssistant or something, or pay some cloud service to ingest the MQTT stream and give them a nice UI for it.
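As a sketch of that idea, here's roughly what a small automation script talking to a broker on the router could look like, using the common paho-mqtt client library. The router address and topic names are made up, and this assumes a broker such as mosquitto is already running on the router:

```python
# A sensor publishes to, and an automation script subscribes from, an MQTT
# broker assumed to live on the home router. Address and topics are
# invented for the example.
import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER = "192.168.1.1"  # hypothetical router address

def on_message(client, userdata, msg):
    # React to a motion sensor by switching a light on.
    if msg.topic == "home/hallway/motion" and msg.payload == b"detected":
        client.publish("home/hallway/light/set", "on")

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a callback_api_version
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.subscribe("home/+/motion")
client.loop_forever()
```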
sliken 57 days ago [-]
> and do what exactly? It doesn't have the CPU power to do much, and doesn't have the storage to serve anything.
My new $140 router has 8 cores (4xA76 + 4xA55), 8GB of RAM, 32GB of eMMC storage, and 2x 2.5GbE + GbE ports.
It even has an SD slot for more storage. My thinking is more along the lines of what can't it do. The low-hanging fruit would be to replace maps.google.com (with p2p-shared OpenStreetMap or similar), drive.google.com/dropbox.com, chat/blog/twitter/instagram/snapchat/facebook, and the like. If you need more storage, a 256GB SD card is $25 to $40. I believe the default storage for most Google accounts is 17GB.
With a healthy P2P ecosystem you could leverage your peers, and things like Filecoin could let you supplement your storage from any provider, without depending on any single one.
Running SHA-256 on files, even Reed-Solomon coding, keeping track of your DHT peers, running IPFS or similar, even Mastodon (once implemented in Go or Rust), shouldn't make newish hardware work hard.
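As a rough illustration of the kind of work being described, a sketch using the standard-library hashlib plus the third-party reedsolo package (pip install reedsolo); the parity count is an arbitrary example value:

```python
# Content hashing plus erasure coding, the two chores mentioned above.
import hashlib
from reedsolo import RSCodec

data = b"some file contents" * 1000

# Content addressing: a SHA-256 digest identifies the chunk, IPFS-style.
digest = hashlib.sha256(data).hexdigest()
print("sha256:", digest)

# Erasure coding: 10 parity bytes per 255-byte block let a peer repair
# up to 5 corrupted bytes per block without re-fetching the chunk.
rsc = RSCodec(10)
encoded = rsc.encode(data)
print("original:", len(data), "bytes -> with parity:", len(encoded), "bytes")
```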
Being in the router avoids the NAT issue. And if this kind of thing gets any traction, anything outside the router will need working IPv6 (like Comcast offers in the USA), an accommodation from the router with port forwarding, or one of the various NAT traversal protocols like ICE, TURN, or STUN.
pbronez 58 days ago [-]
I agree that a pool of equal peers is tough these days. I think the Fediverse has a pretty good approach, where most end users are on mobile but you can spin up a server whenever.
There’s still a big complexity/skill/cost jump from “I toot from my iPhone” to “I run a mastodon instance for my company” though. Some of that can be addressed by managed hosting. It’s probably preferable to have a “super peer”, though. In my mind, a superpeer runs the same software as a peer, but does more work because it can. It should be easier to maintain than a full server. I’m talking about the difference between:
A) manage a mastodon node, with its own redis, PostgreSQL, web server, object storage, etc
And
B) run BitTorrent in the background on your gaming PC to seed the latest cut of the niche documentary you’re working on
There’s a lot of interesting self-hosting projects happening, but they tend to focus on helping you run kubernetes or a similar container orchestrator. That’s still way more complex than an executable.
I think we need things to get a bit more opinionated again…
sliken 58 days ago [-]
Agreed. Supernodes should be near zero admin, and be able to run wherever advantageous, not just where an expert is available. OwnCloud/Nextcloud are running ever more services and at least looking at fediverse integration. Various NASes allow local applications for photos, Plex, etc. All are targeted at being nearly turnkey and friendly to the average consumer.
Ideally things become easy enough that people can depend less on FAANG and do more with services that are distributed. Hopefully the software can get to the point where your server helps when it can, but if it's down, other servers help out. I just bought a $120 ($140 with case) router: 8 cores, 8GB RAM, 32GB storage, GbE + 2x 2.5GbE. I'd love to dedicate it to messaging, file sharing, mastodon, photo storage/viewing/sharing, etc., and even have it exchange services with others so that messages/photos/whatever can be shared even if it goes down. Hopefully it gets there; pretty amazing resources are available cheap these days. I'd happily trade 2/3rds of my resources for other peers ... if they did the same for me.
superkuh 58 days ago [-]
Smartphones took over as people's primary "computers" of choice. And mobile devices, generally, don't even get an IPv4 address with ports as most are behind carrier NAT. So most people cannot participate on the internet anymore and require third parties to hold their metaphorical hand when doing network operations.
For people still using actual computers with real internet connections and ports p2p is still as big, and as useful, as ever. It's just that the relative percentage of online users with actual internet connections has shrunk. The absolute number of people with real computers and connections has not shrunk.
littlestymaar 58 days ago [-]
Being behind a NAT poses constraints for p2p technologies: you need some well-known servers to do the hole punching and act as a relay, though that's not too different from the well-known IPs needed to bootstrap a regular p2p system anyway. (Of course, not every NAT is friendly to hole punching, and that's a problem as well.) But it also has a significant security and privacy advantage: since you aren't openly connected to the internet, you don't casually leak your computer's IP to the random strangers you're interacting with (at least when we're talking about a NAT you share with other people, not just your ISP box's NAT), and the amount of harm they can actually do to you is significantly lower.
In the end, I think the internet would actually be a significantly better place security-wise for p2p if IPs weren't directly routable by default, and NAT, with all its limitations, gives you mostly that.
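For the curious, the "ask a well-known server what my address looks like from outside" step is small enough to sketch. Here is a minimal STUN binding request per RFC 5389; stun.l.google.com:19302 is just one commonly cited public server, and a real client would also verify the transaction ID and handle servers that answer with MAPPED-ADDRESS instead of XOR-MAPPED-ADDRESS:

```python
import os
import socket
import struct

STUN_SERVER = ("stun.l.google.com", 19302)
MAGIC_COOKIE = 0x2112A442

def public_address(timeout: float = 2.0):
    # Binding Request: type=0x0001, length=0, magic cookie, 96-bit transaction ID.
    txn_id = os.urandom(12)
    request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(request, STUN_SERVER)
    data, _ = sock.recvfrom(2048)
    sock.close()

    # Walk the attributes looking for XOR-MAPPED-ADDRESS (type 0x0020).
    pos = 20  # skip the 20-byte STUN header
    while pos + 4 <= len(data):
        attr_type, attr_len = struct.unpack_from("!HH", data, pos)
        if attr_type == 0x0020:
            # Value layout: reserved(1), family(1), x-port(2), x-address(4 for IPv4).
            xport, xaddr = struct.unpack_from("!xxHI", data, pos + 4)
            port = xport ^ (MAGIC_COOKIE >> 16)
            addr = socket.inet_ntoa(struct.pack("!I", xaddr ^ MAGIC_COOKIE))
            return addr, port
        pos += 4 + attr_len + (-attr_len % 4)  # attributes are padded to 4 bytes
    return None

print(public_address())  # the address:port the NAT presents to the outside world
```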
dinosaurdynasty 58 days ago [-]
NAT punching definitely tells other peers your NAT's IP address (and often your local address too, but that's less important).
Unless you're behind CGNAT, your NAT IP can often be used to find your neighborhood with public information. With private information (a legal challenge for example) you can find the exact subscriber/house.
littlestymaar 58 days ago [-]
> NAT punching definitely tells other peers your NAT's IP address
Yes, and that's all you share, so when the NAT is shared with other people (like other students on a campus, or other customers of your mobile phone carrier) the amount of info that can be collected is much lower than if your computer had a public IP address.
> Unless you're behind CGNAT
Did you read what I wrote above, where I said: “at least when we're talking about a NAT you share with other people, not just your ISP box's NAT”?
> (and often your local address too, but that's less important).
Here you're mixing up the hole-punching part with the signaling protocol (ICE, which has had this issue in the past, before browsers switched to mDNS [1] instead of private IP addresses in ICE candidates).
You need a signaling protocol to do hole punching.
[1]: https://groups.google.com/g/discuss-webrtc/c/6stQXi72BEU?pli...
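For illustration, the change shows up in the ICE candidate lines exchanged over signaling; both candidate strings below are made up, not captured from a real session:

```python
# A host candidate the old way vs. the mDNS-obfuscated form modern browsers
# generate. Only the connection-address field differs.
old_style = "candidate:1 1 udp 2122260223 192.168.1.7 54321 typ host"
mdns_style = "candidate:1 1 udp 2122260223 8c2bfa1e-2b4c-4f5d-9a7e-0d1c2b3a4f5e.local 54321 typ host"

# The .local name only resolves for peers on the same LAN, so the private
# address never leaves the local network; hole punching itself is a separate
# step driven by whatever addresses signaling handed over.
print(old_style)
print(mdns_style)
```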
littlestymaar 57 days ago [-]
The two work together to establish a p2p connection behind a NAT, but that doesn't make them equivalent. It's like saying “UDP sometimes leaks your local IP address”; that's factually inaccurate.
ztetranz 58 days ago [-]
Here's an off-topic but somewhat related question that I've been meaning to ask somewhere.
How do "plug and play" consumer devices that receive an incoming call / connection work behind the typical home NAT router? I have an OOMA VOIP phone service which is plugged into my home router with no ports forwarded. It has no trouble receiving an incoming call.
Does it simply open an outgoing connection and hold it open indefinitely?
aliqot 58 days ago [-]
STUN or an intermediary
wmf 58 days ago [-]
Yes, that's pretty much the only way it could work.
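A minimal sketch of that pattern; the provider hostname, port, and keepalive interval are made up, and real devices typically do the equivalent with periodic SIP REGISTER refreshes or a persistent TLS connection:

```python
# Hold an outbound connection open so the NAT keeps its mapping alive and
# the provider can push "incoming call" events over the same connection.
import socket

SERVER = ("voip.example.com", 5060)   # hypothetical provider endpoint
KEEPALIVE_SECONDS = 25                # short enough that the NAT never expires the mapping

def handle_incoming(data: bytes):
    print("event from provider:", data[:60])

def run_device():
    sock = socket.create_connection(SERVER)
    sock.settimeout(KEEPALIVE_SECONDS)
    while True:
        try:
            data = sock.recv(4096)    # provider pushes events here
            if not data:
                break                 # server closed; a real device would reconnect
            handle_incoming(data)
        except socket.timeout:
            sock.sendall(b"\r\n")     # tiny keepalive refreshes the NAT entry

# run_device()  # blocks forever, waiting for pushed events
```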
lisbon44 58 days ago [-]
For 'Linux ISOs' some of the decline was bandwidth getting cheaper for mirror hosts, meaning that in place of P2P seeders there are now well-connected mirrors that are not 'bandwidth handicapped' in comparison to the past
matheusmoreira 58 days ago [-]
NAT happened. Everyone is behind a NAT these days. No way to directly network with any computer on the internet anymore.
dottedmag 58 days ago [-]
It became... boring technology.
throwaway14356 58 days ago [-]
haha, so many tb's now for such small money
crosser 58 days ago [-]
> Coin generation and distribution, particularly when coins can be traded for fiat currency or goods, creates an incentive mechanism to keep the P2P system running.
The ability to trade for fiat currency proved to be, at best, a mixed blessing for this approach, and at worst its downfall. When a system is so lucrative as a vehicle for Ponzi schemes, it inevitably gets hijacked and becomes unable to serve its declared purpose.
api 58 days ago [-]
ZeroTier is peer to peer but we don’t end up using the term very much. Loads of conferencing systems use P2P. WebRTC is P2P. Yet the term isn’t used there too much either. Same with games.
I think the term has just faded from the hype cycle.
austin-cheney 58 days ago [-]
I am writing a Node-based P2P and messaging app for privacy. It's mostly for linking personal devices as a single shared file system, but it also has a security model for restricted sharing between trusted friends.
https://github.com/prettydiff/share-file-systems
The application concept is difficult to explain, so I have started calling it an operating system as the application gets larger.
You can use WebRTC in the browser, or the Go implementation https://github.com/keroserene/snowflake