I know we might be some time away from a mainnet launch of NeoFS.
But I think it might be valuable to look at decentralising storage further by splitting stored objects across a range of storage providers, like a RAID setup but on a larger scale. That way a node runner is also protected against potential legal ramifications (by only hosting a part of an object). It's not clear to me if you guys plan to build a Neon-wallet-like application for NeoFS, but that would be AMAZING!
Keep up the good work!
NeoFS actually does this at a number of levels. It implements a concept called chunking to break uploaded files into uniformly sized chunks. Those chunks are then replicated across the network for redundancy based on the file owner's needs. On request, the individual chunks are provisioned and the original file is rebuilt on the fly.
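To make the chunk-and-rebuild idea concrete, here is a toy sketch in Python. This is not NeoFS's actual implementation (NeoFS is written in Go and its real chunk size, hashing, and placement logic are far more involved); the function names and the chunk size are illustrative assumptions only.

```python
# Toy illustration of chunking and reassembly -- NOT NeoFS's real code.
# CHUNK_SIZE here is an arbitrary small value chosen for the demo.
CHUNK_SIZE = 4  # bytes; a real system would use much larger uniform chunks


def split_into_chunks(data: bytes, size: int = CHUNK_SIZE) -> list[bytes]:
    """Break a byte payload into fixed-size chunks (the last may be shorter)."""
    return [data[i:i + size] for i in range(0, len(data), size)]


def rebuild(chunks: list[bytes]) -> bytes:
    """Reassemble the original payload from its ordered chunks."""
    return b"".join(chunks)


payload = b"hello neofs"
chunks = split_into_chunks(payload)
# Every chunk except possibly the last has the uniform size,
# and joining them in order recovers the original file.
assert all(len(c) == CHUNK_SIZE for c in chunks[:-1])
assert rebuild(chunks) == payload
```

In the real network, each chunk would also be replicated to multiple independent nodes, so losing any single node does not lose the file.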
I agree that the packaging of these features would be a good project collaboration between the COZ and NSPCC teams.
And does that speed up data pull requests, like it would with a RAID setup in a PC?
Or is it just for redundancy's sake the way it's implemented now…
NeoFS stores objects according to the policy set by the user. If you want to store your data in different geographical regions so end-users can access it faster, you just need to express that in the Storage Policy you define for each Container.
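For illustration, NeoFS placement policies are written in a small declarative language of replication, selection, and filter rules. The fragment below is only a hedged sketch of the general shape (the attribute name `Continent` and the exact grammar are assumptions and may not match the current policy syntax):

```
REP 2 IN EU
SELECT 2 FROM EuropeanNodes AS EU
FILTER Continent EQ "Europe" AS EuropeanNodes
```

Read bottom-up: filter the network map to nodes advertising a European location, select two of them, and keep two replicas of each object on that selection.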
We thought about the ability to use third-party storage providers as a backend, but our experiments show that in practice it makes things more complicated, slower and less secure.
How large do you expect the network to be (in total storage size), let's say in the first quarter after release?
Just after mainnet release there should be about 60-200 TB provided by NSPCC nodes. We hope the community will set up about 300-500 more nodes during the first year, but that's hard to predict.
With Chia growing the way it is, it might either be really easy to get fresh eyes on NeoFS or really hard. That's still unclear to me. I know I'm saving 28 TB for this project specifically.