
300 M rows, each 1 byte wide, i.e., 300 MB of data, easily fit in the main memory of my mobile phone. 300 M rows, each 1 kB wide, i.e., 300 GB of data, easily fit in the main memory of a server with 500 GB of RAM. 300 M rows, each 1 MB wide, i.e., 300 TB of data, require at least 75 4 TB hard disks just to store them in the first place.

Depending on workload, data characteristics, I/O subsystem, CPU power, network infrastructure, performance requirements, etc., a much smaller machine might be sufficient, a much larger machine might be required, or a distributed solution might be useful.

Having said that, I'm convinced that integrating MonetDB and Hadoop is possible, given the required software engineering skills. Whether and when that is useful or "needed" is a different question that cannot be answered in general. MonetDB does not come with a ready-to-use solution.
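For the sizing arithmetic above, a minimal back-of-the-envelope sketch in Python (assuming decimal units throughout and ignoring per-row, index, and file-system overhead, all of which would increase the real footprint):

    rows = 300_000_000
    for width, unit in ((1, "B"), (10**3, "kB"), (10**6, "MB")):
        total = rows * width                      # raw data volume in bytes
        print(f"{rows:,} rows x 1 {unit}/row = {total / 10**9:,.1f} GB")

    # 300 TB of raw data spread over 4 TB disks:
    print(300 * 10**12 / (4 * 10**12), "disks")   # -> 75.0 disks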
Stefan

Edgar Mejia <omejiasq@gmail.com> wrote:
> If, for example, I have 300 million rows, do I need to use something like Hadoop? Is it possible to integrate MonetDB with Hadoop?
> If I use only MonetDB and I have 300 million rows, how much memory and how many processors are recommended as a minimum?
> Thanks,
> EM
--
| Stefan.Manegold@CWI.nl | Database Architectures (DA) |
| www.CWI.nl/~manegold   | Science Park 123 (L321)     |
| +31 (0)20 592-4212     | 1098 XG Amsterdam (NL)      |