
Hi,

I'm evaluating MonetDB, and the following is based on reading the docs and
some of the monetdb-users mailing list archive. I wonder whether the
setup+workflow below is correct, and if so, whether it is MonetDB best
practice, given this use case:

We will have read-only tables, each copied in full to several
machines/servers, and updated daily. There will be no cross-table queries,
i.e. only one table is touched by each query. We would like to be able to
update each table without affecting the availability of any other table.
The table_db's are not horizontally sharded, i.e. all the data for a query
will always come from that one table. We cannot use UDP
multicast/broadcast, so a MonetDB cluster is not possible (unless a
localhost cluster is possible/sensible?).

The setup+workflow:

- One table per database (this allows for independent table updates); let
  these be <table_db>.
- Update a master (writable) instance of the table on a 'special'
  monetdbd/machine.
- Copy the updated table to each machine.
- To update the table on a machine:

  $> monetdb lock <mytable_db>
  $> monetdb stop <mytable_db>
  $> mclient -u monetdb -d <mytable_db> updatefile
  $> monetdb release <mytable_db>
  $> monetdb start <mytable_db>

  where updatefile contains:

  $> cat updatefile
  copy into MyTable from ('path_to_mytable_col_file_i',
                          'path_to_mytable_col_file_f',
                          'path_to_mytable_col_file_s');

Is the above the best pattern/architecture of MonetDB for such a use case?
(The whole per-machine update step is written out as a script in the P.S.
below.)

Appreciate any insights people can offer.

TIA
Hedge

--
πόλλ' οἶδ ἀλώπηξ, ἀλλ' ἐχῖνος ἓν μέγα
[The fox knows many things, but the hedgehog knows one big thing.]
Archilochus, Greek poet (c. 680 BC – c. 645 BC)
http://hedgehogshiatus.com
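P.S. For concreteness, here is the whole per-machine update step assembled
into a single shell script from the steps above. It is only a sketch of
what I have in mind: the database name, the path to updatefile, and the
assumption that monetdb and mclient are on the PATH are placeholders on my
part, not a tested procedure.

  #!/bin/sh
  # Per-machine daily update for one table database, following the steps
  # listed above.
  set -e

  DB=mytable_db                   # placeholder database name
  UPDATEFILE=/path/to/updatefile  # placeholder; holds the COPY INTO statement

  monetdb lock "$DB"    # maintenance mode: only the administrator can connect
  monetdb stop "$DB"    # stop the database while the new column files are
                        # copied into place

  # (copy the new column files from the master machine here)

  # Load the new data. (Part of my question: can mclient connect while the
  # database is locked/stopped, or does monetdbd start it on demand for the
  # admin user?)
  mclient -u monetdb -d "$DB" "$UPDATEFILE"

  monetdb release "$DB" # leave maintenance mode
  monetdb start "$DB"   # make the database available to readers again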