After this comment to one of my questions, I'm wondering whether it is better to use 1 database with X schemas, or vice versa.
My situation: I'm developing a web app where, when people register, I currently create a database for each of them (no, it's not a social network: everyone must have access to their own data and never see other users' data).
That's the approach I used for the previous version of my application (which is still running on MySQL): through the Plesk API, for every registration, I:
- Create a database user with limited privileges;
- Create a database that can be accessed just by the previously created user and the superuser (for maintenance);
- Populate the db
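A minimal sketch of those three steps as the SQL a superuser would run; the role, password and database names (acme_user, acme_db) are placeholder examples, not the real Plesk API calls:

```python
# Hypothetical sketch of the per-registration provisioning steps above.
# All names are examples; in production these statements are run as superuser.

def provisioning_sql(username, password, dbname):
    """Return the SQL statements for one new registration."""
    return [
        # 1) a login role with limited privileges
        "CREATE ROLE %s LOGIN PASSWORD '%s' NOCREATEDB NOCREATEROLE" % (username, password),
        # 2) a database owned by that role, reachable only by it (and superusers)
        "CREATE DATABASE %s OWNER %s" % (dbname, username),
        "REVOKE CONNECT ON DATABASE %s FROM PUBLIC" % dbname,
        "GRANT CONNECT ON DATABASE %s TO %s" % (dbname, username),
        # 3) populating the db would follow here (CREATE TABLE ..., INSERT ...)
    ]

for stmt in provisioning_sql('acme_user', 's3cret', 'acme_db'):
    print(stmt)
```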
Now I'll need to do the same with PostgreSQL (the project is getting mature, and MySQL doesn't fulfill all the needs anymore).
I need all the databases/schemas to have independent backups: pg_dump works perfectly either way, and the same goes for users, who can be configured to access just 1 schema or 1 database.
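Either backup style is a single pg_dump invocation; here is a small helper (hypothetical, the user name and file names are placeholders) that builds the command line for a whole-database dump versus a single-schema dump with `-n`:

```python
# Sketch: the two kinds of pg_dump invocations.
# You would execute the returned list with subprocess.run(cmd).

def dump_command(dbname, schema=None, outfile='backup.sql'):
    """pg_dump argument list: whole database, or just one schema via -n."""
    cmd = ['pg_dump', '-U', 'postgres', '-f', outfile]
    if schema is not None:
        cmd += ['-n', schema]  # restrict the dump to this one schema
    cmd.append(dbname)
    return cmd

print(dump_command('customer_db'))                   # one-db-per-user style
print(dump_command('myapp', schema='customer_foo'))  # one-schema-per-user style
```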
So, assuming you are more experienced Postgres users than me, what do you think is the best solution for my situation, and why?
Will there be performance differences using $x databases instead of $x schemas? And which solution will be easier to maintain in the future (reliability)?
Edit: I almost forgot: all of my databases/schemas will always have the same structure!
Edit 2: Regarding backups (with pg_dump), it is probably better to use 1 db with many schemas and dump all the schemas at once: recovery is quite simple, load the full dump into a dev machine, then dump and restore just the schema you need. There is 1 additional step, but dumping all the schemas seems faster than dumping them one by one.
p.s.: sorry if I forgot some 'w' chars in the text, my keyboard's W key is unreliable ;)
Well, the application structure and design have changed a lot during these last two years. I'm still using the
1-db-with-many-schemas approach, but now I have 1 database for each version of my application:
Db myapp_01
 \_ my_customer_foo_schema
 \_ my_customer_bar_schema
Db myapp_02
 \_ my_customer_foo_schema
 \_ my_customer_bar_schema
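With this layout, each customer session only has to be pointed at its own schema, typically via search_path; a tiny sketch (schema names from the tree above, the connection itself is omitted):

```python
# Sketch: routing a customer's connection to its schema via search_path.
# The schema name would come from the customer record at login time.

def set_schema_sql(schema):
    """SQL that makes unqualified table names resolve in one customer's schema."""
    return 'SET search_path TO %s' % schema

print(set_schema_sql('my_customer_foo_schema'))
# after this, a query like SELECT * FROM orders hits my_customer_foo_schema.orders
```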
For backups, I'm dumping each database regularly, then moving the backups to the dev server.
I'm also using PITR/WAL backups but, as I said before, it's not likely I'll ever have to restore the whole database at once, so they will probably be dropped this year (in my situation it's not the best approach).
The 1-db-many-schemas approach has worked very well for me so far, even though the app structure has totally changed. I wrote earlier:
"I almost forgot: all of my databases/schemas will always have the same structure!"
...now every schema has its own structure, which changes dynamically in reaction to each user's data flow.
Having many databases on a PostgreSQL installation can get problematic; having many schemas will work with no trouble.
So you definitely want to go with one database and multiple schemas within that database.
This allows me to dump the whole database but restore just 1 schema very easily, in several ways:
- Dump the db (all the schemas), load the dump into a new database, dump just the schema I need, and restore it back into the main db
- Dump the schemas separately, one by one (but I think the machine will suffer more this way, and I'm expecting around 500 schemas!)
- Create a template schema
- When you need to duplicate it, rename it with the new name
- Dump it
- Rename it back
- Restore the dump
- The magic is done.
```python
import os
import sys
import pg

# Take the new schema name from the command line (sys.argv[0] is the script name)
newSchema = sys.argv[1]
# Temp folder for the dumps
dumpFile = '/test/dumps/' + str(newSchema) + '.sql'

# Settings
db_name = 'db_name'
db_user = 'db_user'
db_pass = 'db_pass'
schema_as_template = 'schema_name'

# Connection
pgConnect = pg.connect(dbname=db_name, host='localhost', user=db_user, passwd=db_pass)

# Rename the template schema to the new name
pgConnect.query("ALTER SCHEMA " + schema_as_template + " RENAME TO " + str(newSchema))

# Dump it
command = ('export PGPASSWORD="' + db_pass + '" && pg_dump -U ' + db_user +
           ' -n ' + str(newSchema) + ' ' + db_name + ' > ' + dumpFile)
os.system(command)

# Rename it back to its default name
pgConnect.query("ALTER SCHEMA " + str(newSchema) + " RENAME TO " + schema_as_template)

# Restore the previous dump to create the new schema
restore = ('export PGPASSWORD="' + db_pass + '" && psql -U ' + db_user +
           ' -d ' + db_name + ' < ' + dumpFile)
os.system(restore)

# Delete the dump file
os.remove(dumpFile)

# Close the connection
pgConnect.close()
```
Databases are meant to separate entire sets of data, while schemas are more like data entities. For instance, you could have one database for an entire application, with schemas such as "UserManagement", "LongTermStorage" and so on.
"UserManagement" would then contain the "User" table, as well as all the stored procedures, triggers, sequences, etc.
that are needed for user management. Databases are entire programs; schemas are components.
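That "schemas as components" split can be sketched as plain DDL (the table columns here are made up for illustration):

```python
# Sketch of the schemas-as-components layout; column definitions are examples.
ddl = """
CREATE SCHEMA user_management;
CREATE TABLE user_management.users (
    id   serial PRIMARY KEY,
    name text NOT NULL
);

CREATE SCHEMA long_term_storage;
CREATE TABLE long_term_storage.archived_events (
    id      serial PRIMARY KEY,
    user_id integer,
    payload text
);
"""
print(ddl)
```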
For that reason, one database is completely insulated from another, whereas schemas are not insulated from other schemas within the same database.
If, for some reason, you have to consolidate data in the future, it will be easy to do across multiple schemas.
With multiple databases, you would need multiple connections, and would have to collect and merge the data from each database "manually" in application logic.
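To make the difference concrete: within one database a cross-schema join is a single query, while across two databases the merge happens in Python. Both sketches use the hypothetical schemas and tables from the example above:

```python
# One database, two schemas: PostgreSQL does the join itself.
cross_schema_query = """
SELECT u.name, count(e.id)
FROM user_management.users u
JOIN long_term_storage.archived_events e ON e.user_id = u.id
GROUP BY u.name
"""

# Two databases: two connections, and the merge lives in application code.
def merge_counts(users, events):
    """users: [(id, name)] rows from db A; events: [(user_id,)] rows from db B."""
    counts = {}
    for (user_id,) in events:
        counts[user_id] = counts.get(user_id, 0) + 1
    return [(name, counts.get(uid, 0)) for uid, name in users]

print(merge_counts([(1, 'foo'), (2, 'bar')], [(1,), (1,), (2,)]))
# -> [('foo', 2), ('bar', 1)]
```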
Multiple databases have advantages in some cases, but for the most part I think the one-database-multiple-schemas approach is more useful.