1/17/2024

#Frank Donnelly, Geospatial Data Librarian

We recently created a PostgreSQL / PostGIS database on a server on our local campus network and spent the last few months loading data into it. We have a couple of different SQLite / Spatialite projects that we produce, and I needed to move a large number of attribute tables from them into the Postgres database.

My initial idea was to simply create a SQL dump file out of SQLite and then restore it in Postgres. I encountered a number of problems in doing this; there are slight differences in how each database creates and handles dump files. The solutions I found involved lousy things like opening the dump file in an editor (not feasible if the file is huge) and finding and replacing parentheses and commas, or running the file through a script to remove them.

That gave me a better idea: Python has SQLite and PostgreSQL modules (sqlite3 and psycopg2, respectively). I could connect to the SQLite database and load the tables into Python's data structures, and then simply connect to my PostgreSQL database and write them out. The original CREATE TABLE statements are stored in a master table; if I grab those statements and then the data that goes with them, I can simply recreate everything. The code is below.

First I define a number of variables that I hard-code for each batch of tables. The SQLite variables are the name and path to the database (sqdb) and a string that I'll use in a LIKE clause to grab certain tables (sqlike). For example, if my series of tables starts with yr followed by a year (yr2017), my string would be 'yr%'. The remaining variables are for the Postgres database: pgdb (database name), pguser and pgpswd (my username and password), pghost and pgport (address of the database server on port 5432), and pgschema, which is the name of the schema I want to write to.

I connect to the SQLite database and read all the names of the tables that match my LIKE statement from the master table. This returns a series of tuples where the name of the table is in the first position; I grab these and save them in a list.

Then I loop through that list of tables and get the CREATE TABLE statement that's stored in the master table, saving it in a string variable called create. Then I fetch all the rows for that table and save them in a tuple called rows. Lastly, I count the number of columns and create the number of string substitutions I'll need in my SQL statement in a placeholder variable (minus the last character, to remove a comma from the end of the statement).

Then I connect to Postgres, set my schema path, execute the create table statement, and load the values in. That gives me everything I need for the first table. If successful, we return to the top of the table loop and grab the next table. When we're all done, we close the connection to the database. Voila!
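The workflow described above can be sketched roughly as follows. This is a minimal reconstruction, not the author's original script: it assumes the "master table" is SQLite's built-in sqlite_master (which stores each table's CREATE TABLE statement in its sql column), and all file paths, connection values, and the migrate/placeholder_string function names are illustrative placeholders.

```python
import sqlite3


def placeholder_string(ncols):
    # One %s substitution per column, minus the last character
    # to remove the trailing comma.
    return ('%s,' * ncols)[:-1]


def migrate(sqdb, sqlike, pgdb, pguser, pgpswd, pghost, pgport, pgschema):
    import psycopg2  # imported here so the sketch loads without Postgres

    # Read the names of the tables matching the LIKE pattern
    # from SQLite's master table.
    sqcon = sqlite3.connect(sqdb)
    sqcur = sqcon.cursor()
    sqcur.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE ?",
        (sqlike,))
    tables = [row[0] for row in sqcur.fetchall()]

    # Connect to Postgres and set the schema path.
    pgcon = psycopg2.connect(dbname=pgdb, user=pguser, password=pgpswd,
                             host=pghost, port=pgport)
    pgcur = pgcon.cursor()
    pgcur.execute('SET search_path TO {}'.format(pgschema))

    for table in tables:
        # Grab the stored CREATE TABLE statement for this table.
        sqcur.execute("SELECT sql FROM sqlite_master WHERE name = ?", (table,))
        create = sqcur.fetchone()[0]

        # Fetch all the rows for the table.
        sqcur.execute('SELECT * FROM {}'.format(table))
        rows = sqcur.fetchall()

        # Recreate the table in Postgres and load the values in.
        pgcur.execute(create)
        if rows:
            insert = 'INSERT INTO {} VALUES ({})'.format(
                table, placeholder_string(len(rows[0])))
            pgcur.executemany(insert, rows)
        pgcon.commit()

    # When we're all done, close the connections.
    pgcur.close()
    pgcon.close()
    sqcon.close()


# Example call (requires a reachable Postgres server; values are placeholders):
# migrate('project.sqlite', 'yr%', 'gisdb', 'user', 'password',
#         'dbserver.campus.edu', 5432, 'myschema')
```

Note that this only works cleanly when the SQLite DDL is also valid in Postgres; type names or constraints that differ between the two databases would still need adjusting.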