At the moment a space is inserted between all the tokens a type
consists of. This adds extra spaces to types like VARCHAR(5), which
becomes "VARCHAR ( 5 )" and causes problems in some applications.
This patch modifies the way tokens are concatenated for a type. It
makes sure that no extra space is inserted before "(" and ")", or
after "(".
When importing multiple CSV files at once, remove each entry from the
list of CSV files as its import completes. This way people can see the
list shrink visibly onscreen.
Also don't close the window if there are still files left to be
imported. This allows the user to import unchecked files too,
possibly using different settings.
See issue #1072.
At the moment, when the user types a custom type into the type combo
box and doesn't press Enter, the entered type isn't used. This patch
tries to fix the problem by installing an event filter for the combo
box and checking when it loses focus - when it does, it performs the
type update too.
Along the way the patch also changes the signal used inside
moveCurrentField() from activated() to currentIndexChanged() to make
it consistent with the rest of the file.
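A minimal sketch of the event filter approach, assuming Qt's
QObject::eventFilter() mechanism (the class and member names are
made up, not the actual dialog code):

#include <QComboBox>
#include <QEvent>
#include <QObject>

// Hypothetical filter object: when the watched type combo box loses
// focus, commit the typed-in text just like a changed index would.
class TypeCommitFilter : public QObject
{
public:
    explicit TypeCommitFilter(QComboBox* combo) : m_combo(combo)
    {
        combo->installEventFilter(this);
    }

protected:
    bool eventFilter(QObject* object, QEvent* event) override
    {
        if(object == m_combo && event->type() == QEvent::FocusOut)
            commitType(m_combo->currentText());
        return QObject::eventFilter(object, event);
    }

private:
    void commitType(const QString& /*type*/) { /* update the field type here */ }
    QComboBox* m_combo;
};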
When importing a CSV file into a table that doesn't exist yet (i.e. that
is created during the import), try to guess the data type of each column
based on the first couple of rows. If it is all floats or mixed floats
and integers, set the data type to REAL; if it is all integers, set the
data type to INTEGER; if it is anything else, set the data type to TEXT.
See issue #1003.
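A minimal sketch of the guessing logic (names are illustrative; the
real code works on the parser's row data):

#include <QString>
#include <QStringList>

// Guess a column type from sample values: all integers -> INTEGER,
// all numeric with at least one float -> REAL, anything else -> TEXT.
QString guessDataType(const QStringList& samples)
{
    if(samples.isEmpty())
        return "TEXT";
    bool allInt = true, allNumeric = true;
    for(const QString& value : samples)
    {
        bool isInt, isNumeric;
        value.toLongLong(&isInt);
        value.toDouble(&isNumeric);
        allInt = allInt && isInt;
        allNumeric = allNumeric && isNumeric;
    }
    if(allNumeric)
        return allInt ? "INTEGER" : "REAL";
    return "TEXT";
}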
With this commit the main window keeps track of the hidden columns and
re-hides them when the table is selected again. It also saves the hidden
status to the project file and restores it from there.
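A rough sketch of the bookkeeping involved (the container name is
made up):

#include <QMap>
#include <QSet>
#include <QString>

// Hidden column indices remembered per table, so they can be
// re-applied when the table is selected again and written to the
// project file.
QMap<QString, QSet<int>> hiddenColumnsPerTable;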
This adds support for multiple foreign keys in a column constraint.
Example:
CREATE TABLE t1(a int, b int);
CREATE TABLE t2(a int, b int);
CREATE TABLE test(
x int REFERENCES t1(a) REFERENCES t2(a), -- This is now supported
y int
);
Multiple foreign keys if they are a table constraint were already
supported before.
This commit bundles a number of smaller optimisations in the CSV parser
and import code. They do add up to a noticeable speed gain, though (at
least on some systems and configurations).
We used to separate the CSV import into two steps: parsing the CSV file
and inserting the parsed data. This had the advantages that it kept the
parsing code and the database code nicely separated and that we had
full knowledge of the CSV file when we started inserting the data into
the database. However, it made it necessary to keep the entire parse
results in RAM. For large CSV files this uses enormous amounts of
memory.
This commit changes the import to parse the first 20 lines and analyse
them. This should give us a good impression of what to expect from the
rest of the file. Based on that information we then parse the file row
by row and insert each row into the database as soon as it is parsed.
This means we only have to keep one row at a time in memory, while more
or less retaining the ability to analyse the file before inserting
data.
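The new flow, roughly, looks like the following sketch (illustrative
only; it ignores CSV quoting and uses a callback in place of the real
database layer):

#include <fstream>
#include <functional>
#include <sstream>
#include <string>
#include <vector>

// Stream the file row by row and hand each row to insertRow() as soon
// as it is complete, instead of collecting all rows in memory first.
void importCsvStreaming(const std::string& filename,
                        const std::function<void(const std::vector<std::string>&)>& insertRow)
{
    std::ifstream file(filename);
    std::string line;
    while(std::getline(file, line))
    {
        std::vector<std::string> row;
        std::istringstream fields(line);
        std::string field;
        while(std::getline(fields, field, ','))
            row.push_back(field);
        insertRow(row);   // only this one row is held in memory
    }
}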
On my system this does seem to change the runtime for small files,
which take a little longer now (<5%), though these measurements aren't
conclusive. For large files, however, it changes the memory consumption
from using all memory and starting to swap within seconds to almost no
memory consumption at all. And not having to swap speeds things up a
lot.
The Java epoch is measured in milliseconds since the Unix epoch; the
conversion is a (roundabout, best I could find) way to convert
milliseconds into a SQLite datetime while keeping millisecond
resolution.
An easier alternative would be column_name / 1000, but that loses
_some_ theoretical resolution.
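As an illustration (the table and column names are made up), the
conversion can be expressed with SQLite's strftime(), whose %f
specifier keeps the fractional seconds:

#include <cstdio>
#include <sqlite3.h>

// Print a Java epoch column (milliseconds since the Unix epoch) as a
// SQLite datetime string with millisecond resolution.
void printTimestamps(sqlite3* db)
{
    const char* sql =
        "SELECT strftime('%Y-%m-%d %H:%M:%f', created_ms / 1000.0, 'unixepoch') "
        "FROM events;";
    sqlite3_exec(db, sql,
                 [](void*, int, char** values, char**) -> int {
                     std::printf("%s\n", values[0] ? values[0] : "NULL");
                     return 0;
                 },
                 nullptr, nullptr);
}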
Instead of disabling the entire edit dock when the database is opened
in read-only mode, only disable the buttons for making changes to the
field. This way the data can still be read using the edit dock.
See issue #1123.
In the Execute SQL tab, move the button for saving the results of a
query as either a CSV file or a view from the bottom of the results view
to the toolbar at the top.
See issue #1122.
When parsing a CSV file we used to check the column count for each row
and track the highest number of columns that we found. This information
could then be used to create an INSERT statement large enough for all
the data.
This commit removes that column number tracking code. Instead it
analyses the first 20 rows only, and does that while generating the
field list.
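A minimal sketch of the analysis step (illustrative names):

#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Determine the field count from the first 20 rows only, instead of
// tracking the column count across the entire file.
size_t analyseFieldCount(const std::vector<std::vector<std::string>>& first20Rows)
{
    size_t fields = 0;
    for(const auto& row : first20Rows)
        fields = std::max(fields, row.size());
    return fields;
}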
Performance-wise this should take a (very) little longer, but it makes
it easier to improve the performance in other ways later, which should
more than compensate for this commit.
Feature-wise this should fix some (technically invalid) corner-case CSV
files with fewer fields in the title row than in the other rows. It
should also break some other (technically invalid) corner-case CSV files
if they are imported into an existing table and have fewer columns than
the existing table in their first 20 rows but exactly the same number
later on. Neither case, I think, matters too much.
Don't build a separate SQL statement per row to insert during CSV import
but use a single prepared statement which can be reused for each row.
This should speed up the CSV import noticeably.
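A minimal sketch of the prepared-statement loop using the sqlite3 C API
(the table name and column count are made up):

#include <sqlite3.h>
#include <string>
#include <vector>

// Prepare the INSERT once, then bind/step/reset for every row instead
// of building and parsing a new SQL statement per row.
bool insertRows(sqlite3* db, const std::vector<std::vector<std::string>>& rows)
{
    sqlite3_stmt* stmt;
    if(sqlite3_prepare_v2(db, "INSERT INTO import_target VALUES(?, ?, ?);",
                          -1, &stmt, nullptr) != SQLITE_OK)
        return false;

    for(const auto& row : rows)
    {
        for(size_t i = 0; i < row.size(); ++i)
            sqlite3_bind_text(stmt, static_cast<int>(i) + 1,
                              row[i].c_str(), -1, SQLITE_TRANSIENT);
        sqlite3_step(stmt);    // execute the INSERT for this row
        sqlite3_reset(stmt);   // make the statement reusable
    }
    sqlite3_finalize(stmt);
    return true;
}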
Moving an existing table to a different schema which already contains a
table of that name causes an error. With this commit we try to detect
this type of error as early as possible.
This commit also updates the error message for that case. The old error
message still mentioned the 'temporary flag' which isn't correct
anymore.
Also, when changing the schema fails, reset the dropdown box back to a
working schema to avoid any confusion about whether the change worked or
not.
This replaces the checkbox for creating tables in the temporary schema
by a dropdown box that lets you select between all available schemata,
i.e. main, temp, and all attached databases. This way it becomes
possible to create new tables in attached databases as well as move
existing tables between all schemata.
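The list of available schemata can be queried from SQLite itself; a
minimal sketch (PRAGMA database_list returns one row per schema):

#include <sqlite3.h>
#include <string>
#include <vector>

// List all schemata of a connection: main, temp, and any ATTACHed
// databases. Column 1 of PRAGMA database_list is the schema name.
std::vector<std::string> listSchemata(sqlite3* db)
{
    std::vector<std::string> result;
    sqlite3_stmt* stmt;
    if(sqlite3_prepare_v2(db, "PRAGMA database_list;", -1, &stmt, nullptr) == SQLITE_OK)
    {
        while(sqlite3_step(stmt) == SQLITE_ROW)
            result.push_back(reinterpret_cast<const char*>(sqlite3_column_text(stmt, 1)));
        sqlite3_finalize(stmt);
    }
    return result;
}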
This improves the Edit Table dialog to better handle moving tables from
one schema to another, though the UI currently only knows about the main
and the temp schema.
This also simplifies the grammar code by removing the temporary flag
from all classes because it's redundant now that we support multiple
schemata.
Commits 532fcd3f6b,
44eb2d4f99, and
ea1659e1d0 along with some smaller ones
prepared our code for properly handling schemata other than "main".
While working for any schema, they only exposed this functionality for
the "temp" schema. But with these preparations in place it's easy to add
all known schemata to the UI and enable (almost) all features we have
for them. This is done by this commit, adding all attached databases to
the UI.
Similar to commit 44eb2d4f99 this commit
makes use of the backend code improvements introduced in commit
532fcd3f6b.
It adds support for database schemata other than "main" to the Browse
Data tab. With this it's possible again to browse and edit data of
temporary tables using the Browse Data tab. This time, however, they are
separated logically from "main" tables. So handling temporary tables
should be a lot less error prone now, plus it's easier for the user to
tell which table goes in which schema.
This commit changes the project file format. There is some code included
which allows loading of project files in the old format. However,
project files generated using versions after this commit can't be loaded
by older versions of DB4S.
This improves commit 44eb2d4f99 by
allowing tables from schemata other than "main" to be chosen in the
foreign key editor in the Edit Table dialog. This still isn't perfect,
as only tables from the schema of the current table should be shown,
but with some care it should work for all use cases.
Commit 532fcd3f6b added support for
multiple database schemata to the backend code. While doing this, it
removed support for showing temporary database objects in the user
interface.
This functionality is partially reimplemented by this commit. With this
commit temporary database objects are shown in the Database Structure
tab and in the Db Structure dock. Unlike before, however, they are
visually separated from 'normal' database objects. This commit also
tries to make use of the new schema handling code wherever possible to
separate temporary objects programmatically from the normal ones.
This wasn't done in earlier versions and effectively was a source of
all sorts of errors.
This commit still lacks support for temporary tables in the foreign key
editor and in the Browse Data tab. Also a substantial amount of testing
is still required.
Remove the feature to select individual tables and indices to vacuum in
the vacuum dialog. Turns out SQLite doesn't support this (and apparently
never has). If you didn't select all tables at once, it would just print
errors to the console output. I have no idea why we ever implemented it
this way. However, the dialog could be reused to allow selection of
database schemata to compact - and this actually does work.
When closing a modified database, a message box pops up asking whether
to save the changes. The buttons were changed from Yes/No/Cancel
to Save/No/Cancel by commit 44361df4e9.
However, the code for this particular message box was still checking for
a Yes button click, and thus wasn't reacting to the Save button at all.
See issue #1117.
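A minimal sketch of the corrected check (dialog text and function name
are illustrative):

#include <QMessageBox>
#include <QWidget>

// The dialog offers Save/No/Cancel, so the result has to be compared
// against QMessageBox::Save - comparing against Yes never matches.
bool confirmSave(QWidget* parent)
{
    const QMessageBox::StandardButton reply = QMessageBox::question(
        parent,
        QObject::tr("Save changes?"),
        QObject::tr("Do you want to save the changes made to the database file?"),
        QMessageBox::Save | QMessageBox::No | QMessageBox::Cancel);
    return reply == QMessageBox::Save;
}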
This adds initial basic support for handling different database schemata
at once to the backend code. This is still far from working properly but
shouldn't break much either - mostly because it's not really used yet in
the user interface code.
Editing a table column in the Edit Table dialog accidentally committed
all prior changes to the database, effectively clicking the 'Write
Changes' button while working on the table. This is no problem if
you're working on a clean database, but it is a problem if you have
made other changes before. In the latter case you lose the ability to
back and you can't use the Cancel button in the Edit Table dialog
anymore.
See issue #1116.