This makes it more obvious to the user where the error in the input is
by simply not allowing invalid characters.
It also prevents an annoying error message from popping up when an
invalid database name is entered and the focus then moves to another field.
See issue #1136.
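A minimal sketch of the idea, assuming the name field is a plain
QLineEdit; the character whitelist here is a hypothetical example, not
the actual pattern used by the dialog:

    #include <QLineEdit>
    #include <QRegularExpression>
    #include <QRegularExpressionValidator>

    void restrictDatabaseNameInput(QLineEdit* nameEdit)
    {
        // Reject invalid characters while typing instead of complaining
        // on focus-out; the allowed set is an assumption for illustration
        auto* validator = new QRegularExpressionValidator(
                    QRegularExpression("[A-Za-z0-9._-]*"), nameEdit);
        nameEdit->setValidator(validator);
    }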
Fix a crash which happened when opening a table in the Edit Table dialog
for modification, changing its schema, and then (without reopening the
dialog) changing the schema again or making some other modification to
the table.
See issue #1131.
In the Edit Data dialog we check whether the data contained in a cell is
an image. This check returned some false positives, so this commit adds
a second, more sophisticated check to work around that.
See issues #1138 and #1159.
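The kind of check meant here could look like the sketch below; the magic
numbers are the standard ones for these formats, but treating this as
the commit's exact heuristic is an assumption:

    #include <QByteArray>

    // Stricter second check: only treat a BLOB as an image if it starts
    // with a well-known magic number
    bool looksLikeImage(const QByteArray& data)
    {
        return data.startsWith("\x89PNG\r\n\x1a\n")     // PNG
                || data.startsWith("\xFF\xD8\xFF")      // JPEG
                || data.startsWith("GIF87a")            // GIF
                || data.startsWith("GIF89a")
                || data.startsWith("BM");               // BMP
    }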
When pushing a local database to the remote server, the new commit id is
returned. Grab that id and insert it into the database of local
checkouts. If there already is an entry for the given file, update that
record. When doing an initial push of a database, also copy the source
database file to our checkout directory to avoid redownloading the file
upon first use.
This should eliminate all unnecessary database downloads, both after
doing the initial push and after committing a modified database.
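Sketched in code, with hypothetical helper names standing in for the
real remote-database logic:

    #include <QFile>
    #include <QString>

    bool updateLocalCheckout(const QString& file, const QString& commitId);  // assumed
    void insertLocalCheckout(const QString& file, const QString& commitId);  // assumed

    void afterPush(const QString& localFile, const QString& checkoutFile,
                   const QString& newCommitId)
    {
        // Update the existing checkout record if there is one, insert otherwise
        if(!updateLocalCheckout(localFile, newCommitId))
            insertLocalCheckout(localFile, newCommitId);

        // On an initial push, seed the checkout directory with the file we
        // just pushed so the first download can be skipped
        if(!QFile::exists(checkoutFile))
            QFile::copy(localFile, checkoutFile);
    }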
Instead of adding record after record to the local checkout database,
search for a previous checkout and update its record. This avoids all
sorts of problems in the rest of the code, which always assumed that
there is only one record per file.
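One way to express this with the sqlite3 C API; the table and column
names are assumptions, not the actual checkout schema:

    #include <sqlite3.h>
    #include <string>

    void upsertCheckout(sqlite3* db, const std::string& file, const std::string& commitId)
    {
        // Try to update a previous checkout record for this file first
        sqlite3_stmt* stmt;
        sqlite3_prepare_v2(db, "UPDATE local SET commit_id = ? WHERE file = ?",
                           -1, &stmt, nullptr);
        sqlite3_bind_text(stmt, 1, commitId.c_str(), -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 2, file.c_str(), -1, SQLITE_TRANSIENT);
        sqlite3_step(stmt);
        sqlite3_finalize(stmt);

        // Only if no record was updated do we insert a new one, so there
        // is always at most one record per file
        if(sqlite3_changes(db) == 0)
        {
            sqlite3_prepare_v2(db, "INSERT INTO local(file, commit_id) VALUES(?, ?)",
                               -1, &stmt, nullptr);
            sqlite3_bind_text(stmt, 1, file.c_str(), -1, SQLITE_TRANSIENT);
            sqlite3_bind_text(stmt, 2, commitId.c_str(), -1, SQLITE_TRANSIENT);
            sqlite3_step(stmt);
            sqlite3_finalize(stmt);
        }
    }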
This fixes a crash that occurs if there is an error while fetching
something other than a database and no database has been downloaded
before, e.g. when getting the root directory listing fails.
At the moment a space is inserted between all tokens a type consists of.
This adds extra spaces to types like VARCHAR(5), which becomes
"VARCHAR ( 5 )" and causes problems in some applications.
This patch modifies the way tokens are concatenated for a type. It makes
sure that no extra space is inserted before "(" or ")", nor after "(".
When importing multiple CSV files at once, remove each entry from the
list of CSV files as its import completes, so the user can watch the
list shrink as the import progresses.
Also, don't close the window if there are still files left to be
imported. This allows the user to import the unchecked files too,
possibly using different settings.
See issue #1072.
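Roughly, with invented widget and helper names, the dialog logic
becomes:

    #include <QListWidget>
    #include <QString>

    void importCsvFile(const QString& filename);   // assumed import helper

    void importSelectedFiles(QListWidget* listFiles)
    {
        for(int i = listFiles->count() - 1; i >= 0; --i)
        {
            QListWidgetItem* item = listFiles->item(i);
            if(item->checkState() != Qt::Checked)
                continue;                       // keep unchecked files for later

            importCsvFile(item->text());
            delete listFiles->takeItem(i);      // the list shrinks visibly
        }
        // The caller only closes the dialog once listFiles->count() == 0
    }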
At the moment, when the user types a custom type into the type combo box
and doesn't press Enter, the entered type isn't used. This patch tries
to fix the problem by installing an event filter for the combo box and
checking when it loses focus; when it does, it performs the type update
too.
Along the way, the patch also changes the signal used inside
moveCurrentField() from activated() to currentIndexChanged() to make it
consistent with the rest of the file.
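A self-contained sketch of the filter; the filter class and the
applyType() helper are hypothetical, the real patch installs the filter
on the Edit Table dialog itself:

    #include <QComboBox>
    #include <QEvent>
    #include <QObject>
    #include <QString>

    class TypeCommitFilter : public QObject
    {
    protected:
        bool eventFilter(QObject* object, QEvent* event) override
        {
            // When the editable combo box loses focus, apply whatever the
            // user typed, even if Enter was never pressed
            if(event->type() == QEvent::FocusOut)
                if(auto* combo = qobject_cast<QComboBox*>(object))
                    applyType(combo->currentText());
            return QObject::eventFilter(object, event);
        }

        void applyType(const QString& type)
        {
            Q_UNUSED(type);
            // assumed: update the currently selected field's type here
        }
    };

    // Usage: comboType->installEventFilter(new TypeCommitFilter);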
When importing a CSV file into a table that doesn't exist yet (i.e. that
is created during the import), try to guess the data type of each column
based on the first couple of rows. If the column contains only floats or
a mix of floats and integers, set the data type to REAL; if it contains
only integers, set it to INTEGER; otherwise set it to TEXT.
See issue #1003.
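The decision rule itself, as a standalone sketch; the real import code
works on its own row representation:

    #include <QString>
    #include <QStringList>

    // Guess a column type from the values in the first few rows
    QString guessColumnType(const QStringList& firstRows)
    {
        bool allIntegers = true;
        bool allNumeric = true;
        for(const QString& value : firstRows)
        {
            bool ok;
            value.toLongLong(&ok);
            if(!ok)
            {
                allIntegers = false;   // not an integer, maybe a float
                value.toDouble(&ok);
                if(!ok)
                    allNumeric = false;
            }
        }

        if(allNumeric)
            return allIntegers ? "INTEGER" : "REAL";
        return "TEXT";
    }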
With this commit the main window keeps track of the hidden columns and
re-hides them when the table is selected again. It also saves the hidden
status to the project file and restores it from there.
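In essence the bookkeeping amounts to something like this; the container
layout is an assumption:

    #include <QString>
    #include <QTableView>
    #include <map>
    #include <set>

    // Hidden columns per table, also serialised to the project file
    std::map<QString, std::set<int>> hiddenColumns;

    void onColumnHidden(const QString& table, int column, bool hidden)
    {
        if(hidden)
            hiddenColumns[table].insert(column);
        else
            hiddenColumns[table].erase(column);
    }

    void restoreHiddenColumns(const QString& table, QTableView* view)
    {
        // Re-hide the remembered columns when the table is selected again
        for(int column : hiddenColumns[table])
            view->setColumnHidden(column, true);
    }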
This adds support for multiple foreign keys in a column constraint.
Example:
CREATE TABLE t1(a int, b int);
CREATE TABLE t2(a int, b int);
CREATE TABLE test(
    x int REFERENCES t1(a) REFERENCES t2(a), -- This is now supported
    y int
);
Multiple foreign keys specified as a table constraint were already
supported before.
This commit bundles a number of smaller optimisations in the CSV parser
and import code. Together they add up to a noticeable speed gain (at
least on some systems and configurations).
We were separating the CSV import into two steps: parsing the CSV file
and inserting the parsed data. This had the advantage of keeping the
parsing code and the database code nicely separated, and of giving us
full knowledge of the CSV file before starting to insert the data into
the database. However, it made it necessary to keep the entire parser
results in RAM, which for large CSV files requires enormous amounts of
memory.
This commit changes the import to parse the first 20 lines and analyse
them. This should give us a good impression of what to expect from the
rest of the file. Based on that information we then parse the file row
by row and insert each row into the database as soon as it is parsed.
This means we only have to keep one row at a time in memory, while more
or less retaining the ability to analyse the file before inserting
data.
On my system small files do seem to take a little longer now (<5%),
though these measurements aren't conclusive. For large files, however,
it changes memory consumption from using all available memory and
starting to swap within seconds to almost no memory consumption at all.
And not having to swap speeds things up a lot.
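For illustration, the new flow looks roughly like this; CsvParser and
Database are invented stand-ins for the real interfaces:

    #include <string>
    #include <vector>

    using Row = std::vector<std::string>;

    struct CsvParser { bool nextRow(Row& row); };              // stand-in
    struct Database
    {
        void createTable(const std::vector<Row>& preview);     // stand-in
        void insertRow(const Row& row);                        // stand-in
    };

    void importCsv(CsvParser& parser, Database& db)
    {
        // 1. Parse and analyse only the first 20 rows instead of the
        //    whole file
        std::vector<Row> preview;
        Row row;
        while(preview.size() < 20 && parser.nextRow(row))
            preview.push_back(row);
        db.createTable(preview);

        // 2. Insert the preview, then stream the remaining rows one by
        //    one, so at most one row is kept in memory at a time
        for(const Row& r : preview)
            db.insertRow(r);
        while(parser.nextRow(row))
            db.insertRow(row);
    }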
The Java epoch is measured in milliseconds since Unix time 0; the
conversion is a (roundabout; the best I could find) way to convert
milliseconds into an SQLite datetime while keeping millisecond
resolution.
An easier alternative would be column_name / 1000, but that loses _some_
theoretical resolution.
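The same conversion expressed in Qt, for illustration; the SQL
expression the commit actually installs isn't quoted here, but would be
something along the lines of
strftime('%Y-%m-%d %H:%M:%f', column_name / 1000.0, 'unixepoch')
(an assumption, not a quote from the code):

    #include <QDateTime>
    #include <QString>

    // Turn a Java epoch value (milliseconds since Unix time 0) into a
    // datetime string with millisecond resolution
    QString javaEpochToDateTime(qint64 millis)
    {
        return QDateTime::fromMSecsSinceEpoch(millis, Qt::UTC)
                .toString("yyyy-MM-dd hh:mm:ss.zzz");
    }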
Instead of disabling the entire edit dock when the database is opened in
read-only mode, disable only the buttons for making changes to the
field. This way the data can still be read using the edit dock.
See issue #1123.