Remove query APIs and improve docs in preparation for v0.3.0.

More context in the CHANGELOG.
This commit is contained in:
Sebastian Jeltsch
2024-12-05 22:01:35 +01:00
parent 996bc27788
commit fb0fe373f5
19 changed files with 133 additions and 700 deletions

View File

@@ -1,3 +1,39 @@
## v0.3.0
A foundational overhaul of SQLite's integration and orchestration. This will
unlock more features in the future and already improves performance.
Write performance roughly doubled and read latencies are down by about two
thirds to sub-milliseconds 🏃:
* Replaced the libsql rust bindings with rusqlite and the libsql fork of SQLite
with vanilla SQLite.
* The bindings specifically are sub-par as witnessed by libsql-server itself
using a forked rusqlite.
* Besides some missing APIs like `update_hooks`, which we require for realtime
APIs in the future, the implemented execution model is not ideal for
high-concurrency workloads.
* The libsql fork is also slowly getting more and more outdated, missing out on
recent SQLite development.
* The idea of a more inclusive SQLite is great, but the manifesto hasn't yet
manifested itself. It seems the owners are currently focused on
libsql-server and another fork called limbo. Time will tell; we can always
revisit.
Other breaking changes:
* Removed Query APIs in favor of JS/TS APIs, which were added in v0.2. The JS
runtime is a lot more versatile and provides general I/O. Moreover, query APIs
weren't very integrated yet; for one, they were missing an Admin UI. We would
rather spend the effort on realtime APIs instead.
If you have an existing configuration, you need to strip the `query_apis`
top-level field to satisfy the textproto parser. We could have left the
field as deprecated, but since there aren't any users yet, might as well...
Other changes:
* Replaced libsql's vector search with sqlite-vec.
* Reduced logging overhead.
## v0.2.6
* Type JSON more strictly.

View File

@@ -8,8 +8,15 @@
<p align="center">
A <a href="https://trailbase.io/reference/benchmarks/">blazingly</a> fast,
single-file, open-source application server with type-safe APIs, built-in
JS/ES6/TS Runtime, Auth, and Admin UI built on Rust+SQLite+V8.
open-source application server with type-safe APIs, built-in JS/ES6/TS
Runtime, Auth, and Admin UI built on Rust, SQLite & V8.
<p>
<p align="center">
Simplify with fewer moving parts: an easy to self-host, single-file,
extensible backend for your mobile, web or desktop application.
Sub-millisecond latencies eliminate the need for dedicated caches: no more
stale or inconsistent data.
<p>
<p align="center">
@@ -41,11 +48,11 @@
Try the <a href="https://demo.trailbase.io/_/admin" target="_blank">demo</a> online - Email: <em>admin@localhost</em>, password: <em>secret</em>.
</p>
For more context, documentation, and an online live demo, check out our website
For more context, documentation, and a live demo, check out the website:
[trailbase.io](https://trailbase.io).
Questions? Thoughts? Check out the [FAQ](https://trailbase.io/reference/faq/)
on our website or reach out.
If you like TrailBase or its prospect, consider leaving a ⭐🙏.
Questions? Thoughts? Take a look at the
[FAQ](https://trailbase.io/reference/faq/) or reach out.
If you like TrailBase or want to follow along, consider leaving a ⭐🙏.
## Project Structure & Releases
@@ -58,7 +65,7 @@ Pre-built static binaries are available as [GitHub
releases](https://github.com/trailbaseio/trailbase/releases/) for Linux and
MacOS.
Moreover, client packages and containers are available via:
Moreover, containers and client packages are available via:
- [Docker](https://hub.docker.com/r/trailbase/trailbase)
- [JavaScript/Typescript client](https://www.npmjs.com/package/trailbase)

View File

@@ -34,19 +34,6 @@ record_apis: [
acl_authenticated: [CREATE, READ, UPDATE, DELETE]
}
]
query_apis: [
{
name: "simple_query_api"
virtual_table_name: "simple_query_api"
params: [
{
name: "number"
type: INTEGER
}
]
acl: WORLD
}
]
schemas: [
{
name: "simple_schema"

View File

@@ -45,7 +45,5 @@ INSERT INTO virtual_spatial_index VALUES
-- (INSERT INTO virtual_spatial_index VALUES ($1, $2, $3, $4, $5, uuid_v7()) RETURNING *));
-- Create a virtual table based on a stored procedure.
--
-- This virtual table is also exposed as a Query API in the config. To see in
-- action browse to: http://localhost:4000/api/query/v1/simple_query_api?number=4.
CREATE VIRTUAL TABLE simple_query_api USING define((SELECT UNIXEPOCH() AS epoch, $1 AS random_number));
CREATE VIRTUAL TABLE simple_vtable_from_stored_procedure
USING define((SELECT UNIXEPOCH() AS epoch, $1 AS random_number));

View File

@@ -10,5 +10,5 @@ For context, some larger features we have on our Roadmap:
Also, service-accounts to auth other backends as opposed to end-users.
- Many SQLite databases: imagine a separate database by tenant or user.
- TLS termination and proxy capabilities.
- Consider a GraphQL layer to address fan-out and integrate external
resources.
- We might want to address fan-out and the integration of external resources
through GraphQL or similar.

View File

@@ -51,7 +51,7 @@ Likewise, TrailBase has a few nifty tricks up its sleeve:
- Language independent type-safety via JSON Schemas with strict typing
being enforced all the way down to the database level[^4].
- TrailBase's JavaScript runtime supports full ES6, TypeScript transpilation,
and is built on V8 making it [~45x faster](/reference/benchmarks/).
and is built on V8 making it [~40x faster](/reference/benchmarks/).
- First-class access to all of SQLite's features and capabilities.
- A simple auth UI.
- Stateless JWT auth-tokens for simple, hermetic authentication in other
@@ -61,19 +61,17 @@ Likewise, TrailBase has a few nifty tricks up its sleeve:
### Language & Performance
Another difference is that PocketBase and TrailBase are written in Go and Rust,
respectively, which may matter to you especially when modifying either or using
them as "frameworks".
Another difference is that PocketBase is written in Go, while TrailBase uses
Rust. Beyond personal preference, this may matter to you when using them as a
"framework" or when modifying the core.
Beyond personal preferences, both languages are speedy options in practice.
In practice, both languages are speedy options with rich ecosystems.
That said, Rust's lack of a runtime and lower FFI overhead should make it the
more performant choice.
To our own surprise, we found a significant gap. TrailBase is roughly 3.5x to
7x faster, in our [simplistic micro-benchmarks](/reference/benchmarks/)
depending on the use-case.
Not to toot our own horn, this is mostly thanks to combining a very low
overhead language, one of the fastest HTTP servers, a V8 engine, and incredibly
quick SQLite.
Measuring, we found a significant gap, with TrailBase's APIs being roughly
[10x and the JS runtime 40x faster](/reference/benchmarks/).
This is the result of SQLite and first-class JS engines being so quick that
even small overheads weigh heavily.
<div class="h-[30px]" />

View File

@@ -46,13 +46,10 @@ addRoute("GET", "/test/:table", stringHandler(async (req) => {
return `entries: ${rows[0][0]}`;
}
throw new HttpError(StatusCodes.BAD_REQUEST, "Missing '?table=' search query parm");
throw new HttpError(
StatusCodes.BAD_REQUEST, "Missing '?table=' search query param");
}));
```
More examples can be found in the repository in
`client/testfixture/scripts/index.ts`.
<Aside type="note" title="ToDO">
Needs more extensive documentation.
</Aside>

View File

@@ -1,56 +0,0 @@
---
title: Query APIs
---
import { Aside } from "@astrojs/starlight/components";
Query APIs are a more free-form and type-unsafe way of exposing data using
virtual tables based on user inputs and stored procedures. Please make sure to
take a look at [record APIs](/documentation/apis/record_apis) first. Views and
generated columns may be a better fit for transforming data if no explicit user
input is required.
<Aside type="note" title="Note">
Query APIs fill a gap that in other frameworks is often filled by custom
handlers. TrailBase may go this direction as well either with custom Axum
handlers or embedding another runtime. At least for the time being Query APIs
based on stored procedures are simply a very constrained (e.g. read-only) and
performant way to achieve similar goals.
</Aside>
## Example
Using migrations and sqlean's `define` we can define a table query with unbound
inputs (see placeholder $1):
```sql
CREATE VIRTUAL TABLE
_is_editor
USING
define((SELECT EXISTS (SELECT * FROM editors WHERE user = $1) AS is_editor));
```
Subsequently, an API can be configured to query the newly created `VIRTUAL
TABLE`, also binding URL query parameters as inputs to above placeholders.
```proto
query_apis: [
{
name: "is_editor"
virtual_table_name: "_is_editor"
params: [
{
name: "user"
type: BLOB
}
]
acl: WORLD
}
]
```
Finally, we can query the API, e.g. using curl:
```bash
curl -g 'localhost:4000/api/query/v1/is_editor?user=<b64_user_id>'
```

View File

@@ -184,22 +184,27 @@ handling in place so that only metadata is stored in the underlying table while
the actual files are kept in an object store.
By adding a `TEXT` column with a `CHECK(jsonschema('std.FileUpload'))`
constrained to your TABLE, you instruct TrailBase to store file metadata as
defined by the "std.FileUpload" JSON schema and write the contents off to
object storage.
Files can then be upload by sending the contents as part your JSON or
`multipart/form-data` POST request.
constraint in your table definition, you instruct TrailBase to store file
metadata as defined by the "std.FileUpload" JSON schema while keeping the contents
in a separate object store.
Files can then be uploaded by sending their contents as part of your JSON
requests or `multipart/form-data` POST requests.
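For illustration, here is a minimal sketch of such a table. The table and
column names are made up, and the constraint is written in exactly the form
quoted above; consult the record API docs for the precise `jsonschema()`
arguments.

```sql
-- Hypothetical example: "image" stores the std.FileUpload metadata as JSON
-- text, while the actual file contents are written to object storage.
CREATE TABLE articles (
  id    BLOB PRIMARY KEY NOT NULL,
  title TEXT NOT NULL,
  image TEXT CHECK(jsonschema('std.FileUpload'))
) STRICT;
```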
Downloading files is slightly different, since reading the column through
record APIs will only yield the metadata. There's a dedicated GET API endpoint
for file downloads:
`/api/v1/records/<record_api_name>/<record_id>/file/<column_name>`
### S3 Integration
By default, TrailBase will keep the object store on the local file system under
`<data-dir>/uploads`.
Alternatively, one can configure an S3 bucket via the
[configuration file](https://github.com/trailbaseio/trailbase/blob/38e0580fdc3109523cc66387a6b1a30259b270bd/proto/config.proto#L57);
it's not yet accessible through the admin dashboard.
If you need support for
[other storage backends](https://docs.rs/object_store/latest/object_store/#available-objectstore-implementations),
let us know.
<Aside type="note" title="S3">
In principle, TrailBase can also S3 object storage, however the settings
aren't yet wired through. Currently uploads are stored under
`--data-dir/uploads` in the local file system.
</Aside>
## Custom JSON Schemas

View File

@@ -5,35 +5,21 @@ description: Collocating your logic
import { Aside } from "@astrojs/starlight/components";
This article explores different ways to extend TrailBase and integrate your own
custom logic.
This article explores different ways to integrate your app with TrailBase and
extend it with your own custom logic.
## The Elephant in the Room
The question of where code should run weighs heavily on the web: push
everything to the server, more to the client or even the edge?
Answering this question is a lot simpler for rich client-side applications
such as mobile, desktop, and progressive web apps or SPAs, where the inclination
is to run on the user's device, providing privacy-friendly, snappy interactivity
and offline capabilities.
There are perfectly good reasons to not run everything in an untrusted,
battery-limited, SEO-unfriendly client-side sandbox, but the overall need for
server-side execution is greatly reduced.
**It's rich client-side apps where application servers like TrailBase can shine
providing common server-side functionality and strategic extension points**.
The question of where your code should run is as old as the modern internet,
becoming ever more present since moving away from a static mainframe model and
hermetic desktop applications.
As more interactive applications were pushed to slow platforms, such as early
browsers or mobile phones, there was an increased need to distribute
applications, with interactivity happening in the front-end and heavy lifting
happening in a back-end.
That's not to say that there aren't other good reasons to avoid running all your
code in an untrusted, potentially slow client-side sandbox.
In any case, having a rich client-side application like a mobile, desktop or
progressive web app will reduce your need for server-side integrations.
They're often a good place to start [^1], even if over time you decide to move more
logic to a backend to address issues like high fan-out, initial load
times, and SEO for web applications.
Inversely, if you have an existing application that is mostly running
server-side, you probably already have a database, auth, and are hosting your
own APIs, ... .
If so, there's intrinsically less any application base can help you with.
Remaining use-cases might be piecemeal adoption to speed up existing APIs or to
delegate authentication.
One advantage of lightweight, self-hosted solutions is that they can be
co-located with your existing stack to reduce costs and latency.
## Bring your own Backend
@@ -46,27 +32,23 @@ services.
Its stateless tokens using asymmetric crypto make it easy for other resource
servers to hermetically authenticate your users.
TrailBase's APIs can be accessed transitively, simply by forwarding user
tokens.
tokens [^1].
Alternatively, you can fall back to raw SQLite for reads, writes and even
schema alterations[^2].
<Aside type="note" title="Service Accounts">
We would like to add service accounts in the future to authorize privileged
services independent from user-provided tokens or using fake user-accounts
for services.
</Aside>
## Custom APIs in TrailBase
TrailBase provides three main ways to embed your code and expose custom APIs:
TrailBase provides a few ways to embed custom logic and expose custom API endpoints:
1. Rust/Axum handlers.
2. Stored procedures & [Query APIs](/documentation/apis/query_apis/)
3. SQLite extensions, virtual table modules & [Query APIs](/documentation/apis/query_apis/)
1. Rust HTTP handlers using Axum,
2. JS/TS handlers (see [JS APIs](/documentation/apis/js_apis/)),
3. Stored database procedures,
4. SQLite extensions and modules (virtual tables).
Beware that the Rust APIs and [Query APIs](/documentation/apis/query_apis/) are
likely subject to change. We rely on semantic versioning to explicitly signal
breaking changes.
<Aside type="note" title="Rust Handlers">
The Rust APIs are subject to change. However, we will rely on semantic
versioning to communicate breaking changes explicitly.
</Aside>
### Using ES6 JavaScript & TypeScript
@@ -82,17 +64,16 @@ That said, similar to using PocketBase as a Go framework, you can build your
own TrailBase binary and register custom Axum handlers written in Rust with the
main application router, see `/examples/custom-binary`.
### Stored Procedures & Query APIs
### Stored Procedures
Unlike Postgres or MySQL, SQLite does not support stored procedures out of
the box.
TrailBase has adopted sqlean's
Unlike Postgres or MySQL, SQLite does not support stored procedures out of the
box.
However, TrailBase has integrated sqlean's
[user-defined functions](https://github.com/nalgeon/sqlean/blob/main/docs/define.md)
to provide similar functionality and minimize lock-in over vanilla SQLite.
Check out [Query APIs](/documentation/apis/query_apis/), to see how stored
procedures can be hooked up.
to fill the gap. You can easily adopt sqlean in your own backends, avoiding
lock-in.
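For example, `define` lets you register a parameterized query as a virtual
table that can then be invoked like a table-valued function. A short sketch,
mirroring the example schema elsewhere in this repository (`$1` is an unbound
input supplied by the caller):

```sql
-- A "stored procedure" as a sqlean-defined virtual table.
CREATE VIRTUAL TABLE simple_vtable_from_stored_procedure
  USING define((SELECT UNIXEPOCH() AS epoch, $1 AS random_number));

-- Invoked like a table-valued function:
SELECT epoch, random_number FROM simple_vtable_from_stored_procedure(42);
```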
### SQLite extensions, virtual table modules & Query APIs
### SQLite Extensions and Modules a.k.a. Virtual Tables
Likely the most bespoke approach is to expose your functionality as a custom
SQLite extension or module similar to how TrailBase extends SQLite itself.
@@ -113,10 +94,11 @@ modules are:
<div class="h-[30px]" />
---
[^1]:
There are genuinely good properties in terms of latency, interactivity, offline
capabilities and privacy when processing your users' data locally on their
device.
We would like to add service accounts in the future to authorize privileged
services independent from user-provided tokens or using fake user-accounts
for services.
[^2]:
SQLite is running in WAL mode, which allows for parallel reads and

View File

@@ -8,10 +8,10 @@ hero:
built-in JS/ES6/TS Runtime, Auth, and Admin UI built on Rust, SQLite & V8.
<br />
<br />
Simplify your stack&colon; an easy to self-host, extensible, single-file
backend for your mobile, PWA, or desktop application.
Sub-millisecond latencies eliminate the need for dedicated caching
infrastructure and an entire class of issues.
Simplify with fewer moving parts&colon; an easy to self-host, single-file,
extensible backend for your mobile, web or desktop application.
Sub-millisecond latencies eliminate the need for dedicated caches&colon; no more
stale or inconsistent data.
image:
file: ../../assets/logo_512.webp
@@ -55,12 +55,13 @@ import { Duration100kInsertsChart } from "./reference/_benchmarks/benchmarks.tsx
* SQLite: one of the fastest full-SQL databases,
* V8: one of the fastest JS engines.
TrailBase APIs are [10x faster than PocketBase's and 20x faster than SupaBase's
needing only a fraction of the footprint](/reference/benchmarks), allowing
you to serve millions of customers from a tiny box.
TrailBase's APIs are [10x faster than PocketBase's and 20x faster than SupaBase's
with a fraction of the footprint](/reference/benchmarks), allowing you to
serve millions of customers from a tiny box.
In terms of JS/TS performance, V8 is roughly 40x faster than goja used by
PocketBase.
In terms of JS/TS performance, V8 is roughly
[40x faster](/reference/benchmarks#javascript-runtime-benchmarks)
than goja used by PocketBase.
</div>
<div slot="second">

View File

@@ -191,9 +191,9 @@ diminished by the time it takes to compute `fibonacci(N)` for sufficiently
large `N`.
{/*
Output:
TB: Called "/fibonacci" for fib(40) 100 times, took 0:00:14.988703 (limit=64)
PB: Called "/fibonacci" for fib(40) 100 times, took 0:10:01.096053 (limit=64)
Output:
TB: Called "/fibonacci" for fib(40) 100 times, took 0:00:14.988703 (limit=64)
PB: Called "/fibonacci" for fib(40) 100 times, took 0:10:01.096053 (limit=64)
*/}
We found that for `N=40`, V8 (TrailBase) is around 40 times faster than

View File

@@ -140,48 +140,6 @@ message RecordApiConfig {
optional string schema_access_rule = 15;
}
enum QueryApiParameterType {
TEXT = 1;
BLOB = 2;
INTEGER = 3;
REAL = 4;
}
message QueryApiParameter {
optional string name = 1;
optional QueryApiParameterType type = 2;
}
enum QueryApiAcl {
QUERY_API_ACL_UNDEFINED = 0;
WORLD = 1;
AUTHENTICATED = 2;
}
/// Configuration schema for Query APIs.
///
/// Note that unlike record APIs, query APIs are read-only,
/// which simplifies authorization.
/// That said, query APIs are backed by virtual tables, thus in theory, they
/// could allow writes (unlike views) in the future for module implementations
/// that allow it such as SQLite's R*-tree.
message QueryApiConfig {
optional string name = 1;
optional string virtual_table_name = 2;
/// Query parameters the Query API will accept and forward to the virtual
/// table (function) as argument expressions.
repeated QueryApiParameter params = 3;
// Read access control.
optional QueryApiAcl acl = 8;
optional string access_rule = 9;
// TODO: We might want to consider requiring or allowing to specify an
// optional JSON schema for query APIs to allow generating client bindings.
}
message JsonSchemaConfig {
optional string name = 1;
optional string schema = 2;
@@ -196,7 +154,6 @@ message Config {
required AuthConfig auth = 4;
repeated RecordApiConfig record_apis = 11;
repeated QueryApiConfig query_apis = 12;
repeated JsonSchemaConfig schemas = 21;
}

View File

@@ -5,13 +5,12 @@ use std::sync::Arc;
use crate::auth::jwt::JwtHelper;
use crate::auth::oauth::providers::{ConfiguredOAuthProviders, OAuthProviderType};
use crate::config::proto::{Config, QueryApiConfig, RecordApiConfig, S3StorageConfig};
use crate::config::proto::{Config, RecordApiConfig, S3StorageConfig};
use crate::config::{validate_config, write_config_and_vault_textproto};
use crate::constants::SITE_URL_DEFAULT;
use crate::data_dir::DataDir;
use crate::email::Mailer;
use crate::js::RuntimeHandle;
use crate::query::QueryApi;
use crate::records::RecordApi;
use crate::table_metadata::TableMetadataCache;
use crate::value_notifier::{Computed, ValueNotifier};
@@ -26,7 +25,6 @@ struct InternalState {
oauth: Computed<ConfiguredOAuthProviders, Config>,
mailer: Computed<Mailer, Config>,
record_apis: Computed<Vec<(String, RecordApi)>, Config>,
query_apis: Computed<Vec<(String, QueryApi)>, Config>,
config: ValueNotifier<Config>,
logs_conn: trailbase_sqlite::Connection,
@@ -67,8 +65,7 @@ impl AppState {
let config = ValueNotifier::new(args.config);
let table_metadata_clone = args.table_metadata.clone();
let conn_clone0 = args.conn.clone();
let conn_clone1 = args.conn.clone();
let conn_clone = args.conn.clone();
let runtime = args
.js_runtime_threads
@@ -95,7 +92,7 @@ impl AppState {
.record_apis
.iter()
.filter_map(|config| {
match build_record_api(conn_clone0.clone(), &table_metadata_clone, config.clone()) {
match build_record_api(conn_clone.clone(), &table_metadata_clone, config.clone()) {
Ok(api) => Some((api.api_name().to_string(), api)),
Err(err) => {
error!("{err}");
@@ -105,21 +102,6 @@ impl AppState {
})
.collect::<Vec<_>>();
}),
query_apis: Computed::new(&config, move |c| {
return c
.query_apis
.iter()
.filter_map(
|config| match build_query_api(conn_clone1.clone(), config.clone()) {
Ok(api) => Some((api.api_name().to_string(), api)),
Err(err) => {
error!("{err}");
None
}
},
)
.collect::<Vec<_>>();
}),
config,
conn: args.conn.clone(),
logs_conn: args.logs_conn,
@@ -209,15 +191,6 @@ impl AppState {
return None;
}
pub(crate) fn lookup_query_api(&self, name: &str) -> Option<QueryApi> {
for (query_api_name, query_api) in self.state.query_apis.load().iter() {
if query_api_name == name {
return Some(query_api.clone());
}
}
return None;
}
pub fn get_config(&self) -> Config {
return (*self.state.config.load_full()).clone();
}
@@ -364,8 +337,7 @@ pub async fn test_state(options: Option<TestStateOptions>) -> anyhow::Result<App
validate_config(&table_metadata, &config).unwrap();
let config = ValueNotifier::new(config);
let main_conn_clone0 = conn.clone();
let main_conn_clone1 = conn.clone();
let main_conn_clone = conn.clone();
let table_metadata_clone = table_metadata.clone();
let data_dir = DataDir(temp_dir.path().to_path_buf());
@@ -406,7 +378,7 @@ pub async fn test_state(options: Option<TestStateOptions>) -> anyhow::Result<App
.iter()
.filter_map(|config| {
let api = build_record_api(
main_conn_clone0.clone(),
main_conn_clone.clone(),
&table_metadata_clone,
config.clone(),
)
@@ -416,17 +388,6 @@ pub async fn test_state(options: Option<TestStateOptions>) -> anyhow::Result<App
})
.collect::<Vec<_>>();
}),
query_apis: Computed::new(&config, move |c| {
return c
.query_apis
.iter()
.filter_map(|config| {
let api = build_query_api(main_conn_clone1.clone(), config.clone()).unwrap();
return Some((api.api_name().to_string(), api));
})
.collect::<Vec<_>>();
}),
config,
conn,
logs_conn,
@@ -459,14 +420,6 @@ fn build_record_api(
return Err(format!("RecordApi references missing table: {config:?}"));
}
fn build_query_api(
conn: trailbase_sqlite::Connection,
config: QueryApiConfig,
) -> Result<QueryApi, String> {
// TODO: Check virtual table exists
return QueryApi::from(conn, config);
}
pub(crate) fn build_objectstore(
data_dir: &DataDir,
config: Option<&S3StorageConfig>,

View File

@@ -18,7 +18,6 @@ mod extract;
mod js;
mod listing;
mod migrations;
mod query;
mod scheduler;
mod schema;
mod server;

View File

@@ -1,84 +0,0 @@
use axum::body::Body;
use axum::http::{header::CONTENT_TYPE, StatusCode};
use axum::response::{IntoResponse, Response};
use log::*;
use thiserror::Error;
/// Publicly visible errors of record APIs.
///
/// This error is deliberately opaque and kept very close to HTTP error codes to avoid the leaking
/// of internals and provide a very clear mapping to codes.
/// NOTE: Do not use thiserror's #from, all mappings should be explicit.
#[derive(Debug, Error)]
pub enum QueryError {
#[error("Api Not Found")]
ApiNotFound,
#[error("Forbidden")]
Forbidden,
#[error("Bad request: {0}")]
BadRequest(&'static str),
#[error("Internal: {0}")]
Internal(Box<dyn std::error::Error + Send + Sync>),
}
impl From<trailbase_sqlite::Error> for QueryError {
fn from(err: trailbase_sqlite::Error) -> Self {
return match err {
trailbase_sqlite::Error::Rusqlite(err) => match err {
// rusqlite::Error::QueryReturnedNoRows => {
// #[cfg(debug_assertions)]
// info!("rusqlite returned empty rows error");
//
// Self::RecordNotFound
// }
rusqlite::Error::SqliteFailure(err, _msg) => {
match err.extended_code {
// List of error codes: https://www.sqlite.org/rescode.html
275 => Self::BadRequest("sqlite constraint: check"),
531 => Self::BadRequest("sqlite constraint: commit hook"),
3091 => Self::BadRequest("sqlite constraint: data type"),
787 => Self::BadRequest("sqlite constraint: fk"),
1043 => Self::BadRequest("sqlite constraint: function"),
1299 => Self::BadRequest("sqlite constraint: not null"),
2835 => Self::BadRequest("sqlite constraint: pinned"),
1555 => Self::BadRequest("sqlite constraint: pk"),
2579 => Self::BadRequest("sqlite constraint: row id"),
1811 => Self::BadRequest("sqlite constraint: trigger"),
2067 => Self::BadRequest("sqlite constraint: unique"),
2323 => Self::BadRequest("sqlite constraint: vtab"),
_ => Self::Internal(err.into()),
}
}
_ => Self::Internal(err.into()),
},
err => Self::Internal(err.into()),
};
}
}
impl IntoResponse for QueryError {
fn into_response(self) -> Response {
let (status, body) = match self {
Self::ApiNotFound => (StatusCode::METHOD_NOT_ALLOWED, None),
Self::Forbidden => (StatusCode::FORBIDDEN, None),
Self::BadRequest(msg) => (StatusCode::BAD_REQUEST, Some(msg.to_string())),
Self::Internal(err) if cfg!(debug_assertions) => {
(StatusCode::INTERNAL_SERVER_ERROR, Some(err.to_string()))
}
Self::Internal(_err) => (StatusCode::INTERNAL_SERVER_ERROR, None),
};
if let Some(body) = body {
return Response::builder()
.status(status)
.header(CONTENT_TYPE, "text/plain")
.body(Body::new(body))
.unwrap();
}
return Response::builder()
.status(status)
.body(Body::empty())
.unwrap();
}
}

View File

@@ -1,194 +0,0 @@
use axum::{
extract::{Json, Path, RawQuery, State},
routing::get,
Router,
};
use base64::prelude::*;
use std::collections::HashMap;
pub mod error;
pub mod query_api;
pub use error::QueryError;
pub use query_api::QueryApi;
use crate::auth::User;
use crate::config::proto::QueryApiParameterType;
use crate::records::sql_to_json::rows_to_json_arrays;
use crate::AppState;
pub(crate) fn router() -> Router<AppState> {
return Router::new().route("/:name", get(query_handler));
}
pub async fn query_handler(
State(state): State<AppState>,
Path(api_name): Path<String>,
RawQuery(query): RawQuery,
user: Option<User>,
) -> Result<Json<serde_json::Value>, QueryError> {
use QueryError as E;
let Some(api) = state.lookup_query_api(&api_name) else {
return Err(E::ApiNotFound);
};
let virtual_table_name = api.virtual_table_name();
let mut query_params: HashMap<String, String> = match query {
Some(ref query) => form_urlencoded::parse(query.as_bytes())
.map(|(k, v)| (k.to_string(), v.to_string()))
.collect(),
None => HashMap::new(),
};
let mut params: Vec<(String, trailbase_sqlite::Value)> = vec![];
for (name, typ) in api.params() {
match query_params.remove(name) {
Some(value) => match *typ {
QueryApiParameterType::Text => {
params.push((
format!(":{name}"),
trailbase_sqlite::Value::Text(value.clone()),
));
}
QueryApiParameterType::Blob => {
params.push((
format!(":{name}"),
trailbase_sqlite::Value::Blob(
BASE64_URL_SAFE
.decode(value)
.map_err(|_err| E::BadRequest("not b64"))?,
),
));
}
QueryApiParameterType::Real => {
params.push((
format!(":{name}"),
trailbase_sqlite::Value::Real(
value
.parse::<f64>()
.map_err(|_err| E::BadRequest("expected f64"))?,
),
));
}
QueryApiParameterType::Integer => {
params.push((
format!(":{name}"),
trailbase_sqlite::Value::Integer(
value
.parse::<i64>()
.map_err(|_err| E::BadRequest("expected i64"))?,
),
));
}
},
None => {
params.push((format!(":{name}"), trailbase_sqlite::Value::Null));
}
};
}
if !query_params.is_empty() {
return Err(E::BadRequest("invalid query param"));
}
api.check_api_access(&params, user.as_ref()).await?;
const LIMIT: usize = 128;
let response_rows = state
.conn()
.query(
&format!(
"SELECT * FROM {virtual_table_name}({placeholders}) WHERE TRUE LIMIT {LIMIT}",
placeholders = params
.iter()
.map(|e| e.0.as_str())
.collect::<Vec<_>>()
.join(", ")
),
params,
)
.await?;
let (json_rows, columns) =
rows_to_json_arrays(response_rows, LIMIT).map_err(|err| E::Internal(err.into()))?;
let Some(columns) = columns else {
return Err(E::Internal("Missing column mapping".into()));
};
// Turn the list of lists into an array of row-objects.
let rows = serde_json::Value::Array(
json_rows
.into_iter()
.map(|row| {
return serde_json::Value::Object(
row
.into_iter()
.enumerate()
.map(|(idx, value)| (columns.get(idx).unwrap().name.clone(), value))
.collect(),
);
})
.collect(),
);
return Ok(Json(rows));
}
#[cfg(test)]
mod test {
use super::*;
use axum::extract::{Json, Path, RawQuery, State};
use crate::app_state::*;
use crate::config::proto::{
QueryApiAcl, QueryApiConfig, QueryApiParameter, QueryApiParameterType,
};
#[tokio::test]
async fn test_query_api() {
let state = test_state(None).await.unwrap();
let conn = state.conn();
conn
.execute(
"CREATE VIRTUAL TABLE test_vtable USING define((SELECT $1 AS value))",
(),
)
.await
.unwrap();
let mut config = state.get_config();
config.query_apis.push(QueryApiConfig {
name: Some("test".to_string()),
virtual_table_name: Some("test_vtable".to_string()),
params: vec![QueryApiParameter {
name: Some("param0".to_string()),
r#type: Some(QueryApiParameterType::Text.into()),
}],
acl: Some(QueryApiAcl::World.into()),
access_rule: None,
});
state
.validate_and_update_config(config, None)
.await
.unwrap();
let Json(response) = query_handler(
State(state),
Path("test".to_string()),
RawQuery(Some(r#"param0=test_param"#.to_string())),
None,
)
.await
.unwrap();
assert_eq!(
response,
serde_json::json!([{
"value": "test_param"
}])
);
}
}

View File

@@ -1,152 +0,0 @@
use log::*;
use std::sync::Arc;
use crate::auth::User;
use crate::config::proto::{QueryApiAcl, QueryApiConfig, QueryApiParameterType};
use crate::query::QueryError;
#[derive(Clone)]
pub struct QueryApi {
state: Arc<QueryApiState>,
}
struct QueryApiState {
conn: trailbase_sqlite::Connection,
api_name: String,
virtual_table_name: String,
params: Vec<(String, QueryApiParameterType)>,
acl: Option<QueryApiAcl>,
access_rule: Option<String>,
}
impl QueryApi {
pub fn from(conn: trailbase_sqlite::Connection, config: QueryApiConfig) -> Result<Self, String> {
return Ok(QueryApi {
state: Arc::new(QueryApiState {
conn,
api_name: config.name.ok_or("Missing name".to_string())?,
virtual_table_name: config
.virtual_table_name
.ok_or("Missing vtable name".to_string())?,
params: config
.params
.iter()
.filter_map(|a| {
return match (&a.name, a.r#type) {
(Some(name), Some(typ)) => {
if let Ok(t) = typ.try_into() {
Some((name.clone(), t))
} else {
None
}
}
_ => None,
};
})
.collect(),
acl: config.acl.and_then(|acl| acl.try_into().ok()),
access_rule: config.access_rule,
}),
});
}
#[inline]
pub fn api_name(&self) -> &str {
&self.state.api_name
}
#[inline]
pub fn virtual_table_name(&self) -> &str {
return &self.state.virtual_table_name;
}
#[inline]
pub fn params(&self) -> &Vec<(String, QueryApiParameterType)> {
return &self.state.params;
}
pub(crate) async fn check_api_access(
&self,
query_params: &[(String, trailbase_sqlite::Value)],
user: Option<&User>,
) -> Result<(), QueryError> {
let Some(acl) = self.state.acl else {
return Err(QueryError::Forbidden);
};
'acl: {
match acl {
QueryApiAcl::Undefined => break 'acl,
QueryApiAcl::World => {}
QueryApiAcl::Authenticated => {
if user.is_none() {
break 'acl;
}
}
};
match self.state.access_rule {
None => return Ok(()),
Some(ref access_rule) => {
let params_subquery = query_params
.iter()
.filter_map(|(placeholder, _value)| {
let Some(name) = placeholder.strip_prefix(":") else {
warn!("Malformed placeholder: {placeholder}");
return None;
};
return Some(format!("{placeholder} AS {name}"));
})
.collect::<Vec<_>>()
.join(", ");
let access_query = format!(
r#"
SELECT
({access_rule})
FROM
(SELECT :__user_id AS id) AS _USER_,
(SELECT {params_subquery}) AS _PARAMS_
"#,
);
let mut params = query_params.to_vec();
params.push((
":__user_id".to_string(),
user.map_or(trailbase_sqlite::Value::Null, |u| {
trailbase_sqlite::Value::Blob(u.uuid.into())
}),
));
let row = match crate::util::query_one_row(&self.state.conn, &access_query, params).await
{
Ok(row) => row,
Err(err) => {
error!("Query API access query: '{access_query}' failed: {err}");
break 'acl;
}
};
let allowed: bool = row.get(0).unwrap_or_else(|err| {
if cfg!(test) {
panic!(
"Query API access query returned NULL. Failing closed: '{access_query}'\n{err}"
);
}
warn!("RLA query returned NULL. Failing closed: '{access_query}'\n{err}");
false
});
if allowed {
return Ok(());
}
}
}
}
return Err(QueryError::Forbidden);
}
}

View File

@@ -19,7 +19,7 @@ use crate::app_state::AppState;
use crate::assets::AssetService;
use crate::auth::util::is_admin;
use crate::auth::{self, AuthError, User};
use crate::constants::{AUTH_API_PATH, HEADER_CSRF_TOKEN, QUERY_API_PATH, RECORD_API_PATH};
use crate::constants::{AUTH_API_PATH, HEADER_CSRF_TOKEN, RECORD_API_PATH};
use crate::data_dir::DataDir;
use crate::logging;
use crate::scheduler;
@@ -250,7 +250,6 @@ impl Server {
let mut router = Router::new()
// Public, stable and versioned APIs.
.nest(&format!("/{RECORD_API_PATH}"), crate::records::router())
.nest(&format!("/{QUERY_API_PATH}"), crate::query::router())
.nest(&format!("/{AUTH_API_PATH}"), auth::router())
.route("/api/healthcheck", get(healthcheck_handler));