Tables and Views
All tables and views in the exposed schema that are accessible by the active database role are available for querying. They are exposed as one-level-deep routes.
For instance, the full contents of a table people are returned at:
curl "http://localhost:3000/people"
There are no nested routes (e.g. /deeply/nested/routes). Each route provides OPTIONS, GET, HEAD, POST, PATCH, and DELETE verbs depending entirely on database permissions.
Note
Why not provide nested routes? Many APIs allow nesting to retrieve related information, such as /films/1/director. We offer a more flexible mechanism (inspired by GraphQL) to embed related resources. This is covered in Resource Embedding.
Read
GET and HEAD
Using the GET method, you can retrieve table and view rows. The default Response Format is JSON.
A HEAD method will behave identically to GET except that no response body will be returned (RFC 2616). As an optimization, the generated query won’t execute an aggregate (to avoid unnecessary data transfer).
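For instance, a HEAD request for the same people endpoint can be issued with curl's -I flag:
curl -I "http://localhost:3000/people"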
Horizontal Filtering
You can filter result rows by adding conditions on columns. For instance, to return people aged under 13 years old:
curl "http://localhost:3000/people?age=lt.13"
You can evaluate multiple conditions on columns by adding more query string parameters. For instance, to return people who are 18 or older and are students:
curl "http://localhost:3000/people?age=gte.18&student=is.true"
Operators
These operators are available:
| Abbreviation | In PostgreSQL | Meaning |
|---|---|---|
| eq | = | equals |
| gt | > | greater than |
| gte | >= | greater than or equal |
| lt | < | less than |
| lte | <= | less than or equal |
| neq | <> or != | not equal |
| like | LIKE | LIKE operator (to avoid URL encoding you can use * as an alias of the percent sign % for the pattern) |
| ilike | ILIKE | ILIKE operator (to avoid URL encoding you can use * as an alias of the percent sign % for the pattern) |
| match | ~ | ~ operator, see Pattern Matching |
| imatch | ~* | ~* operator, see Pattern Matching |
| in | IN | one of a list of values, e.g. ?a=in.(1,2,3) |
| is | IS | checking for exact equality (null, true, false, unknown) |
| isdistinct | IS DISTINCT FROM | not equal, treating NULL as a comparable value |
| fts | @@ | Full-Text Search using to_tsquery |
| plfts | @@ | Full-Text Search using plainto_tsquery |
| phfts | @@ | Full-Text Search using phraseto_tsquery |
| wfts | @@ | Full-Text Search using websearch_to_tsquery |
| cs | @> | contains, e.g. ?tags=cs.{example,new} |
| cd | <@ | contained in, e.g. ?values=cd.{1,2,3} |
| ov | && | overlap (have points in common), e.g. ?period=ov.[2017-01-01,2017-06-30] |
| sl | << | strictly left of, e.g. ?range=sl.(1,10) |
| sr | >> | strictly right of |
| nxr | &< | does not extend to the right of, e.g. ?range=nxr.(1,10) |
| nxl | &> | does not extend to the left of |
| adj | -\|- | is adjacent to, e.g. ?range=adj.(1,10) |
| not | NOT | negates another operator, see Logical operators |
| or | OR | logical OR, see Logical operators |
| and | AND | logical AND, see Logical operators |
| all | ALL | comparison matches all the values in the list, see Operator Modifiers |
| any | ANY | comparison matches any value in the list, see Operator Modifiers |
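For instance, using the in operator from the table above to get people whose age matches any value in a list:
curl "http://localhost:3000/people?age=in.(18,21,30)"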
For more complicated filters you will have to create a new view in the database, or use a function. For instance, here’s a view to show “today’s stories” including possibly older pinned stories:
CREATE VIEW fresh_stories AS
SELECT *
FROM stories
WHERE pinned = true
OR published > now() - interval '1 day'
ORDER BY pinned DESC, published DESC;
The view will provide a new endpoint:
curl "http://localhost:3000/fresh_stories"
Logical operators
Multiple conditions on columns are evaluated using AND by default, but you can combine them using OR with the or operator. For example, to return people under 18 or over 21:
curl "http://localhost:3000/people?or=(age.lt.18,age.gt.21)"
To negate any operator, you can prefix it with not, like ?a=not.eq.2 or ?not.and=(a.gte.0,a.lte.100).
You can also apply complex logic to the conditions:
# curl "http://localhost:3000/people?grade=gte.90&student=is.true&or=(age.eq.14,not.and(age.gte.11,age.lte.17))"
curl --get "http://localhost:3000/people" \
-d "grade=gte.90" \
-d "student=is.true" \
-d "or=(age.eq.14,not.and(age.gte.11,age.lte.17))"
If a filter value contains a reserved character, you need to wrap it in double quotes:
curl -g 'http://localhost:3000/survey?or=(age_range.adj."[18,21)",age_range.cs."[30,35]")'
Operator Modifiers
You may further simplify the logic using the any/all modifiers of eq, like, ilike, gt, gte, lt, lte, match and imatch.
For instance, to avoid repeating the same column for or, use any to get people with last names that start with O or P:
curl -g "http://localhost:3000/people?last_name=like(any).{O*,P*}"
In a similar way, you can use all to avoid repeating the same column for and. To get the people with last names that start with O and end with n:
curl -g "http://localhost:3000/people?last_name=like(all).{O*,*n}"
Pattern Matching
The pattern-matching operators (like, ilike, match, imatch) exist to support filtering data using patterns instead of concrete strings, as described in the PostgreSQL docs.
For good performance on larger data sets, an appropriate index should be in place; even then, whether the query planner actually uses it depends on the pattern value and the table's data statistics.
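As a sketch (assuming a last_name column on the people table), a text_pattern_ops index lets the planner serve left-anchored patterns such as O*:
-- supports index scans for left-anchored LIKE patterns, e.g. last_name LIKE 'O%'
CREATE INDEX people_last_name_pattern_idx
  ON people (last_name text_pattern_ops);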
Full-Text Search
The fts filter mentioned above has a number of options to support flexible textual queries, namely the choice of plain vs phrase search and the language used for stemming. Suppose that tsearch is a table with a column my_tsv of type tsvector. The following examples illustrate the possibilities.
curl "http://localhost:3000/tsearch?my_tsv=fts(french).amusant"
curl "http://localhost:3000/tsearch?my_tsv=plfts.The%20Fat%20Cats"
curl "http://localhost:3000/tsearch?my_tsv=not.phfts(english).The%20Fat%20Cats"
curl "http://localhost:3000/tsearch?my_tsv=not.wfts(french).amusant"
Using websearch_to_tsquery requires PostgreSQL 11 or later; earlier versions will raise an error.
Vertical Filtering
When certain columns are wide (such as those holding binary data), it is more efficient for the server to withhold them in a response. The client can specify which columns are required using the select parameter.
curl "http://localhost:3000/people?select=first_name,age"
[
{"first_name": "John", "age": 30},
{"first_name": "Jane", "age": 20}
]
The default is *, meaning all columns. This value will become more important below in Resource Embedding.
Renaming Columns
You can rename the columns by prefixing them with an alias followed by the colon (:) operator.
curl "http://localhost:3000/people?select=fullName:full_name,birthDate:birth_date"
[
{"fullName": "John Doe", "birthDate": "04/25/1988"},
{"fullName": "Jane Doe", "birthDate": "01/12/1998"}
]
JSON Columns
To further reduce the data transferred, you can specify a path for a json or jsonb column using the arrow operators (-> or ->>) as per the PostgreSQL docs.
CREATE TABLE people (
id int,
json_data json
);
curl "http://localhost:3000/people?select=id,json_data->>blood_type,json_data->phones"
[
{ "id": 1, "blood_type": "A-", "phones": [{"country_code": "61", "number": "917-929-5745"}] },
{ "id": 2, "blood_type": "O+", "phones": [{"country_code": "43", "number": "512-446-4988"}, {"country_code": "43", "number": "213-891-5979"}] }
]
curl "http://localhost:3000/people?select=id,json_data->phones->0->>number"
[
{ "id": 1, "number": "917-929-5745"},
{ "id": 2, "number": "512-446-4988"}
]
This also works with filters:
curl "http://localhost:3000/people?select=id,json_data->blood_type&json_data->>blood_type=eq.A-"
[
{ "id": 1, "blood_type": "A-" },
{ "id": 3, "blood_type": "A-" },
{ "id": 7, "blood_type": "A-" }
]
Note that ->> is used to compare blood_type as text. To compare with an integer value, use ->:
curl "http://localhost:3000/people?select=id,json_data->age&json_data->age=gt.20"
[
{ "id": 11, "age": 25 },
{ "id": 12, "age": 30 },
{ "id": 15, "age": 35 }
]
Ordering is also supported:
curl "http://localhost:3000/people?select=id,json_data->age&order=json_data->>age.desc"
[
{ "id": 15, "age": 35 },
{ "id": 12, "age": 30 },
{ "id": 11, "age": 25 }
]
Composite / Array Columns
The arrow operators (->, ->>) can also be used for accessing composite fields and array elements.
CREATE TYPE coordinates AS (
lat decimal(8,6),
long decimal(9,6)
);
CREATE TABLE countries (
id int,
location coordinates,
languages text[]
);
# curl "http://localhost:3000/countries?select=id,location->>lat,location->>long,primary_language:languages->0&location->lat=gte.19"
curl --get "http://localhost:3000/countries" \
-d "select=id,location->>lat,location->>long,primary_language:languages->0" \
-d "location->lat=gte.19"
[
{
"id": 5,
"lat": "19.741755",
"long": "-155.844437",
"primary_language": "en"
}
]
Important
When using the -> and ->> operators on composite and array columns, PostgREST uses a query like to_jsonb(<col>)->'field'. For filtering and ordering on those nested fields to use an index, the index needs to be created on the same expression, including the to_jsonb(...) call:
CREATE INDEX ON mytable ((to_jsonb(data) -> 'identification' ->> 'registration_number'));
Casting Columns
Casting the columns is possible by suffixing them with the double colon (::) plus the desired type.
curl "http://localhost:3000/people?select=full_name,salary::text"
[
{"full_name": "John Doe", "salary": "90000.00"},
{"full_name": "Jane Doe", "salary": "120000.00"}
]
Note
To prevent invalidating Index Usage, casting is not allowed in horizontal filtering. If you need to filter on a casted value, you can use Computed Fields.
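A hedged sketch of that approach, assuming the people table above; salary_text is a hypothetical computed field (a function on the table's row type that PostgREST exposes as a virtual column):
-- hypothetical computed field: exposes salary as text so it can be
-- filtered without casting in the query string
CREATE FUNCTION salary_text(people) RETURNS text AS $$
  SELECT $1.salary::text;
$$ LANGUAGE SQL;
The cast then happens inside the database, so it can be selected and filtered like an ordinary column, e.g. ?select=full_name,salary_text&salary_text=eq.90000.00.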
Ordering
The reserved word order reorders the response rows. It uses a comma-separated list of columns and directions:
curl "http://localhost:3000/people?order=age.desc,height.asc"
If no direction is specified it defaults to ascending order:
curl "http://localhost:3000/people?order=age"
If you care where nulls are sorted, add nullsfirst or nullslast:
curl "http://localhost:3000/people?order=age.nullsfirst"
curl "http://localhost:3000/people?order=age.desc.nullslast"
You can also sort on fields of Composite / Array Columns or JSON Columns.
curl "http://localhost:3000/countries?order=location->>lat"
Index Usage
Indexes work transparently when using horizontal filtering, vertical filtering and ordering. For example, given the following index:
create index salary_idx on employees (salary);
We can confirm that a filter on employees uses the index by getting the Execution plan.
curl 'localhost:3000/employees?salary=eq.36000' -H "Accept: application/vnd.pgrst.plan"
Aggregate (cost=9.52..9.54 rows=1 width=144)
-> Bitmap Heap Scan on employees (cost=4.16..9.50 rows=2 width=136)
Recheck Cond: (salary = '$36,000.00'::money)
-> Bitmap Index Scan on salary_idx (cost=0.00..4.16 rows=2 width=0)
Index Cond: (salary = '$36,000.00'::money)
There we can see “Index Cond”, which confirms the index is being used by the query planner.
Insert
All tables and auto-updatable views can be modified through the API, subject to permissions of the requester’s database role.
To create a row in a database table, post a JSON object whose keys are the names of the columns you would like to create. Missing properties will be set to default values when applicable.
curl "http://localhost:3000/table_name" \
-X POST -H "Content-Type: application/json" \
-d '{ "col1": "value1", "col2": "value2" }'
HTTP/1.1 201 Created
No response body will be returned by default but you can use Return Representation to get the affected resource and Resource Embedding to add related resources.
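For example, a sketch of the same request asking for the created row back with the Prefer: return=representation header:
curl "http://localhost:3000/table_name" \
  -X POST -H "Content-Type: application/json" \
  -H "Prefer: return=representation" \
  -d '{ "col1": "value1", "col2": "value2" }'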
x-www-form-urlencoded
URL-encoded payloads can be posted with Content-Type: application/x-www-form-urlencoded.
curl "http://localhost:3000/people" \
-X POST -H "Content-Type: application/x-www-form-urlencoded" \
-d "name=John+Doe&age=50&weight=80"
Note
When inserting a row you must post a JSON object, not quoted JSON.
Yes
{ "a": 1, "b": 2 }
No
"{ \"a\": 1, \"b\": 2 }"
Some JavaScript libraries will post the data incorrectly if you’re not careful. For best results try one of the Client-Side Libraries built for PostgREST.
Important
It’s recommended that you use triggers instead of rules. Insertion on views with complex rules might not work out of the box with PostgREST due to its usage of CTEs. If you want to keep using rules, a workaround is to wrap the view insertion in a function and call it through the Functions as RPC interface; see the sketch below. For more details, see this GitHub issue.
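A minimal sketch of that workaround (the view, columns and function names here are hypothetical):
-- wraps the insert on a rule-backed view in a function
CREATE FUNCTION insert_my_view(_a int, _b text) RETURNS void AS $$
  INSERT INTO my_view (a, b) VALUES (_a, _b);
$$ LANGUAGE SQL;
The function can then be called through the RPC interface:
curl "http://localhost:3000/rpc/insert_my_view" \
  -X POST -H "Content-Type: application/json" \
  -d '{ "_a": 1, "_b": "some value" }'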
Bulk Insert
Bulk insert works exactly like single row insert except that you provide either a JSON array of objects having uniform keys, or lines in CSV format. This not only minimizes the HTTP requests required but uses a single INSERT statement on the back-end for efficiency.
To bulk insert CSV simply post to a table route with Content-Type: text/csv and include the names of the columns as the first row. For instance:
curl "http://localhost:3000/people" \
-X POST -H "Content-Type: text/csv" \
--data-binary @- << EOF
name,age,height
J Doe,62,70
Jonas,10,55
EOF
An empty field (,,) is coerced to an empty string and the reserved word NULL is mapped to the SQL null value. Note that there should be no spaces between the column names and commas.
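For instance, in this hypothetical payload the age of the second row is inserted as SQL null (an empty field would instead be an empty string, which only succeeds for text-typed columns):
curl "http://localhost:3000/people" \
  -X POST -H "Content-Type: text/csv" \
  --data-binary @- << EOF
name,age,height
J Doe,62,70
Jonas,NULL,55
EOF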
To bulk insert JSON, post an array of objects having all-matching keys:
curl "http://localhost:3000/people" \
-X POST -H "Content-Type: application/json" \
-d @- << EOF
[
{ "name": "J Doe", "age": 62, "height": 70 },
{ "name": "Janus", "age": 10, "height": 55 }
]
EOF
Bulk Insert with Default Values
Any missing columns in the payload will be inserted as null values. To use the DEFAULT column value instead, use the Prefer: missing=default header.
Having:
create table foo (
id bigint generated by default as identity primary key
, bar text
, baz int default 100
);
A request:
curl "http://localhost:3000/foo?columns=id,bar,baz" \
-H "Content-Type: application/json" \
-H "Prefer: missing=default, return=representation" \
-d @- << EOF
[
{ "bar": "val1" },
{ "bar": "val2", "baz": 15 }
]
EOF
Will result in:
[
{ "id": 1, "bar": "val1", "baz": 100 },
{ "id": 2, "bar": "val2", "baz": 15 }
]
Specifying Columns
By using the columns query parameter it's possible to specify the payload keys that will be inserted and ignore the rest of the payload.
curl "http://localhost:3000/datasets?columns=source,publication_date,figure" \
-X POST -H "Content-Type: application/json" \
-d @- << EOF
{
"source": "Natural Disaster Prevention and Control",
"publication_date": "2015-09-11",
"figure": 1100,
"location": "...",
"comment": "...",
"extra": "...",
"stuff": "..."
}
EOF
In this case, only source, publication_date and figure will be inserted. The rest of the JSON keys will be ignored.
Using this also has the side effect of being more efficient for Bulk Insert, since PostgREST will not process the JSON and will send it directly to PostgreSQL.
Update
To update a row or rows in a table, use the PATCH verb. Use Horizontal Filtering to specify which record(s) to update. Here is an example query setting the category column to "child" for all people below a certain age:
curl "http://localhost:3000/people?age=lt.13" \
-X PATCH -H "Content-Type: application/json" \
-d '{ "category": "child" }'
Updates also support Return Representation, Resource Embedding and Vertical Filtering.
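For instance, a sketch that returns the updated rows, assuming an id column on people:
curl "http://localhost:3000/people?id=eq.1" \
  -X PATCH -H "Content-Type: application/json" \
  -H "Prefer: return=representation" \
  -d '{ "category": "child" }'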
Warning
Beware of accidentally updating every row in a table. To learn to prevent that see Block Full-Table Operations.
Upsert
You can make an upsert with POST and the Prefer: resolution=merge-duplicates header:
curl "http://localhost:3000/products" \
-X POST -H "Content-Type: application/json" \
-H "Prefer: resolution=merge-duplicates" \
-d @- << EOF
[
{ "sku": "CL2031", "name": "Existing T-shirt", "price": 35 },
{ "sku": "CL2040", "name": "Existing Hoodie", "price": 60 },
{ "sku": "AC1022", "name": "New Cap", "price": 30 }
]
EOF
By default, upsert operates based on the primary key columns, so you must specify all of them.
You can also choose to ignore the duplicates with Prefer: resolution=ignore-duplicates.
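For example, reusing the products request above, existing SKUs are left untouched and only genuinely new rows get inserted:
curl "http://localhost:3000/products" \
  -X POST -H "Content-Type: application/json" \
  -H "Prefer: resolution=ignore-duplicates" \
  -d '[ { "sku": "CL2031", "name": "Existing T-shirt", "price": 35 } ]'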
Upsert works best when the primary key is natural (e.g. sku). However, it can work with surrogate primary keys (e.g. id serial primary key) if you also do a Bulk Insert with Default Values:
curl "http://localhost:3000/employees?colums=id,name,salary" \
-X POST -H "Content-Type: application/json" \
-H "Prefer: resolution=merge-duplicates, missing=default" \
-d @- << EOF
[
{ "id": 1, "name": "Existing employee 1", "salary": 30000 },
{ "id": 2, "name": "Existing employee 2", "salary": 42000 },
{ "name": "New employee 3", "salary": 50000 }
]
EOF
Important
After creating a table or changing its primary key, you must refresh PostgREST schema cache for upsert to work properly. To learn how to refresh the cache see Schema Cache Reloading.
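One way to trigger that reload from SQL, assuming the default pgrst notification channel:
NOTIFY pgrst, 'reload schema';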
On Conflict
By specifying the on_conflict query parameter, you can make upsert work on a column or columns that have a UNIQUE constraint.
curl "http://localhost:3000/employees?on_conflict=name" \
-X POST -H "Content-Type: application/json" \
-H "Prefer: resolution=merge-duplicates" \
-d @- << EOF
[
{ "name": "Old employee 1", "salary": 40000 },
{ "name": "Old employee 2", "salary": 52000 },
{ "name": "New employee 3", "salary": 60000 }
]
EOF
PUT
A single row upsert can be done by using PUT and filtering the primary key columns with eq:
curl "http://localhost/employees?id=eq.4" \
-X PUT -H "Content-Type: application/json" \
-d '{ "id": 4, "name": "Sara B.", "salary": 60000 }'
All the columns must be specified in the request body, including the primary key columns.
Delete
To delete rows in a table, use the DELETE verb plus Horizontal Filtering. For instance, deleting inactive users:
curl "http://localhost:3000/user?active=is.false" -X DELETE
Deletions also support Return Representation, Resource Embedding and Vertical Filtering.
curl "http://localhost:3000/user?id=eq.1" -X DELETE \
-H "Prefer: return=representation"
{"id": 1, "email": "johndoe@email.com"}
Warning
Beware of accidentally deleting all rows in a table. To learn to prevent that see Block Full-Table Operations.
Limited Update/Delete
You can limit the number of rows affected by Update or Delete with the limit query parameter. For this, you must add an explicit order on a unique column or columns.
curl -X PATCH "/users?limit=10&order=id&last_login=lt.2020-01-01" \
-H "Content-Type: application/json" \
-d '{ "status": "inactive" }'
curl -X DELETE "http://localhost:3000/users?limit=10&order=id&status=eq.inactive"
If your table has no unique columns, you can use the ctid system column. Using offset to target a different subset of rows is also possible.
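For instance, a sketch of a limited delete ordered by the ctid system column:
curl -X DELETE "http://localhost:3000/users?limit=10&order=ctid"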
Note
There is no native UPDATE...LIMIT or DELETE...LIMIT support in PostgreSQL; the generated query simulates that behavior and is based on this Crunchy Data blog post.