
Knex.js is a "batteries included" SQL query builder for Postgres, MSSQL, MySQL, MariaDB, SQLite3, Oracle, and Amazon Redshift designed to be flexible, portable, and fun to use. It features both traditional node style callbacks as well as a promise interface for cleaner async flow control, a stream interface, full featured query and schema builders, transaction support (with savepoints), connection pooling and standardized responses between different query clients and dialects.

The project is hosted on GitHub, and has a comprehensive test suite.

Knex is available for use under the MIT software license.

You can report bugs and discuss features on the GitHub issues page, add pages to the wiki or send tweets to @tgriesser.

Thanks to all of the great contributions to the project.

Special thanks to Taylor Otwell and his work on the Laravel Query Builder, from which much of the builder's code and syntax was originally derived.

Latest Release: 0.14.4 - Change Log


Installation

Knex can be used as an SQL query builder in both Node.JS and the browser, limited to WebSQL's constraints (like the inability to drop tables or read schemas). Composing SQL queries in the browser for execution on the server is highly discouraged, as this can be the cause of serious security vulnerabilities. The browser builds outside of WebSQL are primarily for learning purposes - for example, you can pop open the console and build queries on this page using the knex object.

Node.js

The primary target environment for Knex is Node.js. You will need to install the knex library, and then install the appropriate database driver: pg for PostgreSQL and Amazon Redshift, mysql for MySQL or MariaDB, sqlite3 for SQLite3, or mssql for MSSQL.

$ npm install knex --save

# Then add one of the following (with the --save flag if desired):
$ npm install pg
$ npm install sqlite3
$ npm install mysql
$ npm install mysql2
$ npm install mariasql
$ npm install strong-oracle
$ npm install oracle
$ npm install mssql

Browser

Knex can be built using a JavaScript build tool such as browserify or webpack. In fact, this documentation uses a webpack build which includes knex. View source on this page to see the browser build in action (the global knex variable).

Initializing the Library

The knex module is itself a function which takes a configuration object, accepting a few parameters. The client parameter is required and determines which client adapter will be used with the library.

var knex = require('knex')({
  client: 'mysql',
  connection: {
    host : '127.0.0.1',
    user : 'your_database_user',
    password : 'your_database_password',
    database : 'myapp_test'
  }
});

The connection options are passed directly to the appropriate database client to create the connection, and may be either an object, or a connection string:

Note: Knex's PostgreSQL client allows you to set the initial search path for each connection automatically using an additional option "searchPath" as shown below.

var pg = require('knex')({
  client: 'pg',
  connection: process.env.PG_CONNECTION_STRING,
  searchPath: ['knex', 'public'],
});

Note: When you use the SQLite3 adapter, a filename is required rather than a network connection. For example:

var knex = require('knex')({
  client: 'sqlite3',
  connection: {
    filename: "./mydb.sqlite"
  }
});

Note: When you use the PostgreSQL adapter to connect to a non-standard database, the database version can be specified in the knex configuration.

var knex = require('knex')({
  client: 'pg',
  version: '7.2',
  connection: {
    host : '127.0.0.1',
    user : 'your_database_user',
    password : 'your_database_password',
    database : 'myapp_test'
  }
});

You can also connect via a unix domain socket, which will ignore host and port.

var knex = require('knex')({
  client: 'mysql',
  connection: {
    socketPath : '/path/to/socket.sock',
    user : 'your_database_user',
    password : 'your_database_password',
    database : 'myapp_test'
  }
});

Initializing the library should normally happen only once in your application, as it creates a connection pool for the current database. You should use the instance returned from the initialize call throughout your application.
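A common way to follow this advice, sketched here under the assumption of a two-file layout (the db.js filename is illustrative, not a knex convention), is to initialize the instance in a single module and require it everywhere else:

```javascript
// db.js - initialize the library (and its connection pool) exactly once
module.exports = require('knex')({
  client: 'mysql',
  connection: {
    host : '127.0.0.1',
    user : 'your_database_user',
    password : 'your_database_password',
    database : 'myapp_test'
  }
});

// queries.js - reuse the shared instance instead of re-initializing
var knex = require('./db');
```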

Specify the client for the particular flavour of SQL you are interested in.

var pg = require('knex')({client: 'pg'});
knex('table').insert({a: 'b'}).returning('*').toString();
// "insert into "table" ("a") values ('b')"

pg('table').insert({a: 'b'}).returning('*').toString();
// "insert into "table" ("a") values ('b') returning *"

Debugging

Passing a debug: true flag on your initialization object will turn on debugging for all queries.
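For example, a minimal sketch of an initialization object with debugging enabled:

```javascript
var knex = require('knex')({
  client: 'mysql',
  connection: {...},
  debug: true  // log every query this instance executes
});
```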

Pooling

The client created by the configuration initializes a connection pool, using the generic-pool library. This connection pool has a default setting of min: 2, max: 10 for the MySQL and PG libraries, and a single connection for sqlite3 (due to issues with utilizing multiple connections on a single file). To change the config settings for the pool, pass a pool option as one of the keys in the initialize block.

Check out the generic-pool library for more information.

var knex = require('knex')({
  client: 'mysql',
  connection: {
    host : '127.0.0.1',
    user : 'your_database_user',
    password : 'your_database_password',
    database : 'myapp_test'
  },
  pool: { min: 0, max: 7 }
});

If you ever need to explicitly teardown the connection pool, you may use knex.destroy([callback]). You may use knex.destroy by passing a callback, or by chaining as a promise, just not both.
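As a sketch, the two teardown styles look like this (pick one per call, given an initialized knex instance):

```javascript
// Promise style
knex.destroy().then(function() {
  console.log('pool destroyed');
});

// Callback style - do not also chain .then() on the same call
knex.destroy(function() {
  console.log('pool destroyed');
});
```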

afterCreate

The afterCreate callback (rawDriverConnection, done) is called when the pool acquires a new connection from the database server. The done(err, connection) callback must be called so that knex can decide whether the connection is OK or whether it should be discarded right away from the pool.

var knex = require('knex')({
  client: 'pg',
  connection: {...},
  pool: {
    afterCreate: function (conn, done) {
      // in this example we use pg driver's connection API
      conn.query('SET timezone="UTC";', function (err) {
        if (err) {
          // first query failed, return error and don't try to make next query
          done(err, conn);
        } else {
          // do the second query...
          conn.query('SELECT set_limit(0.01);', function (err) {
            // if err is not falsy, connection is discarded from pool
            // if connection acquire was triggered by a query the error is passed to the query promise
            done(err, conn);
          });
        }
      });
    }
  }
});

acquireConnectionTimeout

acquireConnectionTimeout defaults to 60000ms and is used to determine how long knex should wait before throwing a timeout error when acquiring a connection is not possible. The most common cause for this is using up all the pool for transaction connections and then attempting to run queries outside of transactions while the pool is still full. The error thrown will provide information on the query the connection was for to simplify the job of locating the culprit.

var knex = require('knex')({
  client: 'pg',
  connection: {...},
  pool: {...},
  acquireConnectionTimeout: 10000
});

fetchAsString

Utilized by Oracledb. An array of types. The valid types are 'DATE', 'NUMBER' and 'CLOB'. When any column having one of the specified types is queried, the column data is returned as a string instead of the default representation.

var knex = require('knex')({
  client: 'oracledb',
  connection: {...},
  fetchAsString: [ 'number', 'clob' ]
});

Migrations

For convenience, any migration configuration may be specified when initializing the library. Read the Migrations section for more information and a full list of configuration options.

var knex = require('knex')({
  client: 'mysql',
  connection: {
    host : '127.0.0.1',
    user : 'your_database_user',
    password : 'your_database_password',
    database : 'myapp_test'
  },
  migrations: {
    tableName: 'migrations'
  }
});

postProcessResponse

Hook for modifying returned rows before passing them on to the user. For example, one can do snake_case -> camelCase conversion for returned columns with this hook. The queryContext is only available if configured for a query builder instance via queryContext.

var knex = require('knex')({
  client: 'mysql',
  // overly simplified snake_case -> camelCase converter
  postProcessResponse: (result, queryContext) => {
    // TODO: add special case for raw results (depends on dialect)
    if (Array.isArray(result)) {
      return result.map(row => convertToCamel(row));
    } else {
      return convertToCamel(result);
    }
  }
});
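The convertToCamel helper above is not part of knex; a minimal sketch of what it might look like (ignoring raw result shapes, which vary per dialect) is:

```javascript
// Hypothetical helper: converts the snake_case keys of a result row
// to camelCase. Not part of the knex API.
function snakeToCamel(str) {
  return str.replace(/_([a-z])/g, function (m, c) { return c.toUpperCase(); });
}

function convertToCamel(row) {
  var out = {};
  Object.keys(row).forEach(function (key) {
    out[snakeToCamel(key)] = row[key];
  });
  return out;
}

console.log(convertToCamel({ first_name: 'Test', last_name: 'User' }));
// → { firstName: 'Test', lastName: 'User' }
```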

wrapIdentifier

Knex supports transforming identifier names automatically to quoted versions for each dialect. For example 'Table.columnName as foo' for PostgreSQL is converted to "Table"."columnName" as "foo".

With wrapIdentifier one may override the way identifiers are transformed. It can be used to override the default functionality and, for example, to help with camelCase -> snake_case conversion.

The conversion function wrapIdentifier(value, dialectImpl, context): string gets each part of the identifier as a single value, the original conversion function from the dialect implementation, and the queryContext, which is only available if configured for a query builder instance via builder.queryContext, and for schema builder instances via schema.queryContext or table.queryContext. For example, with the query builder, knex('table').withSchema('foo').select('table.field as otherName').where('id', 1) will call the wrapIdentifier converter for the following values: 'table', 'foo', 'table', 'field', 'otherName' and 'id'.

var knex = require('knex')({
  client: 'mysql',
  // overly simplified camelCase -> snake_case converter
  wrapIdentifier: (value, origImpl, queryContext) => origImpl(convertToSnakeCase(value))
});
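Likewise, convertToSnakeCase is not provided by knex; a minimal sketch might be:

```javascript
// Hypothetical helper: converts one camelCase identifier part to
// snake_case. Not part of the knex API.
function convertToSnakeCase(value) {
  return value.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase();
}

console.log(convertToSnakeCase('columnName')); // → 'column_name'
```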

Knex Query Builder

The heart of the library, the knex query builder is the interface used for building and executing standard SQL queries, such as select, insert, update, delete.

Identifier Syntax

In many places in the APIs, identifiers like table names or column names can be passed to methods.

Most commonly one needs just plain tableName.columnName, tableName or columnName, but in many cases one also needs to pass an alias for how that identifier will be referred to later in the query.

There are two ways to declare an alias for an identifier. One can directly give an 'as aliasName' suffix for the identifier (e.g. 'identifierName as aliasName'), or one can pass an object { aliasName: 'identifierName' }.

If the object has multiple aliases { alias1: 'identifier1', alias2: 'identifier2' }, then all the aliased identifiers are expanded to a comma-separated list.

NOTE: The identifier syntax has no place for selecting a schema, so if you are doing schemaName.tableName, the query might be rendered incorrectly. Use .withSchema('schemaName') instead.

knex({ a: 'table', b: 'table' })
.select({
  aTitle: 'a.title',
  bTitle: 'b.title'
})
.whereRaw('?? = ??', ['a.column_1', 'b.column_2'])
Outputs:
select `a`.`title` as `aTitle`, `b`.`title` as `bTitle` from `table` as `a`, `table` as `b` where `a`.`column_1` = `b`.`column_2`
knexknex(tableName, options={only: boolean}) / knex.[methodName]

The query builder starts off either by specifying a tableName you wish to query against, or by calling any method directly on the knex object. This kicks off a jQuery-like chain, with which you can call additional query builder methods as needed to construct the query, eventually calling any of the interface methods, to either convert toString, or execute the query with a promise, callback, or stream. Optional second argument for passing options: only: if true, the ONLY keyword is used before the tableName to discard inheriting tables' data. *NOTE: only supported in PostgreSQL for now.

timeout.timeout(ms, options={cancel: boolean})

Sets a timeout for the query and will throw a TimeoutError if the timeout is exceeded. The error contains information about the query, bindings, and the timeout that was set. Useful for complex queries that you want to make sure are not taking too long to execute. Optional second argument for passing options: cancel: if true, cancel query if timeout is reached. *NOTE: only supported in MySQL and MariaDB for now.

knex.select().from('books').timeout(1000)
Outputs:
select * from `books`
knex.select().from('books').timeout(1000, {cancel: true}) // MySQL and MariaDB only
Outputs:
select * from `books`
select.select([*columns])

Creates a select query, taking an optional array of columns for the query, eventually defaulting to * if none are specified when the query is built. The response of a select call will resolve with an array of objects selected from the database.

knex.select('title', 'author', 'year').from('books')
Outputs:
select `title`, `author`, `year` from `books`
knex.select().table('books')
Outputs:
select * from `books`
as.as(name)

Allows for aliasing a subquery, taking the string you wish to name the current query. If the query is not a sub-query, it will be ignored.

knex.avg('sum_column1').from(function() {
  this.sum('column1 as sum_column1').from('t1').groupBy('column1').as('t1')
}).as('ignored_alias')
Outputs:
select avg(`sum_column1`) from (select sum(`column1`) as `sum_column1` from `t1` group by `column1`) as `t1`
column.column(columns)

Specifically set the columns to be selected on a select query, taking an array, an object or a list of column names. Passing an object will automatically alias the columns with the given keys.

knex.column('title', 'author', 'year').select().from('books')
Outputs:
select `title`, `author`, `year` from `books`
knex.column(['title', 'author', 'year']).select().from('books')
Outputs:
select `title`, `author`, `year` from `books`
knex.column('title', {by: 'author'}, 'year').select().from('books')
Outputs:
select `title`, `author` as `by`, `year` from `books`
from.from([tableName], options={only: boolean})

Specifies the table used in the current query, replacing the current table name if one has already been specified. This is typically used in the sub-queries performed in the advanced where or union methods. Optional second argument for passing options: only: if true, the ONLY keyword is used before the tableName to discard inheriting tables' data. *NOTE: only supported in PostgreSQL for now.

knex.select('*').from('users')
Outputs:
select * from `users`
with.with(alias, function|raw)

Add a "with" clause to the query. "With" clauses are supported by PostgreSQL, Oracle, SQLite3 and MSSQL.

knex.with('with_alias', knex.raw('select * from "books" where "author" = ?', 'Test')).select('*').from('with_alias')
Outputs:
with `with_alias` as (select * from "books" where "author" = 'Test') select * from `with_alias`
knex.with('with_alias', (qb) => {
  qb.select('*').from('books').where('author', 'Test')
}).select('*').from('with_alias')
Outputs:
with `with_alias` as (select * from `books` where `author` = 'Test') select * from `with_alias`
withSchema.withSchema([schemaName])

Specifies the schema to be used as a prefix for the table name.

knex.withSchema('public').select('*').from('users')
Outputs:
select * from `public`.`users`

Where Clauses

Several methods exist to assist in dynamic where clauses. In many places functions may be used in place of values, constructing subqueries. In most places existing knex queries may be used to compose sub-queries, etc. Take a look at a few of the examples for each method for instruction on use:

Important: Supplying knex with an undefined value to any of the where functions will cause knex to throw an error during SQL compilation. This is for both your sake and ours. Knex cannot know what to do with undefined values in a where clause, and generally it would be a programmatic error to supply one to begin with. The error thrown will contain a message with the type of query and the compiled query string. Example:

knex('accounts')
.where('login', undefined)
.select()
.toSQL()
Error:
Undefined binding(s) detected when compiling SELECT query: select * from `accounts` where `login` = ?
where.where(~mixed~)

Object Syntax:

knex('users').where({
  first_name: 'Test',
  last_name:  'User'
}).select('id')
Outputs:
select `id` from `users` where `first_name` = 'Test' and `last_name` = 'User'

Key, Value:

knex('users').where('id', 1)
Outputs:
select * from `users` where `id` = 1

Grouped Chain:

knex('users').where(function() {
  this.where('id', 1).orWhere('id', '>', 10)
}).orWhere({name: 'Tester'})
Outputs:
select * from `users` where (`id` = 1 or `id` > 10) or (`name` = 'Tester')

Operator:

knex('users').where('columnName', 'like', '%rowlikeme%')
Outputs:
select * from `users` where `columnName` like '%rowlikeme%'

The above query demonstrates the common use case of returning all users for which a specific pattern appears within a designated column.

knex('users').where('votes', '>', 100)
Outputs:
select * from `users` where `votes` > 100
var subquery = knex('users').where('votes', '>', 100).andWhere('status', 'active').orWhere('name', 'John').select('id');

knex('accounts').where('id', 'in', subquery)
Outputs:
select * from `accounts` where `id` in (select `id` from `users` where `votes` > 100 and `status` = 'active' or `name` = 'John')

.orWhere with an object automatically wraps the statement and creates an or (and - and - and) clause

knex('users').where('id', 1).orWhere({votes: 100, user: 'knex'})
Outputs:
select * from `users` where `id` = 1 or (`votes` = 100 and `user` = 'knex')
whereNot.whereNot(~mixed~)

Object Syntax:

knex('users').whereNot({
  first_name: 'Test',
  last_name:  'User'
}).select('id')
Outputs:
select `id` from `users` where not `first_name` = 'Test' and not `last_name` = 'User'

Key, Value:

knex('users').whereNot('id', 1)
Outputs:
select * from `users` where not `id` = 1

Grouped Chain:

knex('users').whereNot(function() {
  this.where('id', 1).orWhereNot('id', '>', 10)
}).orWhereNot({name: 'Tester'})
Outputs:
select * from `users` where not (`id` = 1 or not `id` > 10) or not `name` = 'Tester'

Operator:

knex('users').whereNot('votes', '>', 100)
Outputs:
select * from `users` where not `votes` > 100

CAVEAT: whereNot is not suitable for "in" and "between" type subqueries. You should use "not in" and "not between" instead.

var subquery = knex('users')
  .whereNot('votes', '>', 100)
  .andWhere('status', 'active')
  .orWhere('name', 'John')
  .select('id');

knex('accounts').where('id', 'not in', subquery)
Outputs:
select * from `accounts` where `id` not in (select `id` from `users` where not `votes` > 100 and `status` = 'active' or `name` = 'John')
whereIn.whereIn(column, array|callback|builder) / .orWhereIn

Shorthand for .where('id', 'in', obj), the .whereIn and .orWhereIn methods add a "where in" clause to the query.

knex.select('name').from('users')
.whereIn('id', [1, 2, 3])
.orWhereIn('id', [4, 5, 6])
Outputs:
select `name` from `users` where `id` in (1, 2, 3) or `id` in (4, 5, 6)
knex.select('name').from('users')
.whereIn('account_id', function() {
  this.select('id').from('accounts');
})
Outputs:
select `name` from `users` where `account_id` in (select `id` from `accounts`)
var subquery = knex.select('id').from('accounts');

knex.select('name').from('users')
  .whereIn('account_id', subquery)
Outputs:
select `name` from `users` where `account_id` in (select `id` from `accounts`)
whereNotIn.whereNotIn(column, array|callback|builder) / .orWhereNotIn
knex('users').whereNotIn('id', [1, 2, 3])
Outputs:
select * from `users` where `id` not in (1, 2, 3)
knex('users').where('name', 'like', '%Test%').orWhereNotIn('id', [1, 2, 3])
Outputs:
select * from `users` where `name` like '%Test%' or `id` not in (1, 2, 3)
whereNull.whereNull(column) / .orWhereNull
knex('users').whereNull('updated_at')
Outputs:
select * from `users` where `updated_at` is null
whereNotNull.whereNotNull(column) / .orWhereNotNull
knex('users').whereNotNull('created_at')
Outputs:
select * from `users` where `created_at` is not null
whereExists.whereExists(builder | callback) / .orWhereExists
knex('users').whereExists(function() {
  this.select('*').from('accounts').whereRaw('users.account_id = accounts.id');
})
Outputs:
select * from `users` where exists (select * from `accounts` where users.account_id = accounts.id)
knex('users').whereExists(knex.select('*').from('accounts').whereRaw('users.account_id = accounts.id'))
Outputs:
select * from `users` where exists (select * from `accounts` where users.account_id = accounts.id)
whereNotExists.whereNotExists(builder | callback) / .orWhereNotExists
knex('users').whereNotExists(function() {
  this.select('*').from('accounts').whereRaw('users.account_id = accounts.id');
})
Outputs:
select * from `users` where not exists (select * from `accounts` where users.account_id = accounts.id)
knex('users').whereNotExists(knex.select('*').from('accounts').whereRaw('users.account_id = accounts.id'))
Outputs:
select * from `users` where not exists (select * from `accounts` where users.account_id = accounts.id)
whereBetween.whereBetween(column, range) / .orWhereBetween
knex('users').whereBetween('votes', [1, 100])
Outputs:
select * from `users` where `votes` between 1 and 100
whereNotBetween.whereNotBetween(column, range) / .orWhereNotBetween
knex('users').whereNotBetween('votes', [1, 100])
Outputs:
select * from `users` where `votes` not between 1 and 100
whereRaw.whereRaw(query, [bindings])

Convenience helper for .where(knex.raw(query)).

knex('users').whereRaw('id = ?', [1])
Outputs:
select * from `users` where id = 1

Join Methods

Several methods are provided which assist in building joins.

join.join(table, first, [operator], second)

The join builder can be used to specify joins between tables, with the first argument being the joining table, the next three arguments being the first join column, the join operator and the second join column, respectively.

knex('users')
.join('contacts', 'users.id', '=', 'contacts.user_id')
.select('users.id', 'contacts.phone')
Outputs:
select `users`.`id`, `contacts`.`phone` from `users` inner join `contacts` on `users`.`id` = `contacts`.`user_id`
knex('users')
.join('contacts', 'users.id', 'contacts.user_id')
.select('users.id', 'contacts.phone')
Outputs:
select `users`.`id`, `contacts`.`phone` from `users` inner join `contacts` on `users`.`id` = `contacts`.`user_id`

For grouped joins, specify a function as the second argument for the join query, and use on with orOn or andOn to create joins that are grouped with parentheses.

knex.select('*').from('users').join('accounts', function() {
  this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` inner join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`

For nested join statements, specify a function as the first argument of on, orOn or andOn.

knex.select('*').from('users').join('accounts', function() {
  this.on(function() {
    this.on('accounts.id', '=', 'users.account_id')
    this.orOn('accounts.owner_id', '=', 'users.id')
  })
})
Outputs:
select * from `users` inner join `accounts` on (`accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`)

It is also possible to use an object to represent the join syntax.

knex.select('*').from('users').join('accounts', {'accounts.id': 'users.account_id'})
Outputs:
select * from `users` inner join `accounts` on `accounts`.`id` = `users`.`account_id`

If you need to use a literal value (string, number, or boolean) in a join instead of a column, use knex.raw.

knex.select('*').from('users').join('accounts', 'accounts.type', knex.raw('?', ['admin']))
Outputs:
select * from `users` inner join `accounts` on `accounts`.`type` = 'admin'
innerJoin.innerJoin(table, ~mixed~)
knex.from('users').innerJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` inner join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.table('users').innerJoin('accounts', 'users.id', '=', 'accounts.user_id')
Outputs:
select * from `users` inner join `accounts` on `users`.`id` = `accounts`.`user_id`
knex('users').innerJoin('accounts', function() {
  this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` inner join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
leftJoin.leftJoin(table, ~mixed~)
knex.select('*').from('users').leftJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` left join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').leftJoin('accounts', function() {
  this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` left join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
leftOuterJoin.leftOuterJoin(table, ~mixed~)
knex.select('*').from('users').leftOuterJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` left outer join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').leftOuterJoin('accounts', function() {
  this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` left outer join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
rightJoin.rightJoin(table, ~mixed~)
knex.select('*').from('users').rightJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` right join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').rightJoin('accounts', function() {
  this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` right join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
rightOuterJoin.rightOuterJoin(table, ~mixed~)
knex.select('*').from('users').rightOuterJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` right outer join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').rightOuterJoin('accounts', function() {
  this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` right outer join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
fullOuterJoin.fullOuterJoin(table, ~mixed~)
knex.select('*').from('users').fullOuterJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` full outer join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').fullOuterJoin('accounts', function() {
  this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` full outer join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
crossJoin.crossJoin(table, ~mixed~)

Cross join conditions are only supported in MySQL and SQLite3. For join conditions, use innerJoin instead.

knex.select('*').from('users').crossJoin('accounts')
Outputs:
select * from `users` cross join `accounts`
knex.select('*').from('users').crossJoin('accounts', 'users.id', 'accounts.user_id')
Outputs:
select * from `users` cross join `accounts` on `users`.`id` = `accounts`.`user_id`
knex.select('*').from('users').crossJoin('accounts', function() {
  this.on('accounts.id', '=', 'users.account_id').orOn('accounts.owner_id', '=', 'users.id')
})
Outputs:
select * from `users` cross join `accounts` on `accounts`.`id` = `users`.`account_id` or `accounts`.`owner_id` = `users`.`id`
joinRaw.joinRaw(sql, [bindings])
knex.select('*').from('accounts').joinRaw('natural full join table1').where('id', 1)
Outputs:
select * from `accounts` natural full join table1 where `id` = 1
knex.select('*').from('accounts').join(knex.raw('natural full join table1')).where('id', 1)
Outputs:
select * from `accounts` inner join natural full join table1 where `id` = 1

OnClauses

onIn.onIn(column, values)

Adds an onIn clause to the query.

knex.select('*').from('users').join('contacts', function() {
  this.on('users.id', '=', 'contacts.id').onIn('contacts.id', [7, 15, 23, 41])
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and `contacts`.`id` in (7, 15, 23, 41)
onNotIn.onNotIn(column, values)

Adds an onNotIn clause to the query.

knex.select('*').from('users').join('contacts', function() {
  this.on('users.id', '=', 'contacts.id').onNotIn('contacts.id', [7, 15, 23, 41])
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and `contacts`.`id` not in (7, 15, 23, 41)
onNull.onNull(column)

Adds an onNull clause to the query.

knex.select('*').from('users').join('contacts', function() {
  this.on('users.id', '=', 'contacts.id').onNull('contacts.email')
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and `contacts`.`email` is null
onNotNull.onNotNull(column)

Adds an onNotNull clause to the query.

knex.select('*').from('users').join('contacts', function() {
  this.on('users.id', '=', 'contacts.id').onNotNull('contacts.email')
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and `contacts`.`email` is not null
onExists.onExists(builder | callback)

Adds an onExists clause to the query.

knex.select('*').from('users').join('contacts', function() {
  this.on('users.id', '=', 'contacts.id').onExists(function() {
    this.select('*').from('accounts').whereRaw('users.account_id = accounts.id');
  })
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and exists (select * from `accounts` where users.account_id = accounts.id)
onNotExists.onNotExists(builder | callback)

Adds an onNotExists clause to the query.

knex.select('*').from('users').join('contacts', function() {
  this.on('users.id', '=', 'contacts.id').onNotExists(function() {
    this.select('*').from('accounts').whereRaw('users.account_id = accounts.id');
  })
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and not exists (select * from `accounts` where users.account_id = accounts.id)
onBetween.onBetween(column, range)

Adds an onBetween clause to the query.

knex.select('*').from('users').join('contacts', function() {
  this.on('users.id', '=', 'contacts.id').onBetween('contacts.id', [5, 30])
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and `contacts`.`id` between 5 and 30
onNotBetween.onNotBetween(column, range)

Adds an onNotBetween clause to the query.

knex.select('*').from('users').join('contacts', function() {
  this.on('users.id', '=', 'contacts.id').onNotBetween('contacts.id', [5, 30])
})
Outputs:
select * from `users` inner join `contacts` on `users`.`id` = `contacts`.`id` and `contacts`.`id` not between 5 and 30

ClearClauses

clearSelect.clearSelect()

Clears all select clauses from the query, excluding subqueries.

knex.select('email', 'name').from('users').clearSelect()
Outputs:
select * from `users`
clearWhere.clearWhere()

Clears all where clauses from the query, excluding subqueries.

knex.select('email', 'name').from('users').where('id', 1).clearWhere()
Outputs:
select `email`, `name` from `users`
distinct.distinct()

Sets a distinct clause on the query.

// select distinct first_name, last_name from customers
knex('customers')
  .distinct('first_name', 'last_name')
  .select()
Outputs:
select distinct `first_name`, `last_name` from `customers`
groupBy.groupBy(*names)

Adds a group by clause to the query.

knex('users').groupBy('count')
Outputs:
select * from `users` group by `count`
groupByRaw.groupByRaw(sql)

Adds a raw group by clause to the query.

knex.select('year', knex.raw('SUM(profit)')).from('sales').groupByRaw('year WITH ROLLUP')
Outputs:
select `year`, SUM(profit) from `sales` group by year WITH ROLLUP
orderBy.orderBy(column, [direction])

Adds an order by clause to the query.

knex('users').orderBy('name', 'desc')
Outputs:
select * from `users` order by `name` desc
orderByRaw.orderByRaw(sql)

Adds an order by raw clause to the query.

knex.select('*').from('table').orderByRaw('col DESC NULLS LAST')
Outputs:
select * from `table` order by col DESC NULLS LAST

Having Clauses

having.having(column, operator, value)

Adds a having clause to the query.

knex('users')
.groupBy('count')
.orderBy('name', 'desc')
.having('count', '>', 100)
Outputs:
select * from `users` group by `count` having `count` > 100 order by `name` desc
havingIn.havingIn(column, values)

Adds a havingIn clause to the query.

knex.select('*').from('users').havingIn('id', [5, 3, 10, 17])
Outputs:
select * from `users` having `id` in (5, 3, 10, 17)
havingNotIn.havingNotIn(column, values)

Adds a havingNotIn clause to the query.

knex.select('*').from('users').havingNotIn('id', [5, 3, 10, 17])
Outputs:
select * from `users` having `id` not in (5, 3, 10, 17)
havingNull.havingNull(column)

Adds a havingNull clause to the query.

knex.select('*').from('users').havingNull('email')
Outputs:
select * from `users` having `email` is null
havingNotNull.havingNotNull(column)

Adds a havingNotNull clause to the query.

knex.select('*').from('users').havingNotNull('email')
Outputs:
select * from `users` having `email` is not null
havingExists.havingExists(builder | callback)

Adds a havingExists clause to the query.

knex.select('*').from('users').havingExists(function() {
  this.select('*').from('accounts').whereRaw('users.account_id = accounts.id');
})
Outputs:
select * from `users` having exists (select * from `accounts` where users.account_id = accounts.id)
havingNotExists.havingNotExists(builder | callback)

Adds a havingNotExists clause to the query.

knex.select('*').from('users').havingNotExists(function() {
  this.select('*').from('accounts').whereRaw('users.account_id = accounts.id');
})
Outputs:
select * from `users` having not exists (select * from `accounts` where users.account_id = accounts.id)
havingBetween.havingBetween(column, range)

Adds a havingBetween clause to the query.

knex.select('*').from('users').havingBetween('id', [5, 10])
Outputs:
select * from `users` having `id` between 5 and 10
havingNotBetween.havingNotBetween(column, range)

Adds a havingNotBetween clause to the query.

knex.select('*').from('users').havingNotBetween('id', [5, 10])
Outputs:
select * from `users` having `id` not between 5 and 10
havingRaw.havingRaw(column, operator, value)

Adds a havingRaw clause to the query.

knex('users')
.groupBy('count')
.orderBy('name', 'desc')
.havingRaw('count > ?', [100])
Outputs:
select * from `users` group by `count` having count > 100 order by `name` desc
offset.offset(value)

Adds an offset clause to the query.

knex.select('*').from('users').offset(10)
Outputs:
select * from `users` limit 18446744073709551615 offset 10
limit.limit(value)

Adds a limit clause to the query.

knex.select('*').from('users').limit(10).offset(30)
Outputs:
select * from `users` limit 10 offset 30
union.union([*queries], [wrap])

Creates a union query, taking an array or a list of callbacks to build the union statement, with an optional boolean wrap parameter. If wrap is true, the queries are individually wrapped in parentheses.

knex.select('*').from('users').whereNull('last_name').union(function() {
  this.select('*').from('users').whereNull('first_name');
})
Outputs:
select * from `users` where `last_name` is null union select * from `users` where `first_name` is null
unionAll.unionAll(query)

Creates a union all query, with the same method signature as the union method.

knex.select('*').from('users').whereNull('last_name').unionAll(function() {
  this.select('*').from('users').whereNull('first_name');
})
Outputs:
select * from `users` where `last_name` is null union all select * from `users` where `first_name` is null
insert.insert(data, [returning])

Creates an insert query, taking either a hash of properties to be inserted into the row, or an array of inserts, to be executed as a single insert command. Resolves the promise / fulfills the callback with an array containing the first insert id of the inserted model, an array containing all inserted ids for PostgreSQL, or a row count for Amazon Redshift.

// Returns [1] in "mysql", "sqlite", "oracle"; [] in "postgresql" unless the 'returning' parameter is set.
knex('books').insert({title: 'Slaughterhouse Five'})
Outputs:
insert into `books` (`title`) values ('Slaughterhouse Five')
// Normalizes for empty keys on multi-row insert:
knex('coords').insert([{x: 20}, {y: 30},  {x: 10, y: 20}])
Outputs:
insert into `coords` (`x`, `y`) values (20, DEFAULT), (DEFAULT, 30), (10, 20)
// Returns [2] in "mysql", "sqlite"; [2, 3] in "postgresql"
knex.insert([{title: 'Great Gatsby'}, {title: 'Fahrenheit 451'}], 'id').into('books')
Outputs:
insert into `books` (`title`) values ('Great Gatsby'), ('Fahrenheit 451')

If you prefer that undefined keys be replaced with NULL instead of DEFAULT, set the useNullAsDefault configuration parameter in the knex config.

var knex = require('knex')({
  client: 'mysql',
  connection: {
    host : '127.0.0.1',
    user : 'your_database_user',
    password : 'your_database_password',
    database : 'myapp_test'
  },
  useNullAsDefault: true
});

knex('coords').insert([{x: 20}, {y: 30}, {x: 10, y: 20}])
// insert into `coords` (`x`, `y`) values (20, NULL), (NULL, 30), (10, 20)"
returning.returning(column) / .returning([column1, column2, ...])

Utilized by PostgreSQL, MSSQL, and Oracle databases, the returning method specifies which column should be returned by the insert and update methods. The column parameter may be a string or an array of strings. When passed a string, the SQL result is reported as an array of values from the specified column. When passed an array of strings, the result is reported as an array of objects, each containing a single property for each of the specified columns. The returning method is not supported on Amazon Redshift.

// Returns [1]
knex('books')
  .returning('id')
  .insert({title: 'Slaughterhouse Five'})
Outputs:
insert into `books` (`title`) values ('Slaughterhouse Five')
// Returns [2] in "mysql", "sqlite"; [2, 3] in "postgresql"
knex('books')
  .returning('id')
  .insert([{title: 'Great Gatsby'}, {title: 'Fahrenheit 451'}])
Outputs:
insert into `books` (`title`) values ('Great Gatsby'), ('Fahrenheit 451')
// Returns [ { id: 1, title: 'Slaughterhouse Five' } ]
knex('books')
  .returning(['id','title'])
  .insert({title: 'Slaughterhouse Five'})
Outputs:
insert into `books` (`title`) values ('Slaughterhouse Five')
update.update(data, [returning]) / .update(key, value, [returning])

Creates an update query, taking a hash of properties or a key/value pair to be updated based on the other query constraints. Resolves the promise / fulfills the callback with the number of affected rows for the query. If a key to be updated has value undefined it is ignored.

knex('books')
.where('published_date', '<', 2000)
.update({
  status: 'archived',
  thisKeyIsSkipped: undefined
})
Outputs:
update `books` set `status` = 'archived' where `published_date` < 2000
// Returns [1] in "mysql", "sqlite", "oracle"; [] in "postgresql" unless the 'returning' parameter is set.
knex('books').update('title', 'Slaughterhouse Five')
Outputs:
update `books` set `title` = 'Slaughterhouse Five'
del / delete.del()

Aliased to del as delete is a reserved word in JavaScript, this method deletes one or more rows, based on other conditions specified in the query. Resolves the promise / fulfills the callback with the number of affected rows for the query.

knex('accounts')
.where('activated', false)
.del()
Outputs:
delete from `accounts` where `activated` = false
transacting.transacting(transactionObj)

Used by knex.transaction, the transacting method may be chained to any query, passing it the transaction object of the transaction you wish the query to join.

var Promise = require('bluebird');
knex.transaction(function(trx) {
  knex('books').transacting(trx).insert({name: 'Old Books'})
    .then(function(resp) {
      var id = resp[0];
      return someExternalMethod(id, trx);
    })
    .then(trx.commit)
    .catch(trx.rollback);
})
.then(function(resp) {
  console.log('Transaction complete.');
})
.catch(function(err) {
  console.error(err);
});
forUpdate.transacting(t).forUpdate()

Dynamically added after a transaction is specified, forUpdate adds a FOR UPDATE clause to a select statement in PostgreSQL and MySQL. Not supported on Amazon Redshift due to lack of table locks.

knex('tableName')
.transacting(trx)
.forUpdate()
.select('*')
Outputs:
select * from `tableName` for update
forShare.transacting(t).forShare()

Dynamically added after a transaction is specified, forShare adds a FOR SHARE clause in PostgreSQL and a LOCK IN SHARE MODE clause in MySQL to a select statement. Not supported on Amazon Redshift due to lack of table locks.

knex('tableName')
.transacting(trx)
.forShare()
.select('*')
Outputs:
select * from `tableName` lock in share mode
count.count(column|raw)

Performs a count on the specified column. Also accepts raw expressions. Note that in Postgres, count returns a bigint type which will be a String and not a Number (more info).

knex('users').count('active')
Outputs:
select count(`active`) from `users`
knex('users').count('active as a')
Outputs:
select count(`active`) as `a` from `users`
knex('users').count(knex.raw('??', ['active']))
Outputs:
select count(`active`) from `users`

Use countDistinct to add a distinct expression inside the aggregate function.

knex('users').countDistinct('active')
Outputs:
select count(distinct `active`) from `users`
min.min(column|raw)

Gets the minimum value for the specified column. Also accepts raw expressions.

knex('users').min('age')
Outputs:
select min(`age`) from `users`
knex('users').min('age as a')
Outputs:
select min(`age`) as `a` from `users`
knex('users').min(knex.raw('??', ['age']))
Outputs:
select min(`age`) from `users`
max.max(column|raw)

Gets the maximum value for the specified column. Also accepts raw expressions.

knex('users').max('age')
Outputs:
select max(`age`) from `users`
knex('users').max('age as a')
Outputs:
select max(`age`) as `a` from `users`
knex('users').max(knex.raw('??', ['age']))
Outputs:
select max(`age`) from `users`
sum.sum(column|raw)

Retrieve the sum of the values of a given column. Also accepts raw expressions.

knex('users').sum('products')
Outputs:
select sum(`products`) from `users`
knex('users').sum('products as p')
Outputs:
select sum(`products`) as `p` from `users`
knex('users').sum(knex.raw('??', ['products']))
Outputs:
select sum(`products`) from `users`

Use sumDistinct to add a distinct expression inside the aggregate function.

knex('users').sumDistinct('products')
Outputs:
select sum(distinct `products`) from `users`
avg.avg(column|raw)

Retrieve the average of the values of a given column. Also accepts raw expressions.

knex('users').avg('age')
Outputs:
select avg(`age`) from `users`
knex('users').avg('age as a')
Outputs:
select avg(`age`) as `a` from `users`
knex('users').avg(knex.raw('??', ['age']))
Outputs:
select avg(`age`) from `users`

Use avgDistinct to add a distinct expression inside the aggregate function.

knex('users').avgDistinct('age')
Outputs:
select avg(distinct `age`) from `users`
increment.increment(column, amount)

Increments a column value by the specified amount.

knex('accounts')
.where('userid', '=', 1)
.increment('balance', 10)
Outputs:
update `accounts` set `balance` = `balance` + 10 where `userid` = 1
decrement.decrement(column, amount)

Decrements a column value by the specified amount.

knex('accounts').where('userid', '=', 1).decrement('balance', 5)
Outputs:
update `accounts` set `balance` = `balance` - 5 where `userid` = 1
truncate.truncate()

Truncates the current table.

knex('accounts').truncate()
Outputs:
truncate `accounts`
pluck.pluck(id)

This will pluck the specified column from each row in your results, yielding a promise which resolves to the array of values selected.

knex.table('users').pluck('id').then(function(ids) { console.log(ids); });
first.first([columns])

Similar to select, but only retrieves & resolves with the first record from the query.

knex.table('users').first('id', 'name').then(function(row) { console.log(row); });
clone.clone()

Clones the current query chain, useful for re-using partial query snippets in other queries without mutating the original.

modify.modify(fn, *arguments)

Allows encapsulating and re-using query snippets and common behaviors as functions. The callback function should receive the query builder as its first argument, followed by the rest of the (optional) parameters passed to modify.

var withUserName = function(queryBuilder, foreignKey) {
  queryBuilder.leftJoin('users', foreignKey, 'users.id').select('users.user_name');
};
knex.table('articles').select('title', 'body').modify(withUserName, 'articles_user.id').then(function(article) {
  console.log(article.user_name);
});
columnInfo.columnInfo([columnName])

Returns an object with the column info about the current table, or an individual column if one is passed. The returned object contains the following keys:

  defaultValue: the default value for the column
  type: the column type
  maxLength: the max length set for the column
  nullable: whether the column may be null

knex('users').columnInfo().then(function(info) { /* ... */ });
debug.debug([enabled])

Overrides the global debug setting for the current query chain. If enabled is omitted, query debugging will be turned on.

connection

(incomplete) - This feature was incorrectly documented as functional.
If implemented, the method would set the db connection to use for the query without using the connection pool.

options.options()

Allows for mixing in additional options as defined by database client specific libraries:

knex('accounts as a1')
.leftJoin('accounts as a2', function() {
  this.on('a1.email', '<>', 'a2.email');
})
.select(['a1.email', 'a2.email'])
.where(knex.raw('a1.id = 1'))
.options({ nestTables: true, rowMode: 'array' })
.limit(2)
.then(...
queryContext.queryContext(context)

Allows for configuring a context to be passed to the wrapIdentifier and postProcessResponse hooks:

knex('accounts as a1')
.queryContext({ foo: 'bar' })
.select(['a1.email', 'a2.email'])

The context can be any kind of value and will be passed to the hooks without modification. However, note that objects will be shallow-cloned when a query builder instance is cloned, which means that they will contain all the properties of the original object but will not be the same object reference. This allows modifying the context for the cloned query builder instance.

Calling queryContext with no arguments will return any context configured for the query builder instance.

Transactions

Transactions are an important feature of relational databases, as they allow correct recovery from failures and keep a database consistent even in cases of system failure. All queries within a transaction are executed on the same database connection and run as a single unit of work. Any failure causes the database to roll back the queries executed on that connection to the pre-transaction state.

Transactions are handled by passing a handler function into knex.transaction. The handler function accepts a single argument, an object which may be used in two ways:

  1. As the "promise aware" knex connection
  2. As an object to pass into a query with transacting, eventually calling commit or rollback.
Consider these two examples:

var Promise = require('bluebird');

// Using trx as a query builder:
knex.transaction(function(trx) {

  var books = [
    {title: 'Canterbury Tales'},
    {title: 'Moby Dick'},
    {title: 'Hamlet'}
  ];

  return trx
    .insert({name: 'Old Books'}, 'id')
    .into('catalogues')
    .then(function(ids) {
      return Promise.map(books, function(book) {
        book.catalogue_id = ids[0];

        // Some validation could take place here.

        return trx.insert(book).into('books');
      });
    });
})
.then(function(inserts) {
  console.log(inserts.length + ' new books saved.');
})
.catch(function(error) {
  // If we get here, that means that neither the 'Old Books' catalogues insert,
  // nor any of the books inserts will have taken place.
  console.error(error);
});

And then this example:

var Promise = require('bluebird');

// Using trx as a transaction object:
knex.transaction(function(trx) {

  var books = [
    {title: 'Canterbury Tales'},
    {title: 'Moby Dick'},
    {title: 'Hamlet'}
  ];

  knex.insert({name: 'Old Books'}, 'id')
    .into('catalogues')
    .transacting(trx)
    .then(function(ids) {
      return Promise.map(books, function(book) {
        book.catalogue_id = ids[0];

        // Some validation could take place here.

        return knex.insert(book).into('books').transacting(trx);
      });
    })
    .then(trx.commit)
    .catch(trx.rollback);
})
.then(function(inserts) {
  console.log(inserts.length + ' new books saved.');
})
.catch(function(error) {
  // If we get here, that means that neither the 'Old Books' catalogues insert,
  // nor any of the books inserts will have taken place.
  console.error(error);
});

Throwing an error directly from the transaction handler function automatically rolls back the transaction, same as returning a rejected promise.

Notice that if a promise is not returned within the handler, it is up to you to ensure trx.commit or trx.rollback is called; otherwise the transaction connection will hang.

Note that Amazon Redshift does not support savepoints in transactions.

Schema Builder

knex.schema is a getter function which returns a stateful object containing the query, so be sure to obtain a new instance of knex.schema for every query. These methods return promises.

withSchemaknex.schema.withSchema([schemaName])

Specifies the schema to be used when using the schema-building commands.

knex.schema.withSchema('public').createTable('users', function (table) {
  table.increments();
})
Outputs:
create table `public`.`users` (`id` int unsigned not null auto_increment primary key)
createTableknex.schema.createTable(tableName, callback)

Creates a new table on the database, with a callback function to modify the table's structure, using the schema-building commands.

knex.schema.createTable('users', function (table) {
  table.increments();
  table.string('name');
  table.timestamps();
})
Outputs:
create table `users` (`id` int unsigned not null auto_increment primary key, `name` varchar(255), `created_at` datetime, `updated_at` datetime)
renameTableknex.schema.renameTable(from, to)

Renames a table from its current tableName to another.

knex.schema.renameTable('users', 'old_users')
Outputs:
rename table `users` to `old_users`
dropTableknex.schema.dropTable(tableName)

Drops a table, specified by tableName.

knex.schema.dropTable('users')
Outputs:
drop table `users`
hasTableknex.schema.hasTable(tableName)

Checks for a table's existence by tableName, resolving with a boolean to signal if the table exists.

knex.schema.hasTable('users').then(function(exists) {
  if (!exists) {
    return knex.schema.createTable('users', function(t) {
      t.increments('id').primary();
      t.string('first_name', 100);
      t.string('last_name', 100);
      t.text('bio');
    });
  }
});
hasColumnknex.schema.hasColumn(tableName, columnName)

Checks if a column exists in the current table, resolving the promise with a boolean: true if the column exists, false otherwise.

dropTableIfExistsknex.schema.dropTableIfExists(tableName)

Drops a table conditionally if the table exists, specified by tableName.

knex.schema.dropTableIfExists('users')
Outputs:
drop table if exists `users`
tableknex.schema.table(tableName, callback)

Chooses a database table, and then modifies the table, using the Schema Building functions inside of the callback.

knex.schema.table('users', function (table) {
  table.dropColumn('name');
  table.string('first_name');
  table.string('last_name');
})
Outputs:
alter table `users` add `first_name` varchar(255), add `last_name` varchar(255); alter table `users` drop `name`
rawknex.schema.raw(statement)

Run an arbitrary sql query in the schema builder chain.

knex.schema.raw("SET sql_mode='TRADITIONAL'")
.table('users', function (table) {
  table.dropColumn('name');
  table.string('first_name');
  table.string('last_name');
})
Outputs:
SET sql_mode='TRADITIONAL'; alter table `users` add `first_name` varchar(255), add `last_name` varchar(255); alter table `users` drop `name`
queryContextknex.schema.queryContext(context)

Allows configuring a context to be passed to the wrapIdentifier hook. The context can be any kind of value and will be passed to wrapIdentifier without modification.

knex.schema.queryContext({ foo: 'bar' })
.table('users', function (table) {
  table.string('first_name');
  table.string('last_name');
})

The context configured will be passed to wrapIdentifier for each identifier that needs to be formatted, including the table and column names. However, a different context can be set for the column names via table.queryContext.

Calling queryContext with no arguments will return any context configured for the schema builder instance.

Schema Building:

dropColumntable.dropColumn(name)

Drops a column, specified by the column's name

dropColumnstable.dropColumns(*columns)

Drops multiple columns, taking a variable number of column names.

renameColumntable.renameColumn(from, to)

Renames a column from one name to another.

incrementstable.increments(name)

Adds an auto incrementing column. In PostgreSQL this is a serial; in Amazon Redshift an integer identity(1,1). This will be used as the primary key for the table. Also available is a bigIncrements if you wish to add a bigint incrementing number (in PostgreSQL bigserial).

// create table 'users' with a primary key using 'increments()'
knex.schema.createTable('users', function (table) {
  table.increments('userId');
  table.string('name');
});

// reference the 'users' primary key in new table 'posts'
knex.schema.createTable('posts', function (table) {
  table.integer('author').unsigned().notNullable();
  table.string('title', 30);
  table.string('content');

  table.foreign('author').references('userId').inTable('users');
});
integertable.integer(name)

Adds an integer column.

bigIntegertable.bigInteger(name)

In MySQL or PostgreSQL, adds a bigint column, otherwise adds a normal integer. Note that bigint data is returned as a string in queries because JavaScript may be unable to parse them without loss of precision.

texttable.text(name, [textType])

Adds a text column, with optional textType for MySQL text datatype preference. textType may be mediumtext or longtext; otherwise defaults to text.

stringtable.string(name, [length])

Adds a string column, with optional length defaulting to 255.

floattable.float(column, [precision], [scale])

Adds a float column, with optional precision (defaults to 8) and scale (defaults to 2).

decimaltable.decimal(column, [precision], [scale])

Adds a decimal column, with optional precision (defaults to 8) and scale (defaults to 2). Specifying NULL as precision creates a decimal column that can store numbers of any precision and scale. (Only supported for Oracle, SQLite, Postgres)

booleantable.boolean(name)

Adds a boolean column.

datetable.date(name)

Adds a date column.

dateTimetable.dateTime(name)

Adds a dateTime column.

timetable.time(name)

Adds a time column. Not supported on Amazon Redshift.

timestamptable.timestamp(name, [standard])

Adds a timestamp column, defaulting to timestamptz in PostgreSQL unless true is passed as the second argument. For example:

table.timestamp('created_at').defaultTo(knex.fn.now());
timestampstable.timestamps([useTimestamps], [defaultToNow])

Adds created_at and updated_at columns to the database, setting each to dateTime types. When true is passed as the first argument, a timestamp type is used instead. Both columns default to being not null and to the current timestamp when true is passed as the second argument.

dropTimestampstable.dropTimestamps()

Drops the columns created_at and updated_at from the table, which can be created via timestamps.

binarytable.binary(name, [length])

Adds a binary column, with optional length argument for MySQL.

enum / enutable.enu(col, values)

Adds an enum column (aliased to enu, as enum is a reserved word in JavaScript). Implemented as an unchecked varchar(255) on Amazon Redshift. Note that the second argument is an array of values. Example:

table.enu('column', ['value1', 'value2'])
jsontable.json(name)

Adds a json column, using the built-in json type in postgresql, defaulting to a text column in older versions of postgresql or in unsupported databases. Note that when setting an array (or a value that could be an array) as the value of a json or jsonb column, you should use JSON.stringify() to convert your value to a string prior to passing it to the query builder, e.g.

knex.table('users')
.where({id: 1})
.update({json_data: JSON.stringify(mightBeAnArray)});

This is because postgresql has a native array type which uses a syntax incompatible with json; knex has no way of knowing which syntax to use, and calling JSON.stringify() forces json-style syntax.

jsonbtable.jsonb(name)

Adds a jsonb column. Works similarly to table.json(), but uses the native jsonb type if possible.

uuidtable.uuid(name)

Adds a uuid column - this uses the built-in uuid type in postgresql, falling back to a char(36) in other databases.

commenttable.comment(value)

Sets the comment for a table.

enginetable.engine(val)

Sets the engine for the database table, only available within a createTable call, and only applicable to MySQL.

charsettable.charset(val)

Sets the charset for the database table, only available within a createTable call, and only applicable to MySQL.

collatetable.collate(val)

Sets the collation for the database table, only available within a createTable call, and only applicable to MySQL.

inheritstable.inherits(val)

Sets the tables that this table inherits, only available within a createTable call, and only applicable to PostgreSQL.

specificTypetable.specificType(name, type)

Sets a specific type for the column creation, if you'd like to add a column type that isn't supported here.

indextable.index(columns, [indexName], [indexType])

Adds an index to a table over the given columns. A default index name using the columns is used unless indexName is specified. The indexType can be optionally specified for PostgreSQL. Amazon Redshift does not allow creating an index.

dropIndextable.dropIndex(columns, [indexName])

Drops an index from a table. A default index name using the columns is used unless indexName is specified (in which case columns is ignored). Amazon Redshift does not allow creating an index.

uniquetable.unique(columns, [indexName])

Adds a unique index to a table over the given columns. A default index name using the columns is used unless indexName is specified.

knex.schema.alterTable('users', function(t) {
  t.unique('email')
})
knex.schema.alterTable('job', function(t) {
  t.unique(['account_id', 'program_id'])
})
foreigntable.foreign(columns, [foreignKeyName])[.onDelete(statement).onUpdate(statement).withKeyName(foreignKeyName)]

Adds a foreign key constraint to a table for an existing column using table.foreign(column).references(column), or for multiple columns using table.foreign(columns).references(columns). A default key name using the columns is used unless foreignKeyName is specified. You can also chain onDelete() and/or onUpdate() to set the reference option (RESTRICT, CASCADE, SET NULL, NO ACTION) for the operation, and chain withKeyName() to override the default key name generated from the table and column names (the result is identical to specifying the second parameter to foreign()). Note that using foreign() is the same as column.references(column), but it works for existing columns.

knex.schema.table('users', function (table) {
  table.integer('user_id').unsigned()
  table.foreign('user_id').references('Items.user_id_in_items')
})
dropForeigntable.dropForeign(columns, [foreignKeyName])

Drops a foreign key constraint from a table. A default foreign key name using the columns is used unless foreignKeyName is specified (in which case columns is ignored).

dropUniquetable.dropUnique(columns, [indexName])

Drops a unique key constraint from a table. A default unique key name using the columns is used unless indexName is specified (in which case columns is ignored).

dropPrimarytable.dropPrimary([constraintName])

Drops the primary key constraint on a table. Defaults to tablename_pkey unless constraintName is specified.

queryContexttable.queryContext(context)

Allows configuring a context to be passed to the wrapIdentifier hook for formatting table builder identifiers. The context can be any kind of value and will be passed to wrapIdentifier without modification.

knex.schema.table('users', function (table) {
  table.queryContext({ foo: 'bar' });
  table.string('first_name');
  table.string('last_name');
})

This method also enables overwriting the context configured for a schema builder instance via schema.queryContext:

knex.schema.queryContext('schema context')
  .table('users', function (table) {
    table.queryContext('table context');
    table.string('first_name');
    table.string('last_name');
})

Note that it's also possible to overwrite the table builder context for any column in the table definition:

knex.schema.queryContext('schema context')
  .table('users', function (table) {
    table.queryContext('table context');
    table.string('first_name').queryContext('first_name context');
    table.string('last_name').queryContext('last_name context');
})

Calling queryContext with no arguments will return any context configured for the table builder instance.

Chainable Methods:

The following three methods may be chained on the schema building methods, as modifiers to the column.

altercolumn.alter()

Marks the column as an alter / modify, instead of the default add. Note: This only works in .alterTable() and is not supported by SQLite or Amazon Redshift. The alter statement is not applied incrementally on top of the old column definition, so if you want to add notNull and keep the old default value, the alter statement must contain both: .notNull().defaultTo(1).alter(). If you only add .notNull().alter(), the old default value will be dropped.

knex.schema.alterTable('user', function(t) {
  t.increments().primary(); // add
  // drops previous default value from column, change type to string and add not nullable constraint
  t.string('username', 35).notNullable().alter();
  // drops both the not null constraint and the default value
  t.integer('age').alter();
});
indexcolumn.index([indexName], [indexType])

Specifies a field as an index. If an indexName is specified, it is used in place of the standard index naming convention of tableName_columnName. The indexType can be optionally specified for PostgreSQL. No-op if this is chained off of a field that cannot be indexed.

primarycolumn.primary([constraintName])

  When called on a single column it will set that column as the primary key for a table.
  To create a compound primary key, pass an array of column names:
  `table.primary(['column1', 'column2'])`.
  Constraint name defaults to `tablename_pkey` unless `constraintName` is specified.
  On Amazon Redshift, all columns included in a primary key must be not nullable.

uniquecolumn.unique()

Sets the column as unique. On Amazon Redshift, this constraint is not enforced, but it is used by the query planner.

column.references(column)

Sets the "column" that the current column references as a foreign key. "column" can either be in "tableName.columnName" syntax, or just the column name followed by a call to inTable to specify the table.

column.inTable(table)

Sets the "table" where the foreign key column is located after calling column.references.

column.onDelete(command)

Sets the SQL command to be run "onDelete".

column.onUpdate(command)

Sets the SQL command to be run "onUpdate".
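
Putting references, inTable, onDelete and onUpdate together, a foreign key column might be declared as follows (table and column names are illustrative):

```javascript
knex.schema.createTable('orders', function(t) {
  t.increments('id').primary();
  t.integer('user_id')
    .unsigned()
    .references('id')
    .inTable('users')
    .onDelete('CASCADE')   // delete orders when the referenced user is deleted
    .onUpdate('CASCADE');
  // equivalent shorthand using the "table.column" syntax:
  // t.integer('user_id').unsigned().references('users.id');
});
```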

column.defaultTo(value)

Sets the default value for the column on an insert.

column.unsigned()

Specifies an integer as unsigned. No-op if this is chained off of a non-integer field.

column.notNullable()

Adds a not-null constraint to the column being created.

column.nullable()

Default on column creation, this explicitly sets a field to be nullable.

column.first()

Sets the column to be inserted in the first position; only used in MySQL alter tables.

column.after(field)

Sets the column to be inserted after another column; only used in MySQL alter tables.
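
On MySQL, first and after might be combined in an alter table like so (column names are illustrative):

```javascript
knex.schema.alterTable('users', function(t) {
  t.string('nickname').first();                // place in the first position
  t.string('middle_name').after('first_name'); // place after first_name
});
```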

column.comment(value)

Sets the comment for a column.

knex.schema.createTable('accounts', function(t) {
  t.increments().primary();
  t.string('email').unique().comment('This is the email field');
});
column.collate(collation)

Sets the collation for a column (only works in MySQL). Here is a list of all available collations: https://dev.mysql.com/doc/refman/5.5/en/charset-charsets.html

knex.schema.createTable('users', function(t) {
  t.increments();
  t.string('email').unique().collate('utf8_unicode_ci');
});

Raw

Sometimes you may need to use a raw expression in a query. A raw query object may be injected almost anywhere in a query, and using proper bindings ensures your values are escaped properly, preventing SQL-injection attacks.

Raw Parameter Binding:

One can parameterize the sql given to knex.raw(sql, bindings). Parameters can be positional or named. One can also choose whether a parameter should be treated as a value or as a sql identifier, e.g. in the case of a 'TableName.ColumnName' reference.

knex('users')
.select(knex.raw('count(*) as user_count, status'))
.where(knex.raw(1))
.orWhere(knex.raw('status <> ?', [1]))
.groupBy('status')
Outputs:
select count(*) as user_count, status from `users` where 1 or status <> 1 group by `status`

Positional bindings ? are interpreted as values and ?? are interpreted as identifiers.

knex('users').where(knex.raw('?? = ?', ['user.name', 1]))
Outputs:
select * from `users` where `user`.`name` = 1

Named bindings such as :name are interpreted as values and :name: interpreted as identifiers. Named bindings are processed so long as the value is anything other than undefined.

knex('users')
.where(knex.raw(':name: = :thisGuy or :name: = :otherGuy or :name: = :undefinedBinding', {
  name: 'users.name',
  thisGuy: 'Bob',
  otherGuy: 'Jay',
  undefinedBinding: undefined
}))
Error:
Undefined binding(s) detected when compiling RAW query: `users`.`name` = ? or `users`.`name` = ? or `users`.`name` = :undefinedBinding

For simpler queries where one only has a single binding, .raw can accept said binding as its second parameter.

knex('users')
.where(
  knex.raw('LOWER("login") = ?', 'knex')
)
.orWhere(
  knex.raw('accesslevel = ?', 1)
)
.orWhere(
  knex.raw('updtime = ?', new Date(Date.UTC(2016, 0, 1)))
)

Note that due to ambiguity, arrays must be passed as arguments within a containing array.

knex.raw('select * from users where id in (?)', [1, 2, 3]);
// Error: Expected 3 bindings, saw 1

knex.raw('select * from users where id in (?)', [[1, 2, 3]])
Outputs:
select * from users where id in (1, 2, 3)

To prevent replacement of ? one can use the escape sequence \\?.

knex.select('*').from('users').where('id', '=', 1).whereRaw('?? \\? ?', ['jsonColumn', 'jsonKey'])
Outputs:
select * from `users` where `id` = 1 and `jsonColumn` ? 'jsonKey'

To prevent replacement of named bindings one can use the escape sequence \\:.

knex.select('*').from('users').whereRaw(":property: = '\\:value' OR \\:property: = :value", {
  property: 'name',
  value: 'Bob'
})
Outputs:
select * from `users` where `name` = ':value' OR :property: = 'Bob'

Raw Expressions:

Raw expressions are created by using knex.raw(sql, [bindings]) and passing this as a value for any value in the query chain.

knex('users')
.select(knex.raw('count(*) as user_count, status'))
.where(knex.raw(1))
.orWhere(knex.raw('status <> ?', [1]))
.groupBy('status')
Outputs:
select count(*) as user_count, status from `users` where 1 or status <> 1 group by `status`

Raw Queries:

knex.raw may also be used to build a full query and execute it, just as a standard query builder query would be executed. The benefit is that it uses the connection pool and provides a standard interface across the different client libraries.

knex.raw('select * from users where id = ?', [1]).then(function(resp) { ... });

Note that the response will be whatever the underlying sql library would typically return on a normal query, so you may need to look at the documentation for the base library the queries are executing against to determine how to handle the response.

Wrapped Queries:

The raw query builder also comes with a wrap method, which allows wrapping the query in a value:

var subcolumn = knex.raw('select avg(salary) from employee where dept_no = e.dept_no')
.wrap('(', ') avg_sal_dept');

knex.select('e.lastname', 'e.salary', subcolumn)
.from('employee as e')
.whereRaw('dept_no = e.dept_no')
Outputs:
select `e`.`lastname`, `e`.`salary`, (select avg(salary) from employee where dept_no = e.dept_no) avg_sal_dept from `employee` as `e` where dept_no = e.dept_no

Note that the example above can be achieved more easily using the as method.

var subcolumn = knex.avg('salary')
.from('employee')
.whereRaw('dept_no = e.dept_no')
.as('avg_sal_dept');

knex.select('e.lastname', 'e.salary', subcolumn)
.from('employee as e')
.whereRaw('dept_no = e.dept_no')
Outputs:
select `e`.`lastname`, `e`.`salary`, (select avg(`salary`) from `employee` where dept_no = e.dept_no) as `avg_sal_dept` from `employee` as `e` where dept_no = e.dept_no

Utility

A collection of utilities that the knex library provides for convenience.

Batch Insert

The batchInsert utility will insert a batch of rows wrapped inside a transaction (which is automatically created unless explicitly given a transaction using transacting), at a given chunkSize.

It's primarily designed to be used when you have thousands of rows to insert into a table.

By default, the chunkSize is set to 1000.

BatchInsert also allows for returning values and supplying transactions using transacting.

var rows = [{...}, {...}];
var chunkSize = 30;
knex.batchInsert('TableName', rows, chunkSize)
  .returning('id')
  .then(function(ids) { ... })
  .catch(function(error) { ... });

knex.transaction(function(tr) {
  return knex.batchInsert('TableName', rows, chunkSize)
    .transacting(tr)
  })
  .then(function() { ... })
  .catch(function(error) { ... });

Interfaces

Knex.js provides several options to deal with query output. The following methods are present on the query builder, schema builder, and the raw builder:

Promises

Promises are the preferred way of dealing with queries in knex, as they allow you to return values from a fulfillment handler, which in turn become the value of the promise. The main benefit of promises is the ability to catch thrown errors without crashing the node app, making your code behave like a .try / .catch / .finally would in synchronous code.

knex.select('name')
.from('users')
.where('id', '>', 20)
.andWhere('id', '<', 200)
.limit(10)
.offset(x)
.then(function(rows) {
  return _.pluck(rows, 'name');
})
.then(function(names) {
  return knex.select('id').from('nicknames').whereIn('nickname', names);
})
.then(function(rows) {
  console.log(rows);
})
.catch(function(error) {
  console.error(error)
});
.then(onFulfilled, [onRejected])

Coerces the current query builder chain into a promise state, accepting the resolve and reject handlers as specified by the Promises/A+ spec. As stated in the spec, more than one call to the then method for the current query chain will resolve with the same value, in the order they were called; the query will not be executed multiple times.

knex.select('*')
.from('users')
.where({name: 'Tim'})
.then(function(rows) {
  return knex.insert({user_id: rows[0].id, name: 'Test'}, 'id').into('accounts');
})
.then(function(id) {
  console.log('Inserted Account ' + id);
})
.catch(function(error) { console.error(error); });
.catch(onRejected)

Coerces the current query builder into a promise state, catching any error thrown by the query, the same as calling .then(null, onRejected).

return knex.insert({id: 1, name: 'Test'}, 'id')
.into('accounts')
.catch(function(error) {
  console.error(error);
}).then(function() {
  return knex.select('*')
    .from('accounts')
    .where('id', 1);
}).then(function(rows) {
  console.log(rows[0]);
})
.catch(function(error) {
  console.error(error);
});
.tap(sideEffectHandler)

Executes side effects on the resolved response, ultimately returning a promise that fulfills with the original value. A thrown error or rejected promise will cause the promise to transition into a rejected state.

// Using only .then()
query.then(function(x) {
  doSideEffectsHere(x);
  return x;
});

// Using .tap()
promise.tap(doSideEffectsHere);
.map(mapper)

A passthrough to Bluebird's map implementation with the result set.

knex.select('name').from('users').limit(10).map(function(row) {
  return row.name;
})
.then(function(names) { console.log(names); })
.catch(function(e) { console.error(e); });
.reduce(reducer, [initialValue])

A passthrough to Bluebird's reduce implementation with the result set.

knex.select('name').from('users').limit(10).reduce(function(memo, row) {
  memo.names.push(row.name);
  memo.count++;
  return memo;
}, {count: 0, names: []})
.then(function(obj) { console.log(obj); })
.catch(function(e) { console.error(e); });
.bind(context)

A passthrough to Bluebird's bind method which sets the context value (this) for the returned promise.

knex.select('name').from('users')
.limit(10)
.bind(console)
.then(console.log)
.catch(console.error)
.return(value)

Shorthand for calling .then(function() { return value }).

// Without return:
knex.insert(values).into('users')
  .then(function() {
    return {inserted: true};
  });

knex.insert(values).into('users').return({inserted: true});

Callbacks

.asCallback(callback)

If you'd prefer a callback interface over promises, the asCallback function accepts a standard node style callback for executing the query chain. Note that as with the then method, subsequent calls to the same query chain will return the same result.

knex.select('name').from('users')
.where('id', '>', 20)
.andWhere('id', '<', 200)
.limit(10)
.offset(x)
.asCallback(function(err, rows) {
  if (err) return console.error(err);
  knex.select('id').from('nicknames')
    .whereIn('nickname', _.pluck(rows, 'name'))
    .asCallback(function(err, rows) {
      if (err) return console.error(err);
      console.log(rows);
    });
});

Streams

Streams are a powerful way of piping data through as it comes in, rather than all at once. You can read more about streams here at substack's stream handbook. See the following for example uses of stream & pipe. If you wish to use streams with PostgreSQL, you must also install the pg-query-stream module. On an HTTP server, make sure to manually close your streams if a request is aborted.

.stream([options], [callback])

If called with a callback, the callback is passed the stream and a promise is returned. Otherwise, the readable stream is returned.

// Retrieve the stream:
var stream = knex.select('*').from('users').stream();
stream.pipe(writableStream);

// With options:
var stream = knex.select('*').from('users').stream({highWaterMark: 5});
stream.pipe(writableStream);

// Use as a promise:
var stream = knex.select('*').from('users')
  .where(knex.raw('id = ?', [1]))
  .stream(function(stream) {
    stream.pipe(writableStream);
  })
  .then(function() { /* ... */ })
  .catch(function(e) { console.error(e); });
.pipe(writableStream)

Pipe a stream for the current query to a writableStream.

var stream = knex.select('*').from('users').pipe(writableStream);

Events

query

A query event is fired just before a query takes place, providing data about the query, including the connection's __knexUid property and any other information about the query as described in toSQL. Useful for logging all queries throughout your application.

knex.select('*')
.from('users')
.on('query', function(data) {
  app.log(data);
})
.then(function() {
  // ...
});
query-error

A query-error event is fired when an error occurs when running a query, providing the error object and data about the query, including the connection's __knexUid property and any other information about the query as described in toSQL. Useful for logging all query errors throughout your application.

knex.select(['NonExistentColumn'])
.from('users')
.on('query-error', function(error, obj) {
  app.log(error);
})
.then(function() { /* ... */ })
.catch(function(error) {
  // Same error object as the query-error event provides.
});
query-response

A query-response event is fired when a successful query has been run, providing the response of the query and data about the query, including the connection's __knexUid property and any other information about the query as described in toSQL, and finally the query builder used for the query.

knex.select('*')
.from('users')
.on('query-response', function(response, obj, builder) {
  // ...
})
.then(function(response) {
  // Same response as the emitted event
})
.catch(function(error) { });
start

A start event is fired right before a query-builder is compiled. Note: while this event can be used to alter a builder's state prior to compilation, it is not recommended. Future goals include ways of doing this in a different manner, such as hooks.

knex.select('*')
.from('users')
.on('start', function(builder) {
  builder
  .where('IsPrivate', 0)
})
.then(function(rows) {
  // Only contains rows where IsPrivate = 0
})
.catch(function(error) { });

Other

.toString()

Returns the query as a string, filled out with the correct values based on bindings, etc. Useful for debugging, but should not be used to create queries for running against the database.

var toStringQuery = knex.select('*').from('users').where('id', 1).toString();

console.log(toStringQuery);
// Outputs: select * from "users" where "id" = 1
.toSQL() and .toSQL().toNative()

Returns the compiled sql and bindings for the query. Useful for debugging and for building queries to run manually with the DB driver. .toSQL().toNative() outputs an object with the sql string and bindings in the dialect's format, in the same way that knex internally sends them to the underlying DB driver.

knex.select('*').from('users')
  .where(knex.raw('id = ?', [1]))
  .toSQL()
// Outputs:
// {
//   bindings: [1],
//   method: 'select',
//   sql: 'select * from "users" where id = ?',
//   options: undefined,
//   toNative: function () {}
// }

knex.select('*').from('users')
  .where(knex.raw('id = ?', [1]))
  .toSQL().toNative()
// Outputs for postgresql dialect:
// {
//   bindings: [1],
//   sql: 'select * from "users" where id = $1',
// }

Migrations

Migrations allow for you to define sets of schema changes so upgrading a database is a breeze.

Migration CLI

The migration CLI is bundled with the knex install, and is driven by the node-liftoff module. To install globally, run:

$ npm install knex -g

Migrations use a knexfile, which specifies various configuration settings for the module. To create a new knexfile, run the following:

$ knex init

# or for .coffee

$ knex init -x coffee

This will create a sample knexfile.js - the file which contains our various database configurations. Once you have a knexfile.js, you can use the migration tool to create migration files in the specified directory (default migrations). Creating new migration files can be achieved by running:

$ knex migrate:make migration_name
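
A generated migration file exports an up and a down function; a minimal sketch (the table and columns are illustrative) might look like:

```javascript
exports.up = function(knex, Promise) {
  return knex.schema.createTable('products', function(t) {
    t.increments('id').primary();
    t.string('name').notNullable();
    t.timestamps();
  });
};

exports.down = function(knex, Promise) {
  return knex.schema.dropTable('products');
};
```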

Once you have finished writing the migrations, you can update the database matching your NODE_ENV by running:

$ knex migrate:latest

You can also pass the --env flag or set NODE_ENV to select an alternative environment:

$ knex migrate:latest --env production

# or

$ NODE_ENV=production knex migrate:latest

To rollback the last batch of migrations:

$ knex migrate:rollback

Seed files

Seed files allow you to populate your database with test or seed data independent of your migration files.

Seed CLI

To create a seed file, run:

$ knex seed:make seed_name

Seed files are created in the directory specified in your knexfile.js for the current environment. A sample seed configuration looks like:

development: {
  client: ...,
  connection: { ... },
  seeds: {
      directory: './seeds/dev'
  }
}

If no seeds.directory is defined, files are created in ./seeds. Note that the seed directory needs to be a relative path; absolute paths are not supported (nor are they good practice).

To run seed files, execute:

$ knex seed:run

Seed files are executed in alphabetical order. Unlike migrations, every seed file will be executed when you run the command. You should design your seed files to reset tables as needed before inserting data.
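
A seed file exports a single seed function; a minimal sketch (the table and rows are illustrative) that resets the table before inserting:

```javascript
exports.seed = function(knex, Promise) {
  // delete ALL existing entries, then insert fresh seed data
  return knex('users').del()
    .then(function() {
      return knex('users').insert([
        {id: 1, name: 'alice'},
        {id: 2, name: 'bob'}
      ]);
    });
};
```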

knexfile.js

A knexfile.js or knexfile.coffee generally contains all of the configuration for your database. It can optionally provide different configuration for different environments. You may pass a --knexfile option to any of the command line statements to specify an alternate path to your knexfile.

Basic configuration:

module.exports = {
  client: 'pg',
  connection: process.env.DATABASE_URL || { user: 'me', database: 'my_app' }
};

Environment configuration:

module.exports = {
  development: {
    client: 'pg',
    connection: { user: 'me', database: 'my_app' }
  },
  production: { client: 'pg', connection: process.env.DATABASE_URL }
};

Custom migration:

You may provide a custom migration stub to be used in place of the default option.

module.exports = {
  client: 'pg',
  migrations: {
    stub: 'migration.stub'
  }
};

Migration API

knex.migrate is the class utilized by the knex migrations CLI.

Each method takes an optional config object, which may specify the following properties:

  • directory: a relative path to the directory containing the migration files (default ./migrations)
  • extension: the file extension used for the generated migration files (default js)
  • tableName: the table name used for storing the migration state (default knex_migrations)
  • disableTransactions: don't run migrations inside transactions (default false)
  • loadExtensions: array of file extensions which knex will treat as migrations. For example, if you have typescript transpiled into javascript in the same folder, you want to execute only javascript migrations. In this case, set loadExtensions to ['.js'] (Notice the dot!) (default ['.co', '.coffee', '.eg', '.iced', '.js', '.litcoffee', '.ls', '.ts'])

Transactions in migrations

By default, each migration is run inside a transaction. Whenever needed, one can disable transactions for all migrations via the common migration config option config.disableTransactions, or per-migration by exposing a boolean property config.transaction from a migration file:

exports.up = function(knex, Promise) { /* ... */ };

exports.down = function(knex, Promise) { /* ... */ };

exports.config = { transaction: false };

The same config property can be used for enabling transaction per-migration in case the common configuration has disableTransactions: true.

knex.migrate.make(name, [config])

Creates a new migration, with the name of the migration being added.

knex.migrate.latest([config])

Runs all migrations that have not yet been run.

If you need to run something only after all migrations have finished their execution, you can do something like this:

knex.migrate.latest()
.then(function() {
  return knex.seed.run();
})
.then(function() {
  // migrations are finished
});
knex.migrate.rollback([config])

Rolls back the latest migration group.

knex.migrate.currentVersion([config])

Retrieves and returns the current migration version, as a promise. If no migrations have been run yet, returns "none" as the value for the currentVersion.

Notes about locks

A lock system is in place to prevent multiple processes from running the same migration batch at the same time. When a batch of migrations is about to be run, the migration system first tries to get a lock using a SELECT ... FOR UPDATE statement (preventing race conditions). If it can get a lock, the migration batch will run. If it can't, it will wait until the lock is released.

Please note that if your process unfortunately crashes, the lock will have to be removed manually in order to let migrations run again. The locks are saved in a table called "tableName_lock"; it has a single column called is_locked that you need to set to 0 in order to release the lock.

Seed API

knex.seed is the class utilized by the knex seed CLI.

Each method takes an optional config object, which may specify the following properties:

  • directory: a relative path to the directory containing the seed files (default ./seeds)
  • loadExtensions: array of file extensions which knex will treat as seeds. For example, if you have typescript transpiled into javascript in the same folder, you want to execute only javascript seeds. In this case, set loadExtensions to ['.js'] (Notice the dot!) (default ['.co', '.coffee', '.eg', '.iced', '.js', '.litcoffee', '.ls', '.ts'])

Methods

knex.seed.make(name, [config])

Creates a new seed file, with the name of the seed file being added.

knex.seed.run([config])

Runs all seed files for the current environment.

Support

Have questions about the library? Come join us in the #bookshelf freenode IRC channel for support on knex.js and bookshelf.js, or post an issue on Stack Overflow or in the GitHub issue tracker.

F.A.Q.

How do I help contribute?
Glad you asked! Pull requests, or feature requests, though not always implemented, are a great way to help make Knex even better than it is now. If you're looking for something specific to help out with, there are a number of unit tests that aren't implemented yet - the library could never have too many of those. If you want to submit a fix or feature, take a look at the Contributing readme on GitHub and go ahead and open a ticket.

How do I debug?
Knex is beginning to make use of the debug module internally, so you can set the DEBUG environment variable to knex:* to see all debugging, or select individual namespaces DEBUG=knex:query,knex:tx to constrain a bit.

If you pass {debug: true} as one of the options in your initialize settings, you can see all of the query calls being made. Sometimes you need to dive a bit further into the various calls and see what all is going on behind the scenes. I'd recommend node-inspector, which allows you to debug code with debugger statements like you would in the browser.

An error handler registered at the start of your application code will catch any errors not otherwise caught in the normal promise chain handlers, which is very helpful in debugging.

How do I run the test suite?
The test suite looks for an environment variable called KNEX_TEST for the path to the database configuration. If you run the following command:

$ export KNEX_TEST='/path/to/your/knex_config.js'
$ npm test

replacing with the path to your config file, and the config file is valid, the test suite should run properly.

My tests are failing because of a slow DB connection and short test timeouts! How do I extend the test timeouts?
Sometimes, e.g. when running CI on Travis, the test suite's default timeout of 5 seconds might be too short. In such cases an alternative test timeout value in milliseconds can be specified using the KNEX_TEST_TIMEOUT environment variable.

$ export KNEX_TEST_TIMEOUT=30000
$ npm test

Can I use Knex outside of Node.js?
Yes. While the WebSQL spec is deprecated, there is still an adapter that provides support. You will need to use a build tool like browserify or webpack for a browser build.

I found something broken with Amazon Redshift! Can you help?
Because there is no testing platform available for Amazon Redshift, be aware that it is included as a dialect but is unsupported. With that said, please file an issue if something is found to be broken that is not noted in the documentation, and we will do our best.

Change Log

0.14.4 - 19 Feb, 2018
Bug fixes:
  • containsUndefined only validate plain objects. Fixes #1898 (#2468)
  • Add warning when using .returning() in sqlite3. Fixes #1660 (#2471)
  • Throw an error if .update() results in an empty sql (#2472)
  • Removed unnecessary createTableIfNotExist and replaced with createTable (#2473)
New Features:
  • Allow calling lock procedures (such as forUpdate) outside of transaction. Fixes #2403. (#2475)
  • Added test and documentation for Event 'start' (#2488)
Test / internal changes
  • Added stress test, which uses TCP proxy to simulate flaky connection #2460
  • Removed old docker tests, new stress test setup (#2474)
  • Removed unused property __cid on the base client (#2481)
  • Changed rm to rimraf in 'npm run dev' (#2483)
  • Changed babel preset and use latest node as target when running dev (#2484)
0.14.3 - 8 Feb, 2018
Bug fixes:
  • Use tarn as pool instead of generic-pool, which has had various problems #2450
  • Fixed mysql issue where add columns failed if using both after and collate #2432
  • CLI sets exit-code 1 if the command supplied was not parseable #2358
  • Set toNative() to be not enumerable #2388
  • Use wrapIdentifier in columnInfo. fixes #2402 #2405
  • Fixed a bug when using .returning (OUTPUT) in an update query with joins in MSSQL #2399
  • Better error message when running migrations fail before even starting run migrations #2373
  • Read oracle's UV_THREADPOOL_SIZE env variable correctly #2372
  • Added decimal variable precision / scale support #2353
New Features:
  • Added queryContext to schema and query builders #2314
  • Added redshift dialect #2233
  • Added warning when one uses .createTableIfNotExist and deprecated it from docs #2458
Test / internal changes
  • Update dependencies and fix ESLint warnings accordingly #2433
  • Disable oracledb tests from non LTS nodes #2407
  • Update dependencies #2422
0.14.2 - 24 Nov, 2017
Bug fixes:
  • Fix sqlite3 truncate method to work again #2348
0.14.1 - 19 Nov, 2017
Bug fixes:
  • Fix support for multiple schema names in in postgres searchPath #2340
  • Fix create new connection to pass errors to query instead of retry loop #2336
  • Fix recognition of connections closed by server #2341
0.14.0 - 6 Nov, 2017
Breaking Changes:
  • Remove sorting of statements from update queries #2171
  • Updated allowed operator list with some missing operators and make all to lower case #2239
  • Use node-mssql 4.0.0 #2029
  • Support for enum columns to SQlite3 dialect #2055
  • Better identifier quoting in Sqlite3 #2087
  • Migration Errors - Display filename of the failed migration #2272
Other Features:
  • Post processing hook for query result #2261
  • Build native SQL where binding parameters are dialect specific #2237
  • Configuration option to allow override identifier wrapping #2217
  • Implemented select syntax: select({ alias: 'column' }) #2227
  • Allows to filter seeds and migrations by extensions #2168
  • Reconnecting after database server disconnect/reconnect + tests #2017
  • Removed filtering from allowed configuration settings of mysql2 #2040
  • Allow raw expressions in query builder aggregate methods #2257
  • Throw error on non-string table comment #2126
  • Support for mysql stream query options #2301
Bug fixes:
  • Allow update queries and passing query builder to with statements #2298
  • Fix escape table name in SQLite columnInfo call #2281
  • Preventing containsUndefined from going to recursion loop #1711
  • Fix error caused by call to knex.migrate.currentVersion #2123
  • Upgraded generic-pool to 3.1.7 (did resolve some memory issues) #2208
  • Allow using NOT ILIKE operator #2195
  • Fix postgres searchPath to be case-sensitive #2172
  • Fix drop of multiple columns in sqlite3 #2107
  • Fix adding multiple columns in Oracle #2115
  • Use selected schema when dropping indices in Postgres. #2105
  • Fix hasTable for MySQL to not do partial matches #2097
  • Fix setting autoTransaction in batchInsert #2113
  • Fix connection error propagation when streaming #2199
  • Fix comments not being applied to increments columns #2243
  • Fix mssql wrong binding order of queries that combine a limit with select raw or update #2066
  • Fixed mysql alter table attributes order #2062
Test / internal changes
  • Update each out-of-date dependency according to david-dm.org #2297
  • Update v8flags to version 3.0.0 #2288
  • Update interpret version #2283
  • Fix debug output typo #2187
  • Docker CI tests #2164
  • Unit test for right/rightOuterJoin combination #2117
  • Unit test for fullOuterJoin #2118
  • Unit tests for table comment #2098
  • Test referencing non-existent column with sqlite3 #2104
  • Unit test for renaming column in postgresql #2099
  • Unit test for cross-join #2102
  • Fix incorrect parameter name #2068
Show Full Changelog
0.13.0 - 29 Apr, 2017
Breaking Changes:
  • Multiple concurrent migration runners now block instead of throwing an error when possible #1962
  • Fixed transaction promise mutation issue #1991
Other Changes:
  • Allow passing version of connected db in configuration file #1993
  • Bugfixes on batchInsert and transactions for mysql/maria #1992
  • Add fetchAsString optional parameter to oracledb dialect #1998
  • fix: escapeObject parameter order for Postgres dialect. #2003
0.12.9 - 23 Mar, 2017

Fixed unhandled exception in batchInsert when the rows to be inserted resulted in duplicate key violation #1880

0.12.8 - 15 Mar, 2017
  • Added clearSelect and clearWhere to query builder #1912
  • Properly close Postgres query streams on error #1935
  • Transactions should never reject with undefined #1970
  • Clear acquireConnectionTimeout if an error occurs when acquiring a connection #1973
0.12.7 - 17 Feb, 2017
Accidental Breaking Change:
  • Ensure that 'client' is provided in knex config object #1822
Other Changes:
  • Support custom foreign key names #1311, #1726
  • Fixed named bindings to work with queries containing :-chars #1890
  • Exposed more promise functions #1896
  • Pass rollback errors to transaction promise in mssql #1885
  • ONLY keyword support for PostgreSQL (for table inheritance) #1874
  • Fixed Mssql update with join syntax #1777
  • Replace migrations and seed for react-native packager #1813
  • Support knexfile, migration and seeds in TypeScript #1769
  • Fix float to integer conversion of decimal fields in MSSQL #1781
  • External authentication capability when using oracledb driver #1716
  • Fixed MSSQL incorrect query build when locks are used #1707
  • Allow to use first method as aliased select #1784
  • Alter column for nullability, type and default value #46, #1759
  • Add more having methods / join clause on methods #1674
  • Compatibility fixes and cleanups #1788, #1792, #1794, #1814, #1857, #1649
0.12.6 - 19 Oct, 2016
  • Address warnings mentioned in #1388 (#1740)
  • Remove postinstall script (#1746)
0.12.5 - 12 Oct, 2016
  • Fix broken 0.12.4 build (removed from npm)
  • Fix #1733, #920, incorrect postgres array bindings
0.12.3 - 9 Oct, 2016
  • Fix #1703, #1694 - connections should be returned to pool if acquireConnectionTimeout is triggered
  • Fix #1710 regression in postgres array escaping
0.12.2 - 27 Sep, 2016
  • Restore pool min: 1 for sqlite3, #1701
  • Fix for connection error after it's closed / released, #1691
  • Fix oracle prefetchRowCount setting, #1675
0.12.1 - 16 Sep, 2016
  • Fix MSSQL sql execution error, #1669
  • Added DEBUG=knex:bindings for debugging query bindings, #1557
0.12.0 - 13 Sep, 2016
  • Remove build / built files, #1616
  • Upgrade to Babel 6, #1617
  • Reference Bluebird module directly, remove deprecated .exec method, #1618
  • Remove documentation files from main repo
  • Fix broken behavior on WebSQL build, #1638
  • Oracle id sequence now handles manual inserts, #906
  • Cleanup PG escaping, fix #1602, #1548
  • Added with to builder for common table expressions, #1599
  • Fix #1619, pluck with explicit column names
  • Switching back to generic-pool for pooling resource management
  • Removed index.html, please direct all PR's for docs against the files in knex/documentation
0.11.10 - 9 Aug, 2016
  • Added CHANGELOG.md for a new documentation builder coming soon, #1615
  • Minor documentation tweaks
  • PG: Fix Uint8Array being considered undefined, #1601
  • MSSQL: Make columnInfo schema dynamic, #1585
0.11.9 - 21 Jul, 2016

Reverted knex client breaking change (commit b74cd69e906), fixes #1587

0.11.8 - 21 Jul, 2016
  • Oracledb dialect #990
  • Documentation fix #1532
  • Allow named bindings to be escaped. #1576
  • Several bugs with MS SQL schema creation and installing from GitHub fixed #1577
  • Fix incorrect escaping of backslashes in SqlString.escape #1545
0.11.7 - 19 Jun, 2016

Add missing dependency. #1516

0.11.6 - 18 Jun, 2016
  • Allow cancellation on timeout (MySQL) #1454
  • Better bigint support. (MSSQL) #1445
  • More consistent handling of undefined values in QueryBuilder#where and Raw. #1459
  • Fix Webpack build. #1447
  • Fix code that triggered Bluebird warnings. #1460, #1489
  • Fix ping function. (Oracle) #1486
  • Fix columnInfo. (MSSQL) #1464
  • Fix ColumnCompiler#binary. (MSSQL) #1464
  • Allow connection strings that do not contain a password. #1473
  • Fix race condition in seed stubs. #1493
  • Give each query a UUID. #1510
0.11.5 - 26 May, 2016

Bugfix: Using Raw or QueryBuilder as a binding to Raw now works as intended

0.11.4 - 22 May, 2016
  • Bugfix: Inconsistency of .primary() and .dropPrimary() between dialects #1430
  • Feature: Allow using custom Client/Dialect (you can pass your own client in knex config) #1428
  • Docs: Add documentation for .dropTimestamps #1432
  • Bugfix: Fixed passing undefined fields for insert/update inside transaction #1423
  • Feature: batchInsert with existing transaction #1354
  • Build: eslint instead of jshint #1416
  • Bugfix: Pooled connections not releasing #1382
  • Bugfix: Support passing knex.raw to .whereNot #1402
  • Docs: Fixed list of dialects which supports .returning #1398
  • Bugfix: rename table does not fail anymore even with schema defined #1403
0.11.3 - 14 May, 2016

Support nested joins. #1397

0.11.2 - 14 May, 2016
  • Prevent crash on knex seed:make. #1389
  • Improvements to batchInsert. #1391
  • Improvements to inserting DEFAULT with undefined binding. #1396
  • Correct generated code for adding/dropping multiple columns. (MSSQL) #1401
0.11.1 - 6 May, 2016

Fix error in CLI command migrate:make. #1386

0.11.0 - 5 May, 2016
Breaking Changes:
  • QueryBuilder#orWhere joins multiple arguments with AND. #1164
Other Changes:
  • Collate for columns. (MySQL) #1147
  • Add QueryBuilder#timeout, Raw#timeout. #1201 #1260
  • Exit with error code when appropriate. #1238
  • MSSQL connection accepts host as an alias for server in accordance with other dialects. #1239
  • Add query-response event. #1231
  • Correct behaviour of sibling nested transactions. #1226
  • Support RETURNING with UPDATE. (Oracle) #1253
  • Throwing callbacks from transactions automatically rolls them back. #1257
  • Fixes to named Raw bindings. #1251
  • timestamps accepts an argument to set NOT NULL and default to current timestamp.
  • Add TableBuilder#inherits for PostgreSQL. #601
  • Wrap index names. #1289
  • Restore coffeescript knexfiles and configurations. #1292
  • Add andWhereBetween and andWhereNotBetween #1132
  • Fix valueForUndefined failure. #1269
  • renameColumn no longer drops default value or nullability. #1326
  • Correct MySQL2 error handling. #1315
  • Fix MSSQL createTableIfNotExists. #1362
  • Fix MSSQL URL parsing. #1342
  • Update Lodash to 4.6.0 #1242
  • Update Bluebird to 3.3.4 #1279
0.10.0 - 15 Feb, 2016
Breaking Changes:
  • insert and update now ignore undefined values. Backward compatibility is provided through the option useNullAsDefault. #1174, #1043
Other Changes:
  • Add countDistinct, avgDistinct and sumDistinct. #1046
  • Add schema.jsonb. Deprecated schema.json(column, true). #991
  • Support binding identifiers with ??. #1103
  • Restore query event when triggered by transactions. #855
  • Correct question mark escaping in rendered queries. #519, #1058
  • Add per-dialect escaping, allowing quotes to be escaped correctly. #886, #1095
  • Add MSSQL support. #1090
  • Add migration locking. #1094
  • Allow column aliases to contain .. #1181
  • Add batchInsert. #1182
  • Support non-array arguments to knex.raw.
  • Global query-error event. #1163
  • Better support for Mysql2 dialect options. #980
  • Support for acquireConnectionTimeout, defaulting to 60 seconds, to prevent #1040 from happening. #1177
  • Fixed constraint name escaping when dropping a constraint. #1177
  • Show also .raw queries in debug output. #1169
  • Support for cli to use basic configuration without specific environment set. #1101
0.9.0 - Nov 2, 2015
  • Fix error when merging knex.raw instances without arguments. #853
  • Fix error that caused the connection to time out while streaming. #849
  • Correctly parse SSL query parameter for PostgreSQL. #852
  • Pass compress option to MySQL2. #843
  • Schema: Use timestamp with timezone by default for time, datetime and timestamp for Oracle. #876
  • Add QueryBuilder#modify #881
  • Add LiveScript and Earl Grey support for seeds and migrations.
  • Add QueryBuilder#withSchema #518
  • Allow escaping of ? in knex.raw queries. #946
  • Allow 0 in join clause. #953
  • Add migration config to allow disabling/enabling transactions per migration. #834
0.8.6 - May 20, 2015

Fix for several transaction / migration issues, #832, #833, #834, #835

0.8.5 - May 14, 2015

Pool should be initialized if no pool options are specified

0.8.4 - May 13, 2015

Pool should not be initialized if {max: 0} is sent in config options

0.8.3 - May 2, 2015

Alias postgresql -> postgres in connection config options

0.8.2 - May 1, 2015

Fix regression in using query string in connection config

0.8.1 - May 1, 2015
  • Warn rather than error when implicit commits wipe out savepoints in mysql / mariadb, #805.
  • Fix for incorrect seed config reference, #804
0.8.0 - Apr 30, 2015
New Features:
  • Fixes several major outstanding bugs with the connection pool, switching to Pool2 in place of generic-pool-redux
  • strong-oracle module support
  • Nested transactions automatically become savepoints, with commit & rollback releasing or rolling back the current savepoint.
  • Database seed file support, #391
  • Improved support for sub-raw queries within raw statements
  • Migrations are now wrapped in transactions where possible
  • Subqueries supported in insert statements, #627
  • Support for nested having, #572
  • Support object syntax for joins, similar to "where" #743
Major Changes:
  • Transactions are immediately invoked as A+ promises, #470 (this is a feature and should not actually break anything in practice)
  • Heavy refactoring internal APIs (public APIs should not be affected)
Other Changes:
  • Allow mysql2 to use non-default port, #588
  • Support creating & dropping extensions in PostgreSQL, #540
  • CLI support for knexfiles that do not provide environment keys, #527
  • Added sqlite3 dialect version of whereRaw/andWhereRaw (#477)
0.7.5 - Mar 9, 2015

Fix bug in validateMigrationList, (#697)

0.7.4 - Feb 25, 2015
  • Fix incorrect order of query parameters when using subqueries, #704
  • Properly handle limit 0, (#655)
  • Apply promise args from then instead of explicitly passing.
  • Respect union parameter as last argument (#660).
  • Added sqlite3 dialect version of whereRaw/andWhereRaw (#477).
  • Fix SQLite dropColumn doesn't work for last column (#544).
  • Add POSIX operator support for Postgres (#562)
  • Sample seed files now generated correctly (#391)
0.7.3 - Oct 3, 2014
  • Support for join(table, rawOrBuilder) syntax.
  • Fix for regression in PostgreSQL connection (#516)
0.7.2 - Oct 1, 2014

Fix for regression in migrations

0.7.1 - Oct 1, 2014

Better disconnect handling & pool removal for MySQL clients, #452

0.7.0 - Oct 1, 2014
New Features
  • Oracle support, #419
  • Database seed file support, #391
  • Improved support for sub-raw queries within raw statements
Breaking Changes
  • "collate nocase" no longer used by default in sqlite3 #396
Other Changes
  • Bumping Bluebird to ^2.x
  • Transactions in websql are now a no-op (unsupported) #375
  • Improved test suite
  • knex.fn namespace as function helper (knex.fn.now), #372
  • Better handling of disconnect errors
  • Support for offset without limit, #446
  • Chainable first method for mysql schema, #406
  • Support for empty array in whereIn
  • Create/drop schema for postgres, #511
  • Inserting multiple rows with default values, #468
  • Join columns are optional for cross-join, #508
  • Flag for creating jsonb columns in Postgresql, #500
0.6.22 - July 10, 2014

Bug fix for properly binding postgresql streaming queries, (#363)

0.6.21 - July 9, 2014
  • Bug fix for raw queries not being transaction context aware, (#351).
  • Properly forward stream errors in sqlite3 runner, (#359)
0.6.20 - June 30, 2014

Allow case insensitive operators in sql clauses, (#344)

0.6.19 - June 27, 2014
  • Add groupByRaw / orderByRaw methods, better support for raw statements in group / order (#282).
  • Support more config options for node-mysql2 dialect (#341).
  • CLI help text fix, (#342)
0.6.18 - June 25, 2014

Patch for the stream method; calling without a handler should return the stream, not a promise (#337)

0.6.17 - June 23, 2014

Adding missing map / reduce proxies to bluebird's implementation

0.6.16 - June 18, 2014
  • Increment / decrement returns the number of affectedRows (#330).
  • Allow --cwd option flag to be passed to CLI tool (#326)
0.6.15 - June 14, 2014

Added the as method for aliasing subqueries

0.6.14 - June 14, 2014

whereExists / whereNotExists may now take a query builder instance as well as a callback

0.6.13 - June 12, 2014
  • Fix regression with onUpdate / onDelete in PostgreSQL, (#308).
  • Add missing Promise require to knex.js, unit test for knex.destroy (#314)
0.6.12 - June 10, 2014

Fix for regression with boolean default types in PostgreSQL

0.6.11 - June 10, 2014

Fix for regression with queries containing multiple order by statements in sqlite3

0.6.10 - June 10, 2014

Fix for big regression in memoization of column names from 0.5 -> 0.6

0.6.9 - June 9, 2014

Fix for regression in specificType method

0.6.8 - June 9, 2014

Package.json fix for CLI

0.6.7 - June 9, 2014
  • Adds support for node-mysql2 library.
  • Bundles CLI with the knex install, various related migrate CLI fixes
0.6.6 - June 9, 2014
  • console.warn rather than throw when adding foreignKeys in SQLite3.
  • Add support for dropColumn in SQLite3.
  • Document raw.wrap
0.6.5 - June 9, 2014

Add missing _ require to WebSQL builds

0.6.4 - June 9, 2014

Fix & document schema.raw method

0.6.3 - June 6, 2014
  • Schema methods on transaction object are now transaction aware (#301).
  • Fix for resolved value from transactions, (#298).
  • Undefined columns are not added to builder
0.6.2 - June 4, 2014
  • Fix regression in raw query output, (#297).
  • Fix regression in "pluck" method (#296).
  • Document first method
0.6.1 - June 4, 2014

Reverting to using .npmignore, the "files" syntax forgot the knex.js file

0.6.0 - June 4, 2014
Major Library refactor:
  • Major internal overhaul to clean up the various dialect code.
  • Improved unit test suite.
  • Support for the mariasql driver.
  • More consistent use of raw query bindings throughout the library.
  • Queries are more composable, may be injected in various points throughout the builder.
  • Added streaming interface
  • Deprecated 5 argument join in favor of additional join methods.
  • Added the wrapValue function to allow for array column operations in PostgreSQL (#287).
  • An explicit connection can be passed for any query (#56).
  • Drop column support for sqlite3
  • All schema actions are run sequentially on the same connection if chained.
  • Schema actions can now be wrapped in a transaction
  • .references(tableName.columnName) as shorthand for .references(columnName).inTable(tableName)
  • .join('table.column', 'otherTable.column') as shorthand for .join('table.column', '=', 'otherTable.column')
  • Streams are supported for selects, passing through to the streaming capabilities of node-mysql and node-postgres
  • For More information, see this pull-request
0.5.15 - June 4, 2014

Dropped indexes feature now functions correctly, (#278)

0.5.14 - May 6, 2014

Remove the charset encoding if it's utf8 for mysql, as it's the default but also currently causes some issues in recent versions of node-mysql

0.5.13 - April 2, 2014

Fix regression in array bindings for postgresql (#228)

0.5.12 - Mar 31, 2014

Add more operators for where clauses, including && (#226)

0.5.11 - Mar 25, 2014
  • .where(col, 'is', null) or .where(col, 'is not', null) are now supported (#221).
  • Case insensitive where operators now allowed (#212).
  • Fix bug in increment/decrement truncating to an integer (#210).
  • Disconnected connections are now properly handled & removed from the pool (#206).
  • Internal tweaks to binding concatenations for performance (#207)
0.5.10 - Mar 19, 2014

Add the .exec method to the internal promise shim

0.5.9 - Mar 18, 2014

Remove errored connections from the connection pool (#206), added support for node-postgres-pure (pg.js) (#200)

0.5.8 - Feb 27, 2014

Fix for chaining on forUpdate / forShare, adding map & reduce from bluebird

0.5.7 - Feb 18, 2014

Fix for a null limit / offset breaking query chain (#182)

0.5.6 - Feb 5, 2014

Bump bluebird dependency to ~1.0.0, fixing regression in Bluebird 1.0.2 (#176)

0.5.5 - Jan 28, 2014
  • Fix for the exit code on the migrations cli (#151).
  • The init method in knex.migrate now uses this.config if one isn't passed in (#156)
0.5.4 - Jan 7, 2014

Fix for using raw statements in defaultTo schema builder methods (#146)

0.5.3 - Jan 2, 2014

Fix for incorrectly formed sql when aggregates are used with columns (#144)

0.5.2 - Dec 18, 2013

Adding passthrough "catch", "finally" to bluebird implementations, use bluebird's "nodeify" internally for exec

0.5.1 - Dec 12, 2013
  • The returning in PostgreSQL may now accept * or an array of columns to return. If either of these are passed, the response will be an array of objects rather than an array of values. Updates may also now use a returning value. (#132)
  • Added bigint and bigserial type to PostgreSQL. (#111)
  • Fix for the specificType schema call (#118)
  • Several fixes for migrations, including migration file path fixes, passing a Promise constructor to the migration up and down methods, allowing the "knex" module to be used globally, file ordering on migrations, and other small improvements. (#112-115, #125, #135)
0.5.0 - Nov 25, 2013
  • Initial pass at a migration api.
  • Aggregate methods are no longer aliased as "aggregate", but may now be aliased and have more than one aggregate in a query (#108, #110).
  • Adding bigint and bigserial to PostgreSQL (#111).
  • Bugfix on increment/decrement values (#100).
  • Bugfix with having method (#107).
  • Switched from when.js to bluebird for promise implementation, with shim for backward compatibility.
  • Switched from underscore to lodash, for semver reliability
0.4.13 - Oct 31, 2013

Fix for aggregate methods on toString and clone, (#98)

0.4.12 - Oct 29, 2013

Fix incorrect values passed to float in MySQL and decimal in PostgreSQL

0.4.11 - Oct 15, 2013

Fix potential sql injection vulnerability in orderBy, thanks to @sebgie

0.4.10 - Oct 14, 2013
  • Added forUpdate and forShare for select modes in transactions. (#84)
  • Fix bug where current query chain type is not copied on clone. (#90)
  • Charset and collate are now added as methods on the schema builder. (#89)
  • Added into as an alias of from, for builder syntax of: insert(value).into(tableName)
  • Internal pool fixes. (#90)
0.4.9 - Oct 7, 2013
  • Fix for documentation of hasColumn, ensure that hasColumn works with MySQL (#87).
  • More cleanup of error messages, showing the original error message concatenated with the sql and bindings
0.4.8 - Oct 2, 2013

Connections are no longer pushed back into the pool if they never existed to begin with (#85)

0.4.7 - Sep 27, 2013

The column method is now documented on the builder api, and takes either an individual column or an array of columns to select

0.4.6 - Sep 25, 2013

Standardizing handling of errors for easier debugging, as noted in (#39)

0.4.5 - Sep 24, 2013

Fix for hasTable always returning true in MySQL (#82); fix for sql queries being duplicated when toSql is called multiple times on the schema builder

0.4.4 - Sep 22, 2013

Fix for debug method not properly debugging individual queries

0.4.3 - Sep 18, 2013

Fix for underscore not being defined in various grammar files

0.4.2 - Sep 17, 2013

Fix for an error being thrown when an initialized ClientBase instance was passed into Knex.initialize. pool.destroy now optionally accepts a callback to notify when it has completed draining and destroying all connections

0.4.1 - Sep 16, 2013

Cleanup from the 0.4.0 release, fix a potential exploit in "where" clauses pointed out by Andri Möll, fix for clients not being properly released from the pool #70, fix for where("foo", "<>", null) doing an "IS NULL" statement

0.4.0 - Sep 13, 2013
Breaking Changes:
  • Global state is no longer stored in the library, an instance is returned from Knex.initialize, so you will need to call this once and then reference this knex client elsewhere in your application.
  • Lowercasing of knex.raw, knex.transaction, and knex.schema.
  • Created columns are now nullable by default, unless notNullable is chained as an option.
  • Keys created with increments are now assumed to be unsigned (MySQL) by default.
  • destroyAllNow is no longer called by the library on the process.exit event. If you need to call it explicitly yourself, you may use knex.client.destroyPool
0.2.6 - Aug 29, 2013

Reject the transaction promise if the transaction "commit" fails, (#50)

0.2.5 - Aug 25, 2013

Fix error if a callback isn't specified for exec, (#49)

0.2.4 - Aug 22, 2013

Fix SQLite3 delete not returning affected row count, (#45)

0.2.3 - Aug 22, 2013

Fix insert with default values in PostgreSQL and SQLite3, (#44)

0.2.2 - Aug 20, 2013

Allowing Raw queries to be passed as the primary table names

0.2.1 - Aug 13, 2013

Fix for an array passed to insert being mutated

0.2.0 - Aug 7, 2013
Breaking changes:
  • hasTable now returns a boolean rather than a failed promise.
  • Changed syntax for insert in postgresql, where the id is not assumed on inserts (#18). The second parameter of insert is now required to return an array of insert id's for the last insert.
  • The timestamp method on the schema builder now uses a dateTime rather than a timestamp
0.1.8 - July 7, 2013

Somehow missing the != operator. Using .find rather than .where in getCommandsByName (#22)

0.1.7 - June 12, 2013

Ensures unhandled errors in the exec callback interface are re-thrown

0.1.6 - June 9, 2013

Renaming beforeCreate to afterCreate. Better handling of errors in the connection pooling

0.1.5 - June 9, 2013

Added the ability to specify beforeCreate and beforeDestroy hooks on the initialize's options.pool to perform any necessary database setup/teardown on connections before use (#14). where and having may now accept Knex.Raw instances, for consistency (#15). Added an orHaving method to the builder. The ability to specify bindings on Raw queries has been removed

0.1.4 - May 22, 2013

defaultTo now accepts "false" for boolean columns, allows for empty strings as default values

0.1.3 - May 18, 2013

Enabling table aliases (#11). Fix for issues with transactions not functioning (#12)

0.1.2 - May 15, 2013

Bug fixes for groupBy (#7). Mysql using collation, charset config settings in createTable. Added engine on schemaBuilder specifier (#6). Other doc fixes, tests

0.1.1 - May 14, 2013

Bug fixes for sub-queries, minor changes to initializing "main" instance, adding "pg" as a valid parameter for the client name in the connection settings

0.1.0 - May 13, 2013

Initial Knex release