mongodb
- Version 6.12.0
- Published
- 3.74 MB
- 3 dependencies
- Apache-2.0 license
Install
npm i mongodb
yarn add mongodb
pnpm add mongodb
Overview
The official MongoDB driver for Node.js
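A minimal connect-and-query sketch (the connection string, database, collection name and filter below are placeholders, and the top-level await assumes an ES module or async context):
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017'); // placeholder URI
try {
  // the client connects lazily on the first operation
  const movies = client.db('sample_mflix').collection('movies'); // placeholder names
  const doc = await movies.findOne({ title: 'Back to the Future' });
  console.log(doc);
} finally {
  await client.close();
}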
Index
Variables
- AuthMechanism
- AutoEncryptionLoggerLevel
- BatchType
- Compressor
- CURSOR_FLAGS
- CursorTimeoutMode
- ExplainVerbosity
- GSSAPICanonicalizationValue
- LEGAL_TCP_SOCKET_OPTIONS
- LEGAL_TLS_SOCKET_OPTIONS
- MONGO_CLIENT_EVENTS
- MongoErrorLabel
- ProfilingLevel
- ReadConcernLevel
- ReadPreferenceMode
- ReturnDocument
- ServerApiVersion
- ServerMonitoringMode
- ServerType
- TopologyType
Classes
ClientSession
- abortTransaction()
- advanceClusterTime()
- advanceOperationTime()
- clientOptions
- clusterTime
- commitTransaction()
- defaultTransactionOptions
- endSession()
- equals()
- explicit
- hasEnded
- id
- incrementTransactionNumber()
- inTransaction()
- isPinned
- loadBalanced
- operationTime
- serverSession
- snapshotEnabled
- startTransaction()
- supports
- timeoutMS
- toBSON()
- transaction
- withTransaction()
Collection
- aggregate()
- bsonOptions
- bulkWrite()
- collectionName
- count()
- countDocuments()
- createIndex()
- createIndexes()
- createSearchIndex()
- createSearchIndexes()
- dbName
- deleteMany()
- deleteOne()
- distinct()
- drop()
- dropIndex()
- dropIndexes()
- dropSearchIndex()
- estimatedDocumentCount()
- find()
- findOne()
- findOneAndDelete()
- findOneAndReplace()
- findOneAndUpdate()
- hint
- indexes()
- indexExists()
- indexInformation()
- initializeOrderedBulkOp()
- initializeUnorderedBulkOp()
- insertMany()
- insertOne()
- isCapped()
- listIndexes()
- listSearchIndexes()
- namespace
- options()
- readConcern
- readPreference
- rename()
- replaceOne()
- timeoutMS
- updateMany()
- updateOne()
- updateSearchIndex()
- watch()
- writeConcern
Db
- admin()
- aggregate()
- bsonOptions
- collection()
- collections()
- command()
- createCollection()
- createIndex()
- databaseName
- dropCollection()
- dropDatabase()
- indexInformation()
- listCollections()
- namespace
- options
- profilingLevel()
- readConcern
- readPreference
- removeUser()
- renameCollection()
- runCursorCommand()
- secondaryOk
- setProfilingLevel()
- stats()
- SYSTEM_COMMAND_COLLECTION
- SYSTEM_INDEX_COLLECTION
- SYSTEM_JS_COLLECTION
- SYSTEM_NAMESPACE_COLLECTION
- SYSTEM_PROFILE_COLLECTION
- SYSTEM_USER_COLLECTION
- timeoutMS
- watch()
- writeConcern
ServerDescription
- $clusterTime
- address
- allHosts
- arbiters
- electionId
- equals()
- error
- host
- hostAddress
- hosts
- iscryptd
- isDataBearing
- isReadable
- isWritable
- lastUpdateTime
- lastWriteDate
- logicalSessionTimeoutMinutes
- maxBsonObjectSize
- maxMessageSizeBytes
- maxWireVersion
- maxWriteBatchSize
- me
- minRoundTripTime
- minWireVersion
- passives
- port
- primary
- roundTripTime
- setName
- setVersion
- tags
- topologyVersion
- type
Interfaces
MongoClientOptions
- appName
- auth
- authMechanism
- authMechanismProperties
- authSource
- autoEncryption
- compressors
- connectTimeoutMS
- directConnection
- driverInfo
- forceServerObjectId
- heartbeatFrequencyMS
- journal
- loadBalanced
- localThresholdMS
- maxConnecting
- maxIdleTimeMS
- maxPoolSize
- maxStalenessSeconds
- minHeartbeatFrequencyMS
- minPoolSize
- monitorCommands
- noDelay
- pkFactory
- proxyHost
- proxyPassword
- proxyPort
- proxyUsername
- readConcern
- readConcernLevel
- readPreference
- readPreferenceTags
- replicaSet
- retryReads
- retryWrites
- serverApi
- serverMonitoringMode
- serverSelectionTimeoutMS
- socketTimeoutMS
- srvMaxHosts
- srvServiceName
- ssl
- timeoutMS
- tls
- tlsAllowInvalidCertificates
- tlsAllowInvalidHostnames
- tlsCAFile
- tlsCertificateKeyFile
- tlsCertificateKeyFilePassword
- tlsCRLFile
- tlsInsecure
- w
- waitQueueTimeoutMS
- writeConcern
- wtimeoutMS
- zlibCompressionLevel
Type Aliases
- AbstractCursorEvents
- AcceptedFields
- AddToSetOperators
- AlternativeType
- AnyBulkWriteOperation
- AnyClientBulkWriteModel
- AnyError
- ArrayElement
- ArrayOperator
- AuthMechanism
- AutoEncryptionExtraOptions
- AutoEncryptionLoggerLevel
- AzureKMSProviderConfiguration
- BatchType
- BitwiseFilter
- BSONTypeAlias
- Callback
- ChangeStreamDocument
- ChangeStreamEvents
- ClientBulkWriteModel
- ClientEncryptionDataKeyProvider
- ClientEncryptionSocketOptions
- ClientEncryptionTlsOptions
- ClientSessionEvents
- CommonEvents
- Compressor
- CompressorName
- Condition
- ConnectionEvents
- ConnectionPoolEvents
- CSFLEKMSTlsOptions
- CursorFlag
- CursorTimeoutMode
- DistinctOptions
- DropDatabaseOptions
- DropIndexesOptions
- EnhancedOmit
- EventEmitterWithState
- EventsDescription
- ExplainVerbosity
- ExplainVerbosityLike
- Filter
- FilterOperations
- Flatten
- GCPKMSProviderConfiguration
- GenericListener
- GridFSBucketEvents
- GSSAPICanonicalizationValue
- Hint
- IndexDescriptionCompact
- IndexDescriptionInfo
- IndexDirection
- IndexSpecification
- InferIdType
- IntegerType
- IsAny
- Join
- KeysOfAType
- KeysOfOtherType
- ListIndexesOptions
- ListSearchIndexesOptions
- MatchKeysAndValues
- MongoClientEvents
- MongoErrorLabel
- MonitorEvents
- NestedPaths
- NestedPathsOfType
- NonObjectIdLikeDocument
- NotAcceptedFields
- NumericType
- OIDCCallbackFunction
- OneOrMore
- OnlyFieldsOfType
- OperationTime
- OptionalId
- OptionalUnlessRequiredId
- ProfilingLevel
- ProfilingLevelOptions
- PropertyType
- PullAllOperator
- PullOperator
- PushOperator
- ReadConcernLevel
- ReadConcernLike
- ReadPreferenceLike
- ReadPreferenceMode
- RegExpOrString
- RemoveUserOptions
- ResumeToken
- ReturnDocument
- RunCommandOptions
- RunCursorCommandOptions
- SchemaMember
- ServerApiVersion
- ServerEvents
- ServerMonitoringMode
- ServerSessionId
- ServerType
- SetFields
- SetProfilingLevelOptions
- Sort
- SortDirection
- Stream
- StrictFilter
- StrictMatchKeysAndValues
- StrictUpdateFilter
- SupportedNodeConnectionOptions
- SupportedSocketOptions
- SupportedTLSConnectionOptions
- SupportedTLSSocketOptions
- TagSet
- TopologyEvents
- TopologyType
- UpdateFilter
- W
- WithId
- WithoutId
- WithSessionCallback
- WithTransactionCallback
Variables
variable AuthMechanism
const AuthMechanism: Readonly<{ readonly MONGODB_AWS: 'MONGODB-AWS'; readonly MONGODB_CR: 'MONGODB-CR'; readonly MONGODB_DEFAULT: 'DEFAULT'; readonly MONGODB_GSSAPI: 'GSSAPI'; readonly MONGODB_PLAIN: 'PLAIN'; readonly MONGODB_SCRAM_SHA1: 'SCRAM-SHA-1'; readonly MONGODB_SCRAM_SHA256: 'SCRAM-SHA-256'; readonly MONGODB_X509: 'MONGODB-X509'; readonly MONGODB_OIDC: 'MONGODB-OIDC';}>;
Modifiers
@public
variable AutoEncryptionLoggerLevel
const AutoEncryptionLoggerLevel: Readonly<{ readonly FatalError: 0; readonly Error: 1; readonly Warning: 2; readonly Info: 3; readonly Trace: 4;}>;
Modifiers
@public
variable BatchType
const BatchType: Readonly<{ readonly INSERT: 1; readonly UPDATE: 2; readonly DELETE: 3;}>;
Modifiers
@public
variable Compressor
const Compressor: Readonly<{ readonly none: 0; readonly snappy: 1; readonly zlib: 2; readonly zstd: 3;}>;
Modifiers
@public
variable CURSOR_FLAGS
const CURSOR_FLAGS: readonly [ 'tailable', 'oplogReplay', 'noCursorTimeout', 'awaitData', 'exhaust', 'partial'];
Modifiers
@public
variable CursorTimeoutMode
const CursorTimeoutMode: Readonly<{ readonly ITERATION: 'iteration'; readonly LIFETIME: 'cursorLifetime';}>;
Specifies how timeoutMS is applied to the cursor. Can be either 'cursorLifetime' or 'iteration'. When set to 'iteration', the deadline specified by timeoutMS applies to each call of cursor.next(). When set to 'cursorLifetime', the deadline applies to the life of the entire cursor.
Depending on the type of cursor being used, this option has different default values. For non-tailable cursors, this value defaults to 'cursorLifetime'. For tailable cursors, this value defaults to 'iteration', since tailable cursors, by definition, can have an arbitrarily long lifetime.
Example 1
const cursor = collection.find({}, { timeoutMS: 100, timeoutMode: 'iteration' });
for await (const doc of cursor) {
  // process doc
  // This will throw a timeout error if any of the iterator's `next()` calls takes more than 100ms, but
  // will continue to iterate successfully otherwise, regardless of the number of batches.
}
Example 2
const cursor = collection.find({}, { timeoutMS: 1000, timeoutMode: 'cursorLifetime' });
const docs = await cursor.toArray(); // This entire line will throw a timeout error if all batches are not fetched and returned within 1000ms.
Modifiers
@public
@experimental
variable ExplainVerbosity
const ExplainVerbosity: Readonly<{ readonly queryPlanner: 'queryPlanner'; readonly queryPlannerExtended: 'queryPlannerExtended'; readonly executionStats: 'executionStats'; readonly allPlansExecution: 'allPlansExecution';}>;
Modifiers
@public
variable GSSAPICanonicalizationValue
const GSSAPICanonicalizationValue: Readonly<{ readonly on: true; readonly off: false; readonly none: 'none'; readonly forward: 'forward'; readonly forwardAndReverse: 'forwardAndReverse';}>;
Modifiers
@public
variable LEGAL_TCP_SOCKET_OPTIONS
const LEGAL_TCP_SOCKET_OPTIONS: readonly [ 'autoSelectFamily', 'autoSelectFamilyAttemptTimeout', 'family', 'hints', 'localAddress', 'localPort', 'lookup'];
Modifiers
@public
variable LEGAL_TLS_SOCKET_OPTIONS
const LEGAL_TLS_SOCKET_OPTIONS: readonly [ 'allowPartialTrustChain', 'ALPNProtocols', 'ca', 'cert', 'checkServerIdentity', 'ciphers', 'crl', 'ecdhCurve', 'key', 'minDHSize', 'passphrase', 'pfx', 'rejectUnauthorized', 'secureContext', 'secureProtocol', 'servername', 'session'];
Modifiers
@public
variable MONGO_CLIENT_EVENTS
const MONGO_CLIENT_EVENTS: readonly [ 'connectionPoolCreated', 'connectionPoolReady', 'connectionPoolCleared', 'connectionPoolClosed', 'connectionCreated', 'connectionReady', 'connectionClosed', 'connectionCheckOutStarted', 'connectionCheckOutFailed', 'connectionCheckedOut', 'connectionCheckedIn', 'commandStarted', 'commandSucceeded', 'commandFailed', 'serverOpening', 'serverClosed', 'serverDescriptionChanged', 'topologyOpening', 'topologyClosed', 'topologyDescriptionChanged', 'error', 'timeout', 'close', 'serverHeartbeatStarted', 'serverHeartbeatSucceeded', 'serverHeartbeatFailed'];
Modifiers
@public
variable MongoErrorLabel
const MongoErrorLabel: Readonly<{ readonly RetryableWriteError: 'RetryableWriteError'; readonly TransientTransactionError: 'TransientTransactionError'; readonly UnknownTransactionCommitResult: 'UnknownTransactionCommitResult'; readonly ResumableChangeStreamError: 'ResumableChangeStreamError'; readonly HandshakeError: 'HandshakeError'; readonly ResetPool: 'ResetPool'; readonly PoolRequstedRetry: 'PoolRequstedRetry'; readonly InterruptInUseConnections: 'InterruptInUseConnections'; readonly NoWritesPerformed: 'NoWritesPerformed';}>;
Modifiers
@public
variable ProfilingLevel
const ProfilingLevel: Readonly<{ readonly off: 'off'; readonly slowOnly: 'slow_only'; readonly all: 'all';}>;
Modifiers
@public
variable ReadConcernLevel
const ReadConcernLevel: Readonly<{ readonly local: 'local'; readonly majority: 'majority'; readonly linearizable: 'linearizable'; readonly available: 'available'; readonly snapshot: 'snapshot';}>;
Modifiers
@public
variable ReadPreferenceMode
const ReadPreferenceMode: Readonly<{ readonly primary: 'primary'; readonly primaryPreferred: 'primaryPreferred'; readonly secondary: 'secondary'; readonly secondaryPreferred: 'secondaryPreferred'; readonly nearest: 'nearest';}>;
Modifiers
@public
variable ReturnDocument
const ReturnDocument: Readonly<{ readonly BEFORE: 'before'; readonly AFTER: 'after';}>;
Modifiers
@public
variable ServerApiVersion
const ServerApiVersion: Readonly<{ readonly v1: '1' }>;
Modifiers
@public
variable ServerMonitoringMode
const ServerMonitoringMode: Readonly<{ readonly auto: 'auto'; readonly poll: 'poll'; readonly stream: 'stream';}>;
Modifiers
@public
variable ServerType
const ServerType: Readonly<{ readonly Standalone: 'Standalone'; readonly Mongos: 'Mongos'; readonly PossiblePrimary: 'PossiblePrimary'; readonly RSPrimary: 'RSPrimary'; readonly RSSecondary: 'RSSecondary'; readonly RSArbiter: 'RSArbiter'; readonly RSOther: 'RSOther'; readonly RSGhost: 'RSGhost'; readonly Unknown: 'Unknown'; readonly LoadBalancer: 'LoadBalancer';}>;
An enumeration of server types we know about
Modifiers
@public
variable TopologyType
const TopologyType: Readonly<{ readonly Single: 'Single'; readonly ReplicaSetNoPrimary: 'ReplicaSetNoPrimary'; readonly ReplicaSetWithPrimary: 'ReplicaSetWithPrimary'; readonly Sharded: 'Sharded'; readonly Unknown: 'Unknown'; readonly LoadBalanced: 'LoadBalanced';}>;
An enumeration of topology types we know about
Modifiers
@public
Classes
class AbstractCursor
abstract class AbstractCursor< TSchema = any, CursorEvents extends AbstractCursorEvents = AbstractCursorEvents > extends TypedEventEmitter<CursorEvents> implements AsyncDisposable_2 {}
Modifiers
@public
property CLOSE
static readonly CLOSE: string;
property closed
readonly closed: boolean;
The cursor is closed and all remaining locally buffered documents have been iterated.
property id
readonly id: any;
The cursor has no id until it receives a response from the initial cursor creating command.
It is non-zero for as long as the database has an open cursor.
The initiating command may receive a zero id if the entire result is in the firstBatch.
property killed
readonly killed: boolean;
A killCursors command was attempted on this cursor. This is performed if the cursor id is non-zero.
property loadBalanced
readonly loadBalanced: boolean;
property namespace
readonly namespace: MongoDBNamespace;
property readConcern
readonly readConcern: ReadConcern;
property readPreference
readonly readPreference: ReadPreference;
method [Symbol.asyncIterator]
[Symbol.asyncIterator]: () => AsyncGenerator<TSchema, void, void>;
method addCursorFlag
addCursorFlag: (flag: CursorFlag, value: boolean) => this;
Add a cursor flag to the cursor
Parameter flag
The flag to set, must be one of the following: 'tailable', 'oplogReplay', 'noCursorTimeout', 'awaitData', 'partial'.
Parameter value
The flag boolean value.
method batchSize
batchSize: (value: number) => this;
Set the batch size for the cursor.
Parameter value
The number of documents to return per batch. See find command documentation.
method bufferedCount
bufferedCount: () => number;
Returns the number of documents currently buffered by the cursor
method clone
abstract clone: () => AbstractCursor<TSchema>;
Returns a new uninitialized copy of this cursor, with options matching those that have been set on the current instance
method close
close: (options?: { timeoutMS?: number }) => Promise<void>;
Frees any client-side resources used by the cursor.
method forEach
forEach: (iterator: (doc: TSchema) => boolean | void) => Promise<void>;
Iterates over all the documents for this cursor using the iterator, callback pattern.
If the iterator returns false, iteration will stop.
Parameter iterator
The iteration callback.
Deprecated
- Will be removed in a future release. Use for await...of instead.
method hasNext
hasNext: () => Promise<boolean>;
method map
map: <T = any>(transform: (doc: TSchema) => T) => AbstractCursor<T>;
Map all documents using the provided function. If there is a transform set on the cursor, that will be called first and the result passed to this function's transform.
Parameter transform
The mapping transformation method.
Remarks
**Note** Cursors use null internally to indicate that there are no more documents in the cursor. Providing a mapping function that maps values to null will result in the cursor closing itself before it has finished iterating all documents. This will **not** result in a memory leak, just surprising behavior. For example:
const cursor = collection.find({});
cursor.map(() => null);
const documents = await cursor.toArray();
// documents is always [], regardless of how many documents are in the collection.
Other falsey values are allowed:
const cursor = collection.find({});
cursor.map(() => '');
const documents = await cursor.toArray();
// documents is now an array of empty strings
**Note for Typescript Users:** adding a transform changes the return type of the iteration of this cursor, it **does not** return a new instance of a cursor. This means when calling map, you should always assign the result to a new variable in order to get a correctly typed cursor variable. Take note of the following example:
Example 1
const cursor: FindCursor<Document> = coll.find();
const mappedCursor: FindCursor<number> = cursor.map(doc => Object.keys(doc).length);
const keyCounts: number[] = await mappedCursor.toArray(); // cursor.toArray() still returns Document[]
method maxTimeMS
maxTimeMS: (value: number) => this;
Set a maxTimeMS on the cursor query, allowing for hard timeout limits on queries (Only supported on MongoDB 2.6 or higher)
Parameter value
Number of milliseconds to wait before aborting the query.
method next
next: () => Promise<TSchema | null>;
Get the next available document from the cursor, returns null if no more documents are available.
method readBufferedDocuments
readBufferedDocuments: (number?: number) => NonNullable<TSchema>[];
Returns current buffered documents
method rewind
rewind: () => void;
Rewind this cursor to its uninitialized state. Any options that are present on the cursor will remain in effect. Iterating this cursor will cause new queries to be sent to the server, even if the resultant data has already been retrieved by this cursor.
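As a hedged illustration of rewind (the collection and filter are hypothetical), a cursor can be re-run after it has been fully iterated:
const cursor = collection.find({ status: 'active' }); // hypothetical collection and filter
const firstPass = await cursor.toArray();
cursor.rewind(); // back to the uninitialized state; cursor options are kept
const secondPass = await cursor.toArray(); // issues new queries against the server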
method stream
stream: (options?: CursorStreamOptions) => Readable & AsyncIterable<TSchema>;
method toArray
toArray: () => Promise<TSchema[]>;
Returns an array of documents. The caller is responsible for making sure that there is enough memory to store the results. Note that the array only contains partial results when this cursor had been previously accessed. In that case, cursor.rewind() can be used to reset the cursor.
method tryNext
tryNext: () => Promise<TSchema | null>;
Try to get the next available document from the cursor, or null if an empty batch is returned
method withReadConcern
withReadConcern: (readConcern: ReadConcernLike) => this;
Set the ReadConcern for the cursor.
Parameter readConcern
The new read concern for the cursor.
method withReadPreference
withReadPreference: (readPreference: ReadPreferenceLike) => this;
Set the ReadPreference for the cursor.
Parameter readPreference
The new read preference for the cursor.
class Admin
class Admin {}
The **Admin** class is an internal class that allows convenient access to the admin functionality and commands for MongoDB.
**The Admin class cannot be instantiated directly.**
Example 1
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const admin = client.db().admin();
const dbInfo = await admin.listDatabases();
for (const db of dbInfo.databases) {
  console.log(db.name);
}
Modifiers
@public
method buildInfo
buildInfo: (options?: CommandOperationOptions) => Promise<Document>;
Retrieve the server build information
Parameter options
Optional settings for the command
method command
command: (command: Document, options?: RunCommandOptions) => Promise<Document>;
Execute a command
The driver will ensure the following fields are attached to the command sent to the server:
- lsid - sourced from an implicit session or options.session
- $readPreference - defaults to primary or can be configured by options.readPreference
- $db - sourced from the name of this database
If the client has a serverApi setting:
- apiVersion
- apiStrict
- apiDeprecationErrors
When in a transaction:
- readConcern - sourced from readConcern set on the TransactionOptions
- writeConcern - sourced from writeConcern set on the TransactionOptions
Attaching any of the above fields to the command will have no effect as the driver will overwrite the value.
Parameter command
The command to execute
Parameter options
Optional settings for the command
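A small sketch of running a raw command through the admin interface, assuming an already constructed MongoClient named client (the command document is illustrative; lsid, $readPreference and $db are attached by the driver as described above):
const admin = client.db().admin();
const result = await admin.command({ hello: 1 }); // basic server handshake/info command
console.log(result.ok);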
method listDatabases
listDatabases: (options?: ListDatabasesOptions) => Promise<ListDatabasesResult>;
List the available databases
Parameter options
Optional settings for the command
method ping
ping: (options?: CommandOperationOptions) => Promise<Document>;
Ping the MongoDB server and retrieve results
Parameter options
Optional settings for the command
method removeUser
removeUser: (username: string, options?: RemoveUserOptions) => Promise<boolean>;
Remove a user from a database
Parameter username
The username to remove
Parameter options
Optional settings for the command
method replSetGetStatus
replSetGetStatus: (options?: CommandOperationOptions) => Promise<Document>;
Get ReplicaSet status
Parameter options
Optional settings for the command
method serverInfo
serverInfo: (options?: CommandOperationOptions) => Promise<Document>;
Retrieve the server build information
Parameter options
Optional settings for the command
method serverStatus
serverStatus: (options?: CommandOperationOptions) => Promise<Document>;
Retrieve this db's server status.
Parameter options
Optional settings for the command
method validateCollection
validateCollection: ( collectionName: string, options?: ValidateCollectionOptions) => Promise<Document>;
Validate an existing collection
Parameter collectionName
The name of the collection to validate.
Parameter options
Optional settings for the command
class AggregationCursor
class AggregationCursor<TSchema = any> extends ExplainableCursor<TSchema> {}
The **AggregationCursor** class is an internal class that embodies an aggregation cursor on MongoDB, allowing for iteration over the results returned from the underlying query. It supports one-by-one document iteration, conversion to an array, or iteration as a Node.js stream.
Modifiers
@public
property pipeline
readonly pipeline: Document[];
method addStage
addStage: { (stage: Document): this; <T = Document>(stage: Document): AggregationCursor<T>;};
Add a stage to the aggregation pipeline
Example 1
const documents = await users.aggregate().addStage({ $match: { name: /Mike/ } }).toArray();
Example 2
const documents = await users.aggregate().addStage<{ name: string }>({ $project: { name: true } }).toArray(); // type of documents is { name: string }[]
method clone
clone: () => AggregationCursor<TSchema>;
method explain
explain: { (): Promise<Document>; (verbosity: ExplainVerbosityLike | ExplainCommandOptions): Promise<Document>; (options: { timeoutMS?: number }): Promise<Document>; ( verbosity: ExplainVerbosityLike | ExplainCommandOptions, options: { timeoutMS?: number } ): Promise<Document>;};
Execute the explain for the cursor
method geoNear
geoNear: ($geoNear: Document) => this;
Add a geoNear stage to the aggregation pipeline
method group
group: <T = TSchema>($group: Document) => AggregationCursor<T>;
Add a group stage to the aggregation pipeline
method limit
limit: ($limit: number) => this;
Add a limit stage to the aggregation pipeline
method lookup
lookup: ($lookup: Document) => this;
Add a lookup stage to the aggregation pipeline
method map
map: <T>(transform: (doc: TSchema) => T) => AggregationCursor<T>;
method match
match: ($match: Document) => this;
Add a match stage to the aggregation pipeline
method out
out: ($out: { db: string; coll: string } | string) => this;
Add an out stage to the aggregation pipeline
method project
project: <T extends Document = Document>( $project: Document) => AggregationCursor<T>;
Add a project stage to the aggregation pipeline
Remarks
In order to strictly type this function you must provide an interface that represents the effect of your projection on the result documents.
**Note for Typescript Users:** adding a transform changes the return type of the iteration of this cursor, it **does not** return a new instance of a cursor. This means when calling project, you should always assign the result to a new variable in order to get a correctly typed cursor variable. Take note of the following example:
Example 1
// Best way
const docs: AggregationCursor<{ a: number }> = cursor.project<{ a: number }>({ _id: 0, a: true });
// Flexible way
const docs: AggregationCursor<Document> = cursor.project({ _id: 0, a: true });
Example 2
const cursor: AggregationCursor<{ a: number; b: string }> = coll.aggregate([]);
const projectCursor = cursor.project<{ a: number }>({ _id: 0, a: true });
const aPropOnlyArray: { a: number }[] = await projectCursor.toArray();
// or always use chaining and save the final cursor
const cursor = coll.aggregate().project<{ a: string }>({
  _id: 0,
  a: { $convert: { input: '$a', to: 'string' } }
});
method redact
redact: ($redact: Document) => this;
Add a redact stage to the aggregation pipeline
method skip
skip: ($skip: number) => this;
Add a skip stage to the aggregation pipeline
method sort
sort: ($sort: Sort) => this;
Add a sort stage to the aggregation pipeline
method unwind
unwind: ($unwind: Document | string) => this;
Add an unwind stage to the aggregation pipeline
class Batch
class Batch<T = Document> {}
Keeps the state of an unordered batch so we can rewrite the results correctly after command execution
Modifiers
@public
constructor
constructor(batchType: BatchType, originalZeroIndex: number);
property batchType
batchType: BatchType;
property currentIndex
currentIndex: number;
property operations
operations: T[];
property originalIndexes
originalIndexes: number[];
property originalZeroIndex
originalZeroIndex: number;
property size
size: number;
property sizeBytes
sizeBytes: number;
class BulkOperationBase
abstract class BulkOperationBase {}
Modifiers
@public
property batches
readonly batches: Batch<Document>[];
property bsonOptions
readonly bsonOptions: BSONSerializeOptions;
property isOrdered
isOrdered: boolean;
property length
readonly length: number;
property operationId
operationId?: number;
property writeConcern
readonly writeConcern: WriteConcern;
method addToOperationsList
abstract addToOperationsList: ( batchType: BatchType, document: Document | UpdateStatement | DeleteStatement) => this;
method execute
execute: (options?: BulkWriteOptions) => Promise<BulkWriteResult>;
method find
find: (selector: Document) => FindOperators;
Builds a find operation for an update/updateOne/delete/deleteOne/replaceOne. Returns a builder object used to complete the definition of the operation.
Example 1
const bulkOp = collection.initializeOrderedBulkOp();

// Add an updateOne to the bulkOp
bulkOp.find({ a: 1 }).updateOne({ $set: { b: 2 } });

// Add an updateMany to the bulkOp
bulkOp.find({ c: 3 }).update({ $set: { d: 4 } });

// Add an upsert
bulkOp.find({ e: 5 }).upsert().updateOne({ $set: { f: 6 } });

// Add a deletion
bulkOp.find({ g: 7 }).deleteOne();

// Add a multi deletion
bulkOp.find({ h: 8 }).delete();

// Add a replaceOne
bulkOp.find({ i: 9 }).replaceOne({ writeConcern: { j: 10 } });

// Update using a pipeline (requires MongoDB 4.2 or higher)
bulkOp.find({ k: 11, y: { $exists: true }, z: { $exists: true } }).updateOne([
  { $set: { total: { $sum: ['$y', '$z'] } } }
]);

// All of the ops will now be executed
await bulkOp.execute();
method insert
insert: (document: Document) => BulkOperationBase;
Add a single insert document to the bulk operation
Example 1
const bulkOp = collection.initializeOrderedBulkOp();
// Adds three inserts to the bulkOp.
bulkOp.insert({ a: 1 }).insert({ b: 2 }).insert({ c: 3 });
await bulkOp.execute();
method raw
raw: (op: AnyBulkWriteOperation) => this;
Specifies a raw operation to perform in the bulk write.
class BulkWriteResult
class BulkWriteResult {}
The result of a bulk write.
Modifiers
@public
property deletedCount
readonly deletedCount: number;
Number of documents deleted.
property insertedCount
readonly insertedCount: number;
Number of documents inserted.
property insertedIds
readonly insertedIds: { [key: number]: any };
Generated _id values for inserted documents; the hash key is the index of the originating operation
property matchedCount
readonly matchedCount: number;
Number of documents matched for update.
property modifiedCount
readonly modifiedCount: number;
Number of documents modified.
property ok
readonly ok: number;
Evaluates to true if the bulk operation correctly executes
property upsertedCount
readonly upsertedCount: number;
Number of documents upserted.
property upsertedIds
readonly upsertedIds: { [key: number]: any };
Generated _id values for upserted documents; the hash key is the index of the originating operation
method getRawResponse
getRawResponse: () => Document;
Returns raw internal result
method getUpsertedIdAt
getUpsertedIdAt: (index: number) => Document | undefined;
Returns the upserted id at the given index
method getWriteConcernError
getWriteConcernError: () => WriteConcernError | undefined;
Retrieve the write concern error if one exists
method getWriteErrorAt
getWriteErrorAt: (index: number) => WriteError | undefined;
Returns a specific write error object
method getWriteErrorCount
getWriteErrorCount: () => number;
Returns the number of write errors from the bulk operation
method getWriteErrors
getWriteErrors: () => WriteError[];
Retrieve all write errors
method hasWriteErrors
hasWriteErrors: () => boolean;
Returns true if the bulk operation contains a write error
method isOk
isOk: () => boolean;
method toString
toString: () => string;
class CancellationToken
class CancellationToken extends TypedEventEmitter<{ cancel(): void;}> {}
Modifiers
@public
class ChangeStream
class ChangeStream< TSchema extends Document = Document, TChange extends Document = ChangeStreamDocument<TSchema> > extends TypedEventEmitter<ChangeStreamEvents<TSchema, TChange>> implements AsyncDisposable_2 {}
Creates a new Change Stream instance. Normally created using Collection.watch().
Modifiers
@public
property CHANGE
static readonly CHANGE: string;
Fired for each new matching change in the specified namespace. Attaching a change event listener to a Change Stream will switch the stream into flowing mode. Data will then be passed as soon as it is available.
property CLOSE
static readonly CLOSE: string;
property closed
readonly closed: boolean;
Is the cursor closed
property END
static readonly END: string;
property ERROR
static readonly ERROR: string;
property INIT
static readonly INIT: string;
property MORE
static readonly MORE: string;
property namespace
namespace: MongoDBNamespace;
property options
options: ChangeStreamOptions & { writeConcern?: never };
Remarks
WriteConcern can still be present on the options because we inherit options from the client/db/collection. The key must be present on the options in order to delete it. This allows typescript to delete the key but will not allow a writeConcern to be assigned as a property on options.
property parent
parent: MongoClient | Db | Collection<Document>;
property pipeline
pipeline: Document[];
property RESPONSE
static readonly RESPONSE: string;
property RESUME_TOKEN_CHANGED
static readonly RESUME_TOKEN_CHANGED: string;
Emitted each time the change stream stores a new resume token.
property resumeToken
readonly resumeToken: {};
The cached resume token that is used to resume after the most recently returned change.
property streamOptions
streamOptions?: CursorStreamOptions;
property type
type: Symbol;
method [Symbol.asyncIterator]
[Symbol.asyncIterator]: () => AsyncGenerator<TChange, void, void>;
method close
close: () => Promise<void>;
Frees the internal resources used by the change stream.
method hasNext
hasNext: () => Promise<boolean>;
Check if there is any document still available in the Change Stream
method next
next: () => Promise<TChange>;
Get the next available document from the Change Stream.
method stream
stream: (options?: CursorStreamOptions) => Readable & AsyncIterable<TChange>;
Return a modified Readable stream including a possible transform method.
NOTE: When using a Stream to process change stream events, the stream will NOT automatically resume in the case a resumable error is encountered.
Throws
MongoChangeStreamError if the underlying cursor or the change stream is closed
method tryNext
tryNext: () => Promise<TChange | null>;
Try to get the next available document from the Change Stream's cursor, or null if an empty batch is returned
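A hedged sketch of consuming a change stream obtained from Collection.watch() (the collection and pipeline are hypothetical):
const changeStream = collection.watch([{ $match: { operationType: 'insert' } }]);
try {
  for await (const change of changeStream) {
    if (change.operationType === 'insert') {
      // narrowing on operationType gives access to insert-specific fields
      console.log('inserted _id:', change.documentKey._id);
    }
  }
} finally {
  await changeStream.close(); // frees the underlying cursor resources
}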
class ClientEncryption
class ClientEncryption {}
The public interface for explicit in-use encryption
Modifiers
@public
constructor
constructor(client: MongoClient, options: ClientEncryptionOptions);
Create a new encryption instance
Example 1
new ClientEncryption(mongoClient, {
  keyVaultNamespace: 'client.encryption',
  kmsProviders: {
    local: {
      key: masterKey // The master key used for encryption/decryption. A 96-byte long Buffer
    }
  }
});
Example 2
new ClientEncryption(mongoClient, {
  keyVaultNamespace: 'client.encryption',
  kmsProviders: {
    aws: {
      accessKeyId: AWS_ACCESS_KEY,
      secretAccessKey: AWS_SECRET_KEY
    }
  }
});
property libmongocryptVersion
static readonly libmongocryptVersion: string;
method addKeyAltName
addKeyAltName: ( _id: Binary, keyAltName: string) => Promise<WithId<DataKey> | null>;
Adds a keyAltName to a key identified by the provided _id.
This method resolves to/returns the *old* key value (prior to adding the new altKeyName).
Parameter _id
The id of the document to update.
Parameter keyAltName
a keyAltName to search for a key
Returns
Returns a promise that either resolves to a DataKey if a document matches the key or null if no documents match the id. The promise rejects with an error if an error is thrown.
Example 1
// adding a keyAltName to a data key
const id = new Binary(); // id is a bson binary subtype 4 object
const keyAltName = 'keyAltName';
const oldKey = await clientEncryption.addKeyAltName(id, keyAltName);
if (!oldKey) {
  // null is returned if there is no matching document with an id matching the supplied id
}
method createDataKey
createDataKey: ( provider: ClientEncryptionDataKeyProvider, options?: ClientEncryptionCreateDataKeyProviderOptions) => Promise<UUID>;
Creates a data key used for explicit encryption and inserts it into the key vault namespace
Example 1
// Using async/await to create a local key
const dataKeyId = await clientEncryption.createDataKey('local');
Example 2
// Using async/await to create an aws key
const dataKeyId = await clientEncryption.createDataKey('aws', {
  masterKey: {
    region: 'us-east-1',
    key: 'xxxxxxxxxxxxxx' // CMK ARN here
  }
});
Example 3
// Using async/await to create an aws key with a keyAltName
const dataKeyId = await clientEncryption.createDataKey('aws', {
  masterKey: {
    region: 'us-east-1',
    key: 'xxxxxxxxxxxxxx' // CMK ARN here
  },
  keyAltNames: [ 'mySpecialKey' ]
});
method createEncryptedCollection
createEncryptedCollection: <TSchema extends Document = Document>( db: Db, name: string, options: { provider: ClientEncryptionDataKeyProvider; createCollectionOptions: Omit< CreateCollectionOptions, 'encryptedFields' > & { encryptedFields: Document }; masterKey?: | AWSEncryptionKeyOptions | AzureEncryptionKeyOptions | GCPEncryptionKeyOptions; }) => Promise<{ collection: Collection<TSchema>; encryptedFields: Document }>;
A convenience method for creating an encrypted collection. This method will create data keys for any encryptedFields that do not have a keyId defined and then create a new collection with the full set of encryptedFields.
Parameter db
A Node.js driver Db object with which to create the collection
Parameter name
The name of the collection to be created
Parameter options
Options for createDataKey and for createCollection
Returns
created collection and generated encryptedFields
Throws
MongoCryptCreateDataKeyError - If part way through the process a createDataKey invocation fails, an error will be rejected that has the partial encryptedFields that were created.
Throws
MongoCryptCreateEncryptedCollectionError - If creating the collection fails, an error will be rejected that has the entire encryptedFields that were created.
method decrypt
decrypt: <T = any>(value: Binary) => Promise<T>;
Explicitly decrypt a provided encrypted value
Parameter value
An encrypted value
Returns
a Promise that either resolves with the decrypted value, or rejects with an error
Example 1
// Decrypting value with async/await API
async function decryptMyValue(value) {
  return clientEncryption.decrypt(value);
}
method deleteKey
deleteKey: (_id: Binary) => Promise<DeleteResult>;
Deletes the key with the provided id from the keyvault, if it exists.
Example 1
// delete a key by _id
const id = new Binary(); // id is a bson binary subtype 4 object
const { deletedCount } = await clientEncryption.deleteKey(id);
if (deletedCount != null && deletedCount > 0) {
  // successful deletion
}
method encrypt
encrypt: ( value: unknown, options: ClientEncryptionEncryptOptions) => Promise<Binary>;
Explicitly encrypt a provided value. Note that either options.keyId or options.keyAltName must be specified. Specifying both options.keyId and options.keyAltName is considered an error.
Parameter value
The value that you wish to serialize. Must be of a type that can be serialized into BSON
Parameter options
Returns
a Promise that either resolves with the encrypted value, or rejects with an error.
Example 1
// Encryption with async/await api
async function encryptMyData(value) {
  const keyId = await clientEncryption.createDataKey('local');
  return clientEncryption.encrypt(value, { keyId, algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic' });
}
Example 2
// Encryption using a keyAltName
async function encryptMyData(value) {
  await clientEncryption.createDataKey('local', { keyAltNames: 'mySpecialKey' });
  return clientEncryption.encrypt(value, { keyAltName: 'mySpecialKey', algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic' });
}
method encryptExpression
encryptExpression: ( expression: Document, options: ClientEncryptionEncryptOptions) => Promise<Binary>;
Encrypts a Match Expression or Aggregate Expression to query a range index.
Only supported when queryType is "range" and algorithm is "Range".
Parameter expression
a BSON document of one of the following forms:
1. A Match Expression of this form: {$and: [{<field>: {$gt: <value1>}}, {<field>: {$lt: <value2>}}]}
2. An Aggregate Expression of this form: {$and: [{$gt: [<fieldpath>, <value1>]}, {$lt: [<fieldpath>, <value2>]}]}
$gt may also be $gte. $lt may also be $lte.
Parameter options
Returns
Returns a Promise that either resolves with the encrypted value or rejects with an error.
method getKey
getKey: (_id: Binary) => Promise<DataKey | null>;
Finds a key in the keyvault with the specified _id.
Returns a promise that either resolves to a DataKey if a document matches the key or null if no documents match the id. The promise rejects with an error if an error is thrown.
Example 1
// getting a key by id
const id = new Binary(); // id is a bson binary subtype 4 object
const key = await clientEncryption.getKey(id);
if (!key) {
  // key is null if there was no matching key
}
method getKeyByAltName
getKeyByAltName: (keyAltName: string) => Promise<WithId<DataKey> | null>;
Finds a key in the keyvault which has the specified keyAltName.
Parameter keyAltName
a keyAltName to search for a key
Returns
Returns a promise that either resolves to a DataKey if a document matches the key or null if no documents match the keyAltName. The promise rejects with an error if an error is thrown.
Example 1
// get a key by alt name
const keyAltName = 'keyAltName';
const key = await clientEncryption.getKeyByAltName(keyAltName);
if (!key) {
  // key is null if there is no matching key
}
method getKeys
getKeys: () => FindCursor<DataKey>;
Finds all the keys currently stored in the keyvault.
This method will not throw.
Returns
a FindCursor over all keys in the keyvault.
Example 1
// fetching all keys
const keys = await clientEncryption.getKeys().toArray();
method removeKeyAltName
removeKeyAltName: ( _id: Binary, keyAltName: string) => Promise<WithId<DataKey> | null>;
Removes a keyAltName from a key identified by the provided _id.
This method resolves to/returns the *old* key value (prior to removing the keyAltName).
If the removed keyAltName is the last keyAltName for that key, the keyAltNames property is unset from the document.
Parameter _id
The id of the document to update.
Parameter keyAltName
a keyAltName to search for a key
Returns
Returns a promise that either resolves to a DataKey if a document matches the key or null if no documents match the id. The promise rejects with an error if an error is thrown.
Example 1
// removing a key alt name from a data key
const id = new Binary(); // id is a bson binary subtype 4 object
const keyAltName = 'keyAltName';
const oldKey = await clientEncryption.removeKeyAltName(id, keyAltName);
if (!oldKey) {
  // null is returned if there is no matching document with an id matching the supplied id
}
method rewrapManyDataKey
rewrapManyDataKey: ( filter: Filter<DataKey>, options: ClientEncryptionRewrapManyDataKeyProviderOptions) => Promise<{ bulkWriteResult?: BulkWriteResult }>;
Searches the keyvault for any data keys matching the provided filter. If there are matches, rewrapManyDataKey then attempts to re-wrap the data keys using the provided options.
If no matches are found, then no bulk write is performed.
Example 1
// rewrapping all data keys (using a filter that matches all documents)
const filter = {};
const result = await clientEncryption.rewrapManyDataKey(filter);
if (result.bulkWriteResult != null) {
  // keys were re-wrapped, results will be available in the bulkWrite object.
}
Example 2
// attempting to rewrap all data keys with no matches
const filter = { _id: new Binary() }; // assume _id matches no documents in the database
const result = await clientEncryption.rewrapManyDataKey(filter);
if (result.bulkWriteResult == null) {
  // no keys matched, `bulkWriteResult` does not exist on the result object
}
class ClientSession
class ClientSession extends TypedEventEmitter<ClientSessionEvents> implements AsyncDisposable_2 {}
A class representing a client session on the server
NOTE: not meant to be instantiated directly.
Modifiers
@public
property clientOptions
clientOptions: MongoOptions;
property clusterTime
clusterTime?: ClusterTime;
property defaultTransactionOptions
defaultTransactionOptions: TransactionOptions;
property explicit
explicit: boolean;
property hasEnded
hasEnded: boolean;
property id
readonly id: ServerSessionId;
The server id associated with this session
property isPinned
readonly isPinned: boolean;
property loadBalanced
readonly loadBalanced: boolean;
property operationTime
operationTime?: Timestamp;
property serverSession
readonly serverSession: ServerSession;
property snapshotEnabled
readonly snapshotEnabled: boolean;
Whether or not this session is configured for snapshot reads
property supports
supports: { causalConsistency: boolean };
property timeoutMS
timeoutMS?: number;
Specifies the time an operation in a given ClientSession will run until it throws a timeout error
Modifiers
@experimental
property transaction
transaction: Transaction;
method abortTransaction
abortTransaction: (options?: { timeoutMS?: number }) => Promise<void>;
Aborts the currently active transaction in this session.
Parameter options
Optional options, can be used to override defaultTimeoutMS.
method advanceClusterTime
advanceClusterTime: (clusterTime: ClusterTime) => void;
Advances the clusterTime for a ClientSession to the provided clusterTime of another ClientSession
Parameter clusterTime
the $clusterTime returned by the server from another session in the form of a document containing the BSON.Timestamp clusterTime and signature
method advanceOperationTime
advanceOperationTime: (operationTime: Timestamp) => void;
Advances the operationTime for a ClientSession.
Parameter operationTime
the BSON.Timestamp of the operation type it is desired to advance to
method commitTransaction
commitTransaction: (options?: { timeoutMS?: number }) => Promise<void>;
Commits the currently active transaction in this session.
Parameter options
Optional options, can be used to override defaultTimeoutMS.
method endSession
endSession: (options?: EndSessionOptions) => Promise<void>;
Frees any client-side resources held by the current session. If a session is in a transaction, the transaction is aborted.
Does not end the session on the server.
Parameter options
Optional settings. Currently reserved for future use
method equals
equals: (session: ClientSession) => boolean;
Used to determine if this session equals another
Parameter session
The session to compare to
method incrementTransactionNumber
incrementTransactionNumber: () => void;
Increment the transaction number on the internal ServerSession
method inTransaction
inTransaction: () => boolean;
Returns
whether this session is currently in a transaction or not
method startTransaction
startTransaction: (options?: TransactionOptions) => void;
Starts a new transaction with the given options.
Parameter options
Options for the transaction
Remarks
**IMPORTANT**: Running operations in parallel is not supported during a transaction. The use of Promise.all, Promise.allSettled, Promise.race, etc. to parallelize operations inside a transaction is undefined behaviour.
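A minimal sketch of an explicit transaction built with startTransaction/commitTransaction; the client, orders and inventory collections are hypothetical, and every operation must be passed the session explicitly:
const session = client.startSession();
try {
  session.startTransaction({ readConcern: { level: 'snapshot' }, writeConcern: { w: 'majority' } });
  await orders.insertOne({ sku: 'abc123', qty: 1 }, { session });
  await inventory.updateOne({ sku: 'abc123' }, { $inc: { qty: -1 } }, { session });
  await session.commitTransaction();
} catch (error) {
  await session.abortTransaction();
  throw error;
} finally {
  await session.endSession();
}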
method toBSON
toBSON: () => never;
This is here to ensure that ClientSession is never serialized to BSON.
method withTransaction
withTransaction: <T = any>( fn: WithTransactionCallback<T>, options?: TransactionOptions & { timeoutMS?: number }) => Promise<T>;
Starts a transaction and runs a provided function, ensuring the commitTransaction is always attempted when all operations run in the function have completed.
**IMPORTANT:** This method requires the function passed in to return a Promise. That promise must be made by await-ing all operations in such a way that rejections are propagated to the returned promise.
**IMPORTANT:** Running operations in parallel is not supported during a transaction. The use of Promise.all, Promise.allSettled, Promise.race, etc. to parallelize operations inside a transaction is undefined behaviour.
**IMPORTANT:** When running an operation inside a withTransaction callback, if it is not provided the explicit session in its options, it will not be part of the transaction and it will not respect timeoutMS.
Parameter fn
callback to run within a transaction
Parameter options
optional settings for the transaction
Returns
A raw command response or undefined
Remarks
- If all operations successfully complete and the commitTransaction operation is successful, then the provided function will return the result of the provided function.
- If the transaction is unable to complete or an error is thrown from within the provided function, then the provided function will throw an error.
- If the transaction is manually aborted within the provided function it will not throw.
- If the driver needs to attempt to retry the operations, the provided function may be called multiple times.
Check out a descriptive example here:
See Also
https://www.mongodb.com/blog/post/quick-start-nodejs--mongodb--how-to-implement-transactions
If a command inside withTransaction fails: - It may cause the transaction on the server to be aborted. - This situation is normally handled transparently by the driver. - However, if the application catches such an error and does not rethrow it, the driver will not be able to determine whether the transaction was aborted or not. - The driver will then retry the transaction indefinitely.
To avoid this situation, the application must not silently handle errors within the provided function. If the application needs to handle errors within, it must await all operations such that if an operation is rejected it becomes the rejection of the callback function passed into withTransaction.
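A hedged example of withTransaction that follows the notes above: the callback awaits every operation and passes the session so each operation participates in the transaction (the orders and inventory collections are hypothetical):
await client.withSession(async session =>
  session.withTransaction(async session => {
    await orders.insertOne({ sku: 'abc123', qty: 1 }, { session });
    await inventory.updateOne({ sku: 'abc123' }, { $inc: { qty: -1 } }, { session });
  })
);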
class Collection
class Collection<TSchema extends Document = Document> {}
The **Collection** class is an internal class that embodies a MongoDB collection, allowing for insert/find/update/delete and other command operations on that MongoDB collection.
**The Collection class cannot be instantiated directly.**
Example 1
import { MongoClient } from 'mongodb';

interface Pet {
  name: string;
  kind: 'dog' | 'cat' | 'fish';
}

const client = new MongoClient('mongodb://localhost:27017');
const pets = client.db().collection<Pet>('pets');

const petCursor = pets.find();

for await (const pet of petCursor) {
  console.log(`${pet.name} is a ${pet.kind}!`);
}
Modifiers
@public
property bsonOptions
readonly bsonOptions: BSONSerializeOptions;
property collectionName
readonly collectionName: string;
The name of this collection
property dbName
readonly dbName: string;
The name of the database this collection belongs to
property hint
hint: any;
The current index hint for the collection
property namespace
readonly namespace: string;
The namespace of this collection, in the format ${this.dbName}.${this.collectionName}
property readConcern
readonly readConcern: ReadConcern;
The current readConcern of the collection. If not explicitly defined for this collection, will be inherited from the parent DB
property readPreference
readonly readPreference: ReadPreference;
The current readPreference of the collection. If not explicitly defined for this collection, will be inherited from the parent DB
property timeoutMS
readonly timeoutMS: number;
property writeConcern
readonly writeConcern: WriteConcern;
The current writeConcern of the collection. If not explicitly defined for this collection, will be inherited from the parent DB
method aggregate
aggregate: <T extends Document = Document>( pipeline?: Document[], options?: AggregateOptions) => AggregationCursor<T>;
Execute an aggregation framework pipeline against the collection, needs MongoDB >= 2.2
Parameter pipeline
An array of aggregation pipeline stages to execute
Parameter options
Optional settings for the command
method bulkWrite
bulkWrite: ( operations: ReadonlyArray<AnyBulkWriteOperation<TSchema>>, options?: BulkWriteOptions) => Promise<BulkWriteResult>;
Perform a bulkWrite operation without a fluent API
Legal operation types are: insertOne, replaceOne, updateOne, updateMany, deleteOne and deleteMany.
If documents passed in do not contain the **_id** field, one will be added to each of the documents missing it by the driver, mutating the document. This behavior can be overridden by setting the **forceServerObjectId** flag.
Parameter operations
Bulk operations to perform
Parameter options
Optional settings for the command
Throws
MongoDriverError if operations is not an array
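A short sketch of the shape of the operations array (the collection and documents are hypothetical):
const result = await collection.bulkWrite([
  { insertOne: { document: { name: 'mongo', kind: 'db' } } },
  { updateOne: { filter: { name: 'mongo' }, update: { $set: { kind: 'database' } }, upsert: true } },
  { deleteMany: { filter: { deprecated: true } } }
]);
console.log(result.insertedCount, result.modifiedCount, result.deletedCount);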
method count
count: (filter?: Filter<TSchema>, options?: CountOptions) => Promise<number>;
An estimated count of matching documents in the db to a filter.
**NOTE:** This method has been deprecated, since it does not provide an accurate count of the documents in a collection. To obtain an accurate count of documents in the collection, use countDocuments. To obtain an estimated count of all documents in the collection, use estimatedDocumentCount.
Parameter filter
The filter for the count.
Parameter options
Optional settings for the command
Deprecated
use countDocuments or estimatedDocumentCount instead
method countDocuments
countDocuments: ( filter?: Filter<TSchema>, options?: CountDocumentsOptions) => Promise<number>;
Gets the number of documents matching the filter. For a fast count of the total documents in a collection see estimatedDocumentCount.
Due to countDocuments using the $match aggregation pipeline stage, certain query operators cannot be used in countDocuments. This includes the $where and $near query operators, among others. Details can be found in the documentation for the $match aggregation pipeline stage.
**Note**: When migrating from count to countDocuments the following query operators must be replaced:
| Operator | Replacement |
| -------- | ----------- |
| $where | [$expr][1] |
| $near | [$geoWithin][2] with [$center][3] |
| $nearSphere | [$geoWithin][2] with [$centerSphere][4] |

[1]: https://www.mongodb.com/docs/manual/reference/operator/query/expr/
[2]: https://www.mongodb.com/docs/manual/reference/operator/query/geoWithin/
[3]: https://www.mongodb.com/docs/manual/reference/operator/query/center/#op._S_center
[4]: https://www.mongodb.com/docs/manual/reference/operator/query/centerSphere/#op._S_centerSphere
Parameter filter
The filter for the count
Parameter options
Optional settings for the command
See Also
https://www.mongodb.com/docs/manual/reference/operator/query/expr/
https://www.mongodb.com/docs/manual/reference/operator/query/geoWithin/
https://www.mongodb.com/docs/manual/reference/operator/query/center/#op._S_center
https://www.mongodb.com/docs/manual/reference/operator/query/centerSphere/#op._S_centerSphere
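A hedged example of countDocuments, including an $expr rewrite of a $where-style predicate (the orders collection and its fields are hypothetical):
const total = await orders.countDocuments({}); // counts via the aggregation pipeline
// `$where: 'this.qty > this.reserved'` must be rewritten with $expr for countDocuments
const overbooked = await orders.countDocuments({ $expr: { $gt: ['$qty', '$reserved'] } });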
method createIndex
createIndex: ( indexSpec: IndexSpecification, options?: CreateIndexesOptions) => Promise<string>;
Creates an index on the db and collection collection.
Parameter indexSpec
The field name or index specification to create an index for
Parameter options
Optional settings for the command
Example 1
const collection = client.db('foo').collection('bar');
await collection.createIndex({ a: 1, b: -1 });

// Alternate syntax for { c: 1, d: -1 } that ensures order of indexes
await collection.createIndex([ ['c', 1], ['d', -1] ]);

// Equivalent to { e: 1 }
await collection.createIndex('e');

// Equivalent to { f: 1, g: 1 }
await collection.createIndex(['f', 'g']);

// Equivalent to { h: 1, i: -1 }
await collection.createIndex([ { h: 1 }, { i: -1 } ]);

// Equivalent to { j: 1, k: -1, l: 2d }
await collection.createIndex(['j', ['k', -1], { l: '2d' }]);
method createIndexes
createIndexes: ( indexSpecs: IndexDescription[], options?: CreateIndexesOptions) => Promise<string[]>;
Creates multiple indexes in the collection; this method is only supported for MongoDB 2.6 or higher. Earlier versions of MongoDB will throw a command not supported error.
**Note**: Unlike createIndex, this function takes in raw index specifications. Index specifications are defined here.
Parameter indexSpecs
An array of index specifications to be created
Parameter options
Optional settings for the command
Example 1
const collection = client.db('foo').collection('bar');
await collection.createIndexes([
  // Simple index on field fizz
  {
    key: { fizz: 1 },
  },
  // wildcard index
  {
    key: { '$**': 1 }
  },
  // named index on darmok and jalad
  {
    key: { darmok: 1, jalad: -1 },
    name: 'tanagra'
  }
]);
method createSearchIndex
createSearchIndex: (description: SearchIndexDescription) => Promise<string>;
Creates a single search index for the collection.
Parameter description
The index description for the new search index.
Returns
A promise that resolves to the name of the new search index.
Remarks
Only available when used against a 7.0+ Atlas cluster.
method createSearchIndexes
createSearchIndexes: ( descriptions: SearchIndexDescription[]) => Promise<string[]>;
Creates multiple search indexes for the current collection.
Parameter descriptions
An array of SearchIndexDescriptions for the new search indexes.
Returns
A promise that resolves to an array of the names of the new search indexes.
Remarks
Only available when used against a 7.0+ Atlas cluster.
method deleteMany
deleteMany: ( filter?: Filter<TSchema>, options?: DeleteOptions) => Promise<DeleteResult>;
Delete multiple documents from a collection
Parameter filter
The filter used to select the documents to remove
Parameter options
Optional settings for the command
method deleteOne
deleteOne: ( filter?: Filter<TSchema>, options?: DeleteOptions) => Promise<DeleteResult>;
Delete a document from a collection
Parameter filter
The filter used to select the document to remove
Parameter options
Optional settings for the command
method distinct
distinct: { <Key extends '_id' | keyof EnhancedOmit<TSchema, '_id'>>(key: Key): Promise< Array<Flatten<WithId<TSchema>[Key]>> >; <Key extends '_id' | keyof EnhancedOmit<TSchema, '_id'>>( key: Key, filter: Filter<TSchema> ): Promise<Flatten<WithId<TSchema>[Key]>[]>; <Key extends '_id' | keyof EnhancedOmit<TSchema, '_id'>>( key: Key, filter: Filter<TSchema>, options: CommandOperationOptions ): Promise<Flatten<WithId<TSchema>[Key]>[]>; (key: string): Promise<any[]>; (key: string, filter: Filter<TSchema>): Promise<any[]>; ( key: string, filter: Filter<TSchema>, options: CommandOperationOptions ): Promise<any[]>;};
The distinct command returns a list of distinct values for the given key across a collection.
Parameter key
Field of the document to find distinct values for
Parameter filter
The filter for filtering the set of documents to which we apply the distinct filter.
Parameter options
Optional settings for the command
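A small sketch of distinct with and without a filter, reusing the hypothetical typed pets collection from the Collection example above:
const kinds = await pets.distinct('kind'); // e.g. ['cat', 'dog', 'fish']
const dogNames = await pets.distinct('name', { kind: 'dog' }); // distinct names among dogs only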
method drop
drop: (options?: DropCollectionOptions) => Promise<boolean>;
Drop the collection from the database, removing it permanently. New accesses will create a new collection.
Parameter options
Optional settings for the command
method dropIndex
dropIndex: ( indexName: string, options?: DropIndexesOptions) => Promise<Document>;
Drops an index from this collection.
Parameter indexName
Name of the index to drop.
Parameter options
Optional settings for the command
method dropIndexes
dropIndexes: (options?: DropIndexesOptions) => Promise<boolean>;
Drops all indexes from this collection.
Parameter options
Optional settings for the command
method dropSearchIndex
dropSearchIndex: (name: string) => Promise<void>;
Deletes a search index by index name.
Parameter name
The name of the search index to be deleted.
Remarks
Only available when used against a 7.0+ Atlas cluster.
method estimatedDocumentCount
estimatedDocumentCount: ( options?: EstimatedDocumentCountOptions) => Promise<number>;
Gets an estimate of the count of documents in a collection using collection metadata. This will always run a count command on all server versions.
Due to an oversight in versions 5.0.0-5.0.8 of MongoDB, the count command, which estimatedDocumentCount uses in its implementation, was not included in v1 of the Stable API. Users of the Stable API with estimatedDocumentCount are therefore recommended to upgrade their server version to 5.0.9+ or set apiStrict: false to avoid encountering errors.
Parameter options
Optional settings for the command
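For illustration, a sketch contrasting the metadata-based estimate with an exact, filtered count via countDocuments (both assume a collection variable in scope):
// Fast, metadata-based estimate of the total number of documents.
const estimate = await collection.estimatedDocumentCount();

// Exact count for a filter (runs an aggregation under the hood).
const dogs = await collection.countDocuments({ kind: 'dog' });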
method find
find: { (): FindCursor<WithId<TSchema>>; (filter: Filter<TSchema>, options?: FindOptions<Document>): FindCursor< WithId<TSchema> >; <T extends Document>( filter: Filter<TSchema>, options?: FindOptions<Document> ): FindCursor<T>;};
Creates a cursor for a filter that can be used to iterate over results from MongoDB
Parameter filter
The filter predicate. If unspecified, then all documents in the collection will match the predicate
method findOne
findOne: { (): Promise<WithId<TSchema> | null>; (filter: Filter<TSchema>): Promise<WithId<TSchema>>; ( filter: Filter<TSchema>, options: Omit<FindOptions<Document>, 'timeoutMode'> ): Promise<WithId<TSchema>>; <T = TSchema>(): Promise<T>; <T = TSchema>(filter: Filter<TSchema>): Promise<T>; <T = TSchema>( filter: Filter<TSchema>, options?: Omit<FindOptions<Document>, 'timeoutMode'> ): Promise<T>;};
Fetches the first document that matches the filter
Parameter filter
Query for find Operation
Parameter options
Optional settings for the command
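For illustration, a minimal sketch assuming documents with name and kind fields; at runtime findOne resolves to null when nothing matches:
// Fetch a single matching document, returning only the name field.
const doc = await collection.findOne(
  { kind: 'dog' },
  { projection: { _id: 0, name: 1 } }
);
if (doc === null) {
  console.log('no dogs found');
}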
method findOneAndDelete
findOneAndDelete: { ( filter: Filter<TSchema>, options: FindOneAndDeleteOptions & { includeResultMetadata: true } ): Promise<ModifyResult<TSchema>>; ( filter: Filter<TSchema>, options: FindOneAndDeleteOptions & { includeResultMetadata: false } ): Promise<WithId<TSchema>>; (filter: Filter<TSchema>, options: FindOneAndDeleteOptions): Promise< WithId<TSchema> >; (filter: Filter<TSchema>): Promise<WithId<TSchema>>;};
Find a document and delete it in one atomic operation. Requires a write lock for the duration of the operation.
Parameter filter
The filter used to select the document to remove
Parameter options
Optional settings for the command
method findOneAndReplace
findOneAndReplace: { ( filter: Filter<TSchema>, replacement: WithoutId<TSchema>, options: FindOneAndReplaceOptions & { includeResultMetadata: true } ): Promise<ModifyResult<TSchema>>; ( filter: Filter<TSchema>, replacement: WithoutId<TSchema>, options: FindOneAndReplaceOptions & { includeResultMetadata: false } ): Promise<WithId<TSchema>>; ( filter: Filter<TSchema>, replacement: WithoutId<TSchema>, options: FindOneAndReplaceOptions ): Promise<WithId<TSchema>>; (filter: Filter<TSchema>, replacement: WithoutId<TSchema>): Promise< WithId<TSchema> >;};
Find a document and replace it in one atomic operation. Requires a write lock for the duration of the operation.
Parameter filter
The filter used to select the document to replace
Parameter replacement
The Document that replaces the matching document
Parameter options
Optional settings for the command
method findOneAndUpdate
findOneAndUpdate: { ( filter: Filter<TSchema>, update: UpdateFilter<TSchema>, options: FindOneAndUpdateOptions & { includeResultMetadata: true } ): Promise<ModifyResult<TSchema>>; ( filter: Filter<TSchema>, update: any, options: FindOneAndUpdateOptions & { includeResultMetadata: false } ): Promise<WithId<TSchema>>; ( filter: Filter<TSchema>, update: any, options: FindOneAndUpdateOptions ): Promise<WithId<TSchema>>; (filter: Filter<TSchema>, update: any): Promise<WithId<TSchema>>;};
Find a document and update it in one atomic operation. Requires a write lock for the duration of the operation.
Parameter filter
The filter used to select the document to update
Parameter update
Update operations to be performed on the document
Parameter options
Optional settings for the command
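For illustration, a sketch assuming documents keyed by a slug field; the options used are those named above (returnDocument, upsert, includeResultMetadata):
// Atomically increment a counter and get the post-update document back.
const updated = await collection.findOneAndUpdate(
  { slug: 'home' },
  { $inc: { visits: 1 } },
  { returnDocument: 'after', upsert: true }
);

// With includeResultMetadata: true the full ModifyResult is returned instead.
const { value, ok } = await collection.findOneAndUpdate(
  { slug: 'home' },
  { $inc: { visits: 1 } },
  { returnDocument: 'after', includeResultMetadata: true }
);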
method indexes
indexes: { (options: IndexInformationOptions & { full?: true }): Promise< IndexDescriptionInfo[] >; ( options: IndexInformationOptions & { full: false } ): Promise<IndexDescriptionCompact>; (options: IndexInformationOptions): Promise<any[] | IndexDescriptionCompact>; (options?: AbstractCursorOptions): Promise<any[]>;};
Retrieve all the indexes on the collection.
Parameter options
Optional settings for the command
method indexExists
indexExists: ( indexes: string | string[], options?: ListIndexesOptions) => Promise<boolean>;
Checks if one or more indexes exist on the collection; fails on the first non-existing index.
Parameter indexes
One or more index names to check.
Parameter options
Optional settings for the command
method indexInformation
indexInformation: { (options: IndexInformationOptions & { full: true }): Promise< IndexDescriptionInfo[] >; ( options: IndexInformationOptions & { full?: false } ): Promise<IndexDescriptionCompact>; (options: IndexInformationOptions): Promise<any[] | IndexDescriptionCompact>; (): Promise<IndexDescriptionCompact>;};
Retrieves this collection's index info.
Parameter options
Optional settings for the command
method initializeOrderedBulkOp
initializeOrderedBulkOp: (options?: BulkWriteOptions) => OrderedBulkOperation;
Initiate an in-order bulk write operation. Operations will be serially executed in the order they are added, creating a new operation for each switch in types.
Throws
MongoNotConnectedError
Remarks
**NOTE:** MongoClient must be connected prior to calling this method due to a known limitation in this legacy implementation. However, collection.bulkWrite() provides an equivalent API that does not require prior connecting.
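For illustration, a minimal sketch of an ordered bulk operation, assuming the MongoClient is already connected (see the note above) and a collection variable is in scope:
// Operations are executed serially, batched by type, in the order they are added.
const bulk = collection.initializeOrderedBulkOp();
bulk.insert({ name: 'spot', kind: 'dog' });
bulk.find({ name: 'spot' }).updateOne({ $set: { kind: 'cat' } });
bulk.find({ kind: 'fish' }).delete();
const result = await bulk.execute();
console.log(result.insertedCount, result.modifiedCount, result.deletedCount);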
method initializeUnorderedBulkOp
initializeUnorderedBulkOp: ( options?: BulkWriteOptions) => UnorderedBulkOperation;
Initiate an out-of-order batch write operation. All operations will be buffered into insert/update/remove commands executed out of order.
Throws
MongoNotConnectedError
Remarks
**NOTE:** MongoClient must be connected prior to calling this method due to a known limitation in this legacy implementation. However, collection.bulkWrite() provides an equivalent API that does not require prior connecting.
method insertMany
insertMany: ( docs: ReadonlyArray<OptionalUnlessRequiredId<TSchema>>, options?: BulkWriteOptions) => Promise<InsertManyResult<TSchema>>;
Inserts an array of documents into MongoDB. If documents passed in do not contain the **_id** field, one will be added to each of the documents missing it by the driver, mutating the document. This behavior can be overridden by setting the **forceServerObjectId** flag.
Parameter docs
The documents to insert
Parameter options
Optional settings for the command
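For illustration, a minimal sketch assuming a collection variable is in scope:
// Insert several documents; with ordered: false the remaining inserts
// still run even if one of them fails (e.g. on a duplicate key).
const result = await collection.insertMany(
  [{ name: 'spot' }, { name: 'felix' }, { name: 'nemo' }],
  { ordered: false }
);
console.log(result.insertedCount, result.insertedIds);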
method insertOne
insertOne: ( doc: OptionalUnlessRequiredId<TSchema>, options?: InsertOneOptions) => Promise<InsertOneResult<TSchema>>;
Inserts a single document into MongoDB. If the document passed in does not contain the **_id** field, one will be added by the driver, mutating the document. This behavior can be overridden by setting the **forceServerObjectId** flag.
Parameter doc
The document to insert
Parameter options
Optional settings for the command
method isCapped
isCapped: (options?: OperationOptions) => Promise<boolean>;
Returns whether the collection is a capped collection.
Parameter options
Optional settings for the command
method listIndexes
listIndexes: (options?: ListIndexesOptions) => ListIndexesCursor;
Get the list of all index information for the collection.
Parameter options
Optional settings for the command
method listSearchIndexes
listSearchIndexes: { (options?: ListSearchIndexesOptions): ListSearchIndexesCursor; (name: string, options?: ListSearchIndexesOptions): ListSearchIndexesCursor;};
Returns all search indexes for the current collection.
Parameter options
The options for the list indexes operation.
Remarks
Only available when used against a 7.0+ Atlas cluster.
Returns all search indexes for the current collection.
Parameter name
The name of the index to search for. Only indexes with matching index names will be returned.
Parameter options
The options for the list indexes operation.
Remarks
Only available when used against a 7.0+ Atlas cluster.
method options
options: (options?: OperationOptions) => Promise<Document>;
Returns the options of the collection.
Parameter options
Optional settings for the command
method rename
rename: (newName: string, options?: RenameOptions) => Promise<Collection>;
Rename the collection.
Parameter newName
New name of the collection.
Parameter options
Optional settings for the command
Remarks
This operation does not inherit options from the Db or MongoClient.
method replaceOne
replaceOne: ( filter: Filter<TSchema>, replacement: WithoutId<TSchema>, options?: ReplaceOptions) => Promise<UpdateResult<TSchema> | Document>;
Replace a document in a collection with another document
Parameter filter
The filter used to select the document to replace
Parameter replacement
The Document that replaces the matching document
Parameter options
Optional settings for the command
method updateMany
updateMany: ( filter: Filter<TSchema>, update: UpdateFilter<TSchema> | Document[], options?: UpdateOptions) => Promise<UpdateResult<TSchema>>;
Update multiple documents in a collection
The value of update can be either:
- UpdateFilter - a document that contains update operator expressions
- Document[] - an aggregation pipeline
Parameter filter
The filter used to select the document to update
Parameter update
The modifications to apply
Parameter options
Optional settings for the command
method updateOne
updateOne: ( filter: Filter<TSchema>, update: UpdateFilter<TSchema> | Document[], options?: UpdateOptions) => Promise<UpdateResult<TSchema>>;
Update a single document in a collection
The value of update can be either:
- UpdateFilter - a document that contains update operator expressions
- Document[] - an aggregation pipeline
Parameter filter
The filter used to select the document to update
Parameter update
The modifications to apply
Parameter options
Optional settings for the command
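For illustration, a minimal sketch assuming a collection variable is in scope:
// Update a single matching document, creating it if none matches (upsert).
const result = await collection.updateOne(
  { name: 'spot' },
  { $set: { kind: 'dog' }, $inc: { age: 1 } },
  { upsert: true }
);
console.log(result.matchedCount, result.modifiedCount, result.upsertedId);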
method updateSearchIndex
updateSearchIndex: (name: string, definition: Document) => Promise<void>;
Updates a search index by replacing the existing index definition with the provided definition.
Parameter name
The name of the search index to update.
Parameter definition
The new search index definition.
Remarks
Only available when used against a 7.0+ Atlas cluster.
method watch
watch: < TLocal extends Document = TSchema, TChange extends Document = ChangeStreamDocument<TLocal>>( pipeline?: Document[], options?: ChangeStreamOptions) => ChangeStream<TLocal, TChange>;
Create a new Change Stream, watching for new changes (insertions, updates, replacements, deletions, and invalidations) in this collection.
Parameter pipeline
An array of aggregation pipeline stages through which to pass change stream documents. This allows for filtering (using $match) and manipulating the change stream documents.
Parameter options
Optional settings for the command
Remarks
When timeoutMS is configured for a change stream, it will have different behaviour depending on whether the change stream is in iterator mode or emitter mode. In both cases, a change stream will time out if it does not receive a change event within timeoutMS of the last change event.
Note that if a change stream is consistently timing out when watching a collection, database or client that is being changed, then this may be due to the server timing out before it can finish processing the existing oplog. To address this, restart the change stream with a higher timeoutMS.
If the change stream times out the initial aggregate operation to establish the change stream on the server, then the client will close the change stream. If the getMore calls to the server time out, then the change stream will be left open, but will throw a MongoOperationTimeoutError when in iterator mode and emit an error event that returns a MongoOperationTimeoutError in emitter mode.
To determine whether or not the change stream is still open following a timeout, check the ChangeStream.closed getter.
Example 1
By just providing the first argument I can type the change to be ChangeStreamDocument<{ _id: number }>
collection.watch<{ _id: number }>()
  .on('change', change => console.log(change._id.toFixed(4)));
Example 2
Passing a second argument provides a way to reflect the type changes caused by an advanced pipeline. Here, we are using a pipeline to have MongoDB filter for insert changes only and add a comment. No need to start from scratch on the ChangeStreamInsertDocument type! By using an intersection we can save time and ensure defaults remain the same type!
collection
  .watch<Schema, ChangeStreamInsertDocument<Schema> & { comment: string }>([
    { $addFields: { comment: 'big changes' } },
    { $match: { operationType: 'insert' } }
  ])
  .on('change', change => {
    change.comment.startsWith('big');
    change.operationType === 'insert';
    // No need to narrow in code because the generics did that for us!
    expectType<Schema>(change.fullDocument);
  });
Example 3
In iterator mode, if a next() call throws a timeout error, it will attempt to resume the change stream. The next call can just be retried after this succeeds.
const changeStream = collection.watch([], { timeoutMS: 100 });
try {
  await changeStream.next();
} catch (e) {
  if (e instanceof MongoOperationTimeoutError && !changeStream.closed) {
    await changeStream.next();
  }
  throw e;
}
Example 4
In emitter mode, if the change stream goes timeoutMS without emitting a change event, it will emit an error event that returns a MongoOperationTimeoutError, but will not close the change stream unless the resume attempt fails. There is no need to re-establish change listeners as this will automatically continue emitting change events once the resume attempt completes.
const changeStream = collection.watch([], { timeoutMS: 100 });
changeStream.on('change', console.log);
changeStream.on('error', e => {
  if (e instanceof MongoOperationTimeoutError && !changeStream.closed) {
    // do nothing
  } else {
    changeStream.close();
  }
});
class CommandFailedEvent
class CommandFailedEvent {}
An event indicating the failure of a given command
Event
Modifiers
@public
property address
address: string;
property commandName
commandName: string;
property connectionId
connectionId?: string | number;
Driver generated connection id
property duration
duration: number;
property failure
failure: Error;
property hasServiceId
readonly hasServiceId: boolean;
property requestId
requestId: number;
property serverConnectionId
serverConnectionId: BigInt;
Server generated connection id Distinct from the connection id and is returned by the hello or legacy hello response as "connectionId" from the server on 4.2+.
property serviceId
serviceId?: ObjectId;
class CommandStartedEvent
class CommandStartedEvent {}
An event indicating the start of a given command
Event
Modifiers
@public
property address
address: string;
property command
command: Document;
property commandName
commandName: string;
property commandObj
commandObj?: Document;
property connectionId
connectionId?: string | number;
Driver generated connection id
property databaseName
databaseName: string;
property hasServiceId
readonly hasServiceId: boolean;
property requestId
requestId: number;
property serverConnectionId
serverConnectionId: BigInt;
Server generated connection id Distinct from the connection id and is returned by the hello or legacy hello response as "connectionId" from the server on 4.2+.
property serviceId
serviceId?: ObjectId;
class CommandSucceededEvent
class CommandSucceededEvent {}
An event indicating the success of a given command
Event
Modifiers
@public
property address
address: string;
property commandName
commandName: string;
property connectionId
connectionId?: string | number;
Driver generated connection id
property duration
duration: number;
property hasServiceId
readonly hasServiceId: boolean;
property reply
reply: {};
property requestId
requestId: number;
property serverConnectionId
serverConnectionId: BigInt;
Server generated connection id Distinct from the connection id and is returned by the hello or legacy hello response as "connectionId" from the server on 4.2+.
property serviceId
serviceId?: ObjectId;
class ConnectionCheckedInEvent
class ConnectionCheckedInEvent extends ConnectionPoolMonitoringEvent {}
An event published when a connection is checked into the connection pool
Event
Modifiers
@public
property connectionId
connectionId: number | '<monitor>';
The id of the connection
class ConnectionCheckedOutEvent
class ConnectionCheckedOutEvent extends ConnectionPoolMonitoringEvent {}
An event published when a connection is checked out of the connection pool
Event
Modifiers
@public
property connectionId
connectionId: number | '<monitor>';
The id of the connection
property durationMS
durationMS: number;
The time it took to check out the connection. More specifically, the time elapsed between emitting a ConnectionCheckOutStartedEvent and emitting this event as part of the same checking out.
class ConnectionCheckOutFailedEvent
class ConnectionCheckOutFailedEvent extends ConnectionPoolMonitoringEvent {}
An event published when a request to check a connection out fails
Event
Modifiers
@public
property durationMS
durationMS: number;
The time it took to check out the connection. More specifically, the time elapsed between emitting a ConnectionCheckOutStartedEvent and emitting this event as part of the same check out.
property reason
reason: string;
The reason the attempt to check out failed
class ConnectionCheckOutStartedEvent
class ConnectionCheckOutStartedEvent extends ConnectionPoolMonitoringEvent {}
An event published when a request to check a connection out begins
Event
Modifiers
@public
class ConnectionClosedEvent
class ConnectionClosedEvent extends ConnectionPoolMonitoringEvent {}
An event published when a connection is closed
Event
Modifiers
@public
property connectionId
connectionId: number | '<monitor>';
The id of the connection
property reason
reason: string;
The reason the connection was closed
property serviceId
serviceId?: ObjectId;
class ConnectionCreatedEvent
class ConnectionCreatedEvent extends ConnectionPoolMonitoringEvent {}
An event published when a connection pool creates a new connection
Event
Modifiers
@public
property connectionId
connectionId: number | '<monitor>';
A monotonically increasing, per-pool id for the newly created connection
class ConnectionPoolClearedEvent
class ConnectionPoolClearedEvent extends ConnectionPoolMonitoringEvent {}
An event published when a connection pool is cleared
Event
Modifiers
@public
property interruptInUseConnections
interruptInUseConnections?: boolean;
class ConnectionPoolClosedEvent
class ConnectionPoolClosedEvent extends ConnectionPoolMonitoringEvent {}
An event published when a connection pool is closed
Event
Modifiers
@public
class ConnectionPoolCreatedEvent
class ConnectionPoolCreatedEvent extends ConnectionPoolMonitoringEvent {}
An event published when a connection pool is created
Event
Modifiers
@public
property options
options: Pick< ConnectionPoolOptions, | 'maxPoolSize' | 'minPoolSize' | 'maxConnecting' | 'maxIdleTimeMS' | 'waitQueueTimeoutMS'>;
The options used to create this connection pool
class ConnectionPoolMonitoringEvent
abstract class ConnectionPoolMonitoringEvent {}
The base export class for all monitoring events published from the connection pool
Event
Modifiers
@public
class ConnectionPoolReadyEvent
class ConnectionPoolReadyEvent extends ConnectionPoolMonitoringEvent {}
An event published when a connection pool is ready
Event
Modifiers
@public
class ConnectionReadyEvent
class ConnectionReadyEvent extends ConnectionPoolMonitoringEvent {}
An event published when a connection is ready for use
Event
Modifiers
@public
property connectionId
connectionId: number | '<monitor>';
The id of the connection
property durationMS
durationMS: number;
The time it took to establish the connection. In accordance with the definition of establishment of a connection specified by ConnectionPoolOptions.maxConnecting, it is the time elapsed between emitting a ConnectionCreatedEvent and emitting this event as part of the same checking out.
Naturally, when establishing a connection is part of checking out, this duration is not greater than ConnectionCheckedOutEvent.duration.
class Db
class Db {}
The **Db** class represents a MongoDB database.
Example 1
import { MongoClient } from 'mongodb';

interface Pet {
  name: string;
  kind: 'dog' | 'cat' | 'fish';
}

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db();

// Create a collection that validates our union
await db.createCollection<Pet>('pets', {
  validator: { $expr: { $in: ['$kind', ['dog', 'cat', 'fish']] } }
});
Modifiers
@public
constructor
constructor(client: MongoClient, databaseName: string, options?: DbOptions);
Creates a new Db instance.
The database name cannot contain a dot; the server may apply more restrictions when an operation is run.
Parameter client
The MongoClient for the database.
Parameter databaseName
The name of the database this instance represents.
Parameter options
Optional settings for Db construction.
property bsonOptions
readonly bsonOptions: BSONSerializeOptions;
property databaseName
readonly databaseName: string;
property namespace
readonly namespace: string;
property options
readonly options: DbOptions;
property readConcern
readonly readConcern: ReadConcern;
property readPreference
readonly readPreference: ReadPreference;
The current readPreference of the Db. If not explicitly defined for this Db, it will be inherited from the parent MongoClient.
property secondaryOk
readonly secondaryOk: boolean;
Check if a secondary can be used (because the read preference is *not* set to primary)
property SYSTEM_COMMAND_COLLECTION
static SYSTEM_COMMAND_COLLECTION: string;
property SYSTEM_INDEX_COLLECTION
static SYSTEM_INDEX_COLLECTION: string;
property SYSTEM_JS_COLLECTION
static SYSTEM_JS_COLLECTION: string;
property SYSTEM_NAMESPACE_COLLECTION
static SYSTEM_NAMESPACE_COLLECTION: string;
property SYSTEM_PROFILE_COLLECTION
static SYSTEM_PROFILE_COLLECTION: string;
property SYSTEM_USER_COLLECTION
static SYSTEM_USER_COLLECTION: string;
property timeoutMS
readonly timeoutMS: number;
property writeConcern
readonly writeConcern: WriteConcern;
method admin
admin: () => Admin;
Return the Admin db instance
method aggregate
aggregate: <T extends Document = Document>( pipeline?: Document[], options?: AggregateOptions) => AggregationCursor<T>;
Execute an aggregation framework pipeline against the database.
Parameter pipeline
An array of aggregation stages to be executed
Parameter options
Optional settings for the command
method collection
collection: <TSchema extends Document = Document>( name: string, options?: CollectionOptions) => Collection<TSchema>;
Returns a reference to a MongoDB Collection. If it does not exist it will be created implicitly.
Collection namespace validation is performed server-side.
Parameter name
the collection name we wish to access.
Returns
return the new Collection instance
method collections
collections: (options?: ListCollectionsOptions) => Promise<Collection[]>;
Fetch all collections for the current db.
Parameter options
Optional settings for the command
method command
command: (command: Document, options?: RunCommandOptions) => Promise<Document>;
Execute a command
Parameter command
The command to run
Parameter options
Optional settings for the command
Remarks
This command does not inherit options from the MongoClient.
The driver will ensure the following fields are attached to the command sent to the server:
- lsid - sourced from an implicit session or options.session
- $readPreference - defaults to primary or can be configured by options.readPreference
- $db - sourced from the name of this database
If the client has a serverApi setting:
- apiVersion
- apiStrict
- apiDeprecationErrors
When in a transaction:
- readConcern - sourced from readConcern set on the TransactionOptions
- writeConcern - sourced from writeConcern set on the TransactionOptions
Attaching any of the above fields to the command will have no effect as the driver will overwrite the value.
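For illustration, a minimal sketch assuming a db variable is in scope; ping and dbStats are standard server commands:
// Run arbitrary commands against this database.
const pong = await db.command({ ping: 1 });
const stats = await db.command({ dbStats: 1 });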
method createCollection
createCollection: <TSchema extends Document = Document>( name: string, options?: CreateCollectionOptions) => Promise<Collection<TSchema>>;
Create a new collection on a server with the specified options. Use this to create capped collections. More information about command options available at https://www.mongodb.com/docs/manual/reference/command/create/
Collection namespace validation is performed server-side.
Parameter name
The name of the collection to create
Parameter options
Optional settings for the command
method createIndex
createIndex: ( name: string, indexSpec: IndexSpecification, options?: CreateIndexesOptions) => Promise<string>;
Creates an index on the db and collection.
Parameter name
Name of the collection to create the index on.
Parameter indexSpec
Specify the field to index, or an index specification
Parameter options
Optional settings for the command
method dropCollection
dropCollection: ( name: string, options?: DropCollectionOptions) => Promise<boolean>;
Drop a collection from the database, removing it permanently. New accesses will create a new collection.
Parameter name
Name of collection to drop
Parameter options
Optional settings for the command
method dropDatabase
dropDatabase: (options?: DropDatabaseOptions) => Promise<boolean>;
Drop a database, removing it permanently from the server.
Parameter options
Optional settings for the command
method indexInformation
indexInformation: { (name: string, options: IndexInformationOptions & { full: true }): Promise< IndexDescriptionInfo[] >; ( name: string, options: IndexInformationOptions & { full?: false } ): Promise<IndexDescriptionCompact>; (name: string, options: IndexInformationOptions): Promise< any[] | IndexDescriptionCompact >; (name: string): Promise<IndexDescriptionCompact>;};
Retrieves the named collection's index info.
Parameter name
The name of the collection.
Parameter options
Optional settings for the command
method listCollections
listCollections: { ( filter: Document, options: Exclude<ListCollectionsOptions, 'nameOnly'> & { nameOnly: true } ): ListCollectionsCursor<Pick<CollectionInfo, 'name' | 'type'>>; ( filter: Document, options: ListCollectionsOptions & { nameOnly: false } ): ListCollectionsCursor<CollectionInfo>; < T extends CollectionInfo | Pick<CollectionInfo, 'name' | 'type'> = | CollectionInfo | Pick<CollectionInfo, 'name' | 'type'> >( filter?: Document, options?: ListCollectionsOptions ): ListCollectionsCursor<T>;};
List all collections of this database with optional filter
Parameter filter
Query to filter collections by
Parameter options
Optional settings for the command
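For illustration, a minimal sketch assuming a db variable is in scope:
// List only collection names and types (cheaper than full info documents).
const collections = await db.listCollections({}, { nameOnly: true }).toArray();
// e.g. [{ name: 'pets', type: 'collection' }, ...]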
method profilingLevel
profilingLevel: (options?: ProfilingLevelOptions) => Promise<string>;
Retrieve the current profiling Level for MongoDB
Parameter options
Optional settings for the command
method removeUser
removeUser: (username: string, options?: RemoveUserOptions) => Promise<boolean>;
Remove a user from a database
Parameter username
The username to remove
Parameter options
Optional settings for the command
method renameCollection
renameCollection: <TSchema extends Document = Document>( fromCollection: string, toCollection: string, options?: RenameOptions) => Promise<Collection<TSchema>>;
Rename a collection.
Parameter fromCollection
Name of current collection to rename
Parameter toCollection
New name of the collection.
Parameter options
Optional settings for the command
Remarks
This operation does not inherit options from the MongoClient.
method runCursorCommand
runCursorCommand: ( command: Document, options?: RunCursorCommandOptions) => RunCommandCursor;
A low level cursor API providing basic driver functionality:
- ClientSession management
- ReadPreference for server selection
- Running getMores automatically when a local batch is exhausted
Parameter command
The command that will start a cursor on the server.
Parameter options
Configurations for running the command, bson options will apply to getMores
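For illustration, a sketch assuming a db variable is in scope and using listCollections, a command whose results come back in a cursor:
// Run a cursor-returning command and iterate it; getMores are issued automatically.
const cursor = db.runCursorCommand({ listCollections: 1, nameOnly: true });
for await (const doc of cursor) {
  console.log(doc.name);
}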
method setProfilingLevel
setProfilingLevel: ( level: ProfilingLevel, options?: SetProfilingLevelOptions) => Promise<ProfilingLevel>;
Set the current profiling level of MongoDB
Parameter level
The new profiling level (off, slow_only, all).
Parameter options
Optional settings for the command
method stats
stats: (options?: DbStatsOptions) => Promise<Document>;
Get all the db statistics.
Parameter options
Optional settings for the command
method watch
watch: < TSchema extends Document = Document, TChange extends Document = ChangeStreamDocument<TSchema>>( pipeline?: Document[], options?: ChangeStreamOptions) => ChangeStream<TSchema, TChange>;
Create a new Change Stream, watching for new changes (insertions, updates, replacements, deletions, and invalidations) in this database. Will ignore all changes to system collections.
Parameter pipeline
An array of aggregation pipeline stages through which to pass change stream documents. This allows for filtering (using $match) and manipulating the change stream documents.
Parameter options
Optional settings for the command
Remarks
When timeoutMS is configured for a change stream, it will have different behaviour depending on whether the change stream is in iterator mode or emitter mode. In both cases, a change stream will time out if it does not receive a change event within timeoutMS of the last change event.
Note that if a change stream is consistently timing out when watching a collection, database or client that is being changed, then this may be due to the server timing out before it can finish processing the existing oplog. To address this, restart the change stream with a higher timeoutMS.
If the change stream times out the initial aggregate operation to establish the change stream on the server, then the client will close the change stream. If the getMore calls to the server time out, then the change stream will be left open, but will throw a MongoOperationTimeoutError when in iterator mode and emit an error event that returns a MongoOperationTimeoutError in emitter mode.
To determine whether or not the change stream is still open following a timeout, check the ChangeStream.closed getter.
Example 1
In iterator mode, if a next() call throws a timeout error, it will attempt to resume the change stream. The next call can just be retried after this succeeds.
const changeStream = collection.watch([], { timeoutMS: 100 });
try {
  await changeStream.next();
} catch (e) {
  if (e instanceof MongoOperationTimeoutError && !changeStream.closed) {
    await changeStream.next();
  }
  throw e;
}
Example 2
In emitter mode, if the change stream goes timeoutMS without emitting a change event, it will emit an error event that returns a MongoOperationTimeoutError, but will not close the change stream unless the resume attempt fails. There is no need to re-establish change listeners as this will automatically continue emitting change events once the resume attempt completes.
const changeStream = collection.watch([], { timeoutMS: 100 });
changeStream.on('change', console.log);
changeStream.on('error', e => {
  if (e instanceof MongoOperationTimeoutError && !changeStream.closed) {
    // do nothing
  } else {
    changeStream.close();
  }
});
class ExplainableCursor
abstract class ExplainableCursor<TSchema> extends AbstractCursor<TSchema> {}
A base class for any cursors that have explain() methods.
Modifiers
@public
method explain
abstract explain: { (): Promise<Document>; (verbosity: ExplainVerbosityLike | ExplainCommandOptions): Promise<Document>; (options: { timeoutMS?: number }): Promise<Document>; ( verbosity: ExplainVerbosityLike | ExplainCommandOptions, options: { timeoutMS?: number } ): Promise<Document>; ( verbosity?: | ExplainVerbosityLike | ExplainCommandOptions | { timeoutMS?: number }, options?: { timeoutMS?: number } ): Promise<Document>;};
Execute the explain for the cursor
method resolveExplainTimeoutOptions
protected resolveExplainTimeoutOptions: ( verbosity?: | ExplainVerbosityLike | ExplainCommandOptions | { timeoutMS?: number }, options?: { timeoutMS?: number }) => { timeout?: { timeoutMS?: number }; explain?: ExplainVerbosityLike | ExplainCommandOptions;};
class FindCursor
class FindCursor<TSchema = any> extends ExplainableCursor<TSchema> {}
Modifiers
@public
method addQueryModifier
addQueryModifier: ( name: string, value: string | boolean | number | Document) => this;
Add a query modifier to the cursor query
Parameter name
The query modifier (must start with $, such as $orderby etc)
Parameter value
The modifier value.
method allowDiskUse
allowDiskUse: (allow?: boolean) => this;
Allows disk use for blocking sort operations exceeding 100MB memory. (MongoDB 3.2 or higher)
method clone
clone: () => FindCursor<TSchema>;
method collation
collation: (value: CollationOptions) => this;
Set the collation options for the cursor.
Parameter value
The cursor collation options (MongoDB 3.4 or higher); see the 3.4 documentation for available fields.
method comment
comment: (value: string) => this;
Add a comment to the cursor query allowing for tracking the comment in the log.
Parameter value
The comment attached to this query.
method count
count: (options?: CountOptions) => Promise<number>;
Get the count of documents for this cursor
Deprecated
Use collection.estimatedDocumentCount or collection.countDocuments instead.
method explain
explain: { (): Promise<Document>; (verbosity: ExplainVerbosityLike | ExplainCommandOptions): Promise<Document>; (options: { timeoutMS?: number }): Promise<Document>; ( verbosity: ExplainVerbosityLike | ExplainCommandOptions, options: { timeoutMS?: number } ): Promise<Document>;};
Execute the explain for the cursor
method filter
filter: (filter: Document) => this;
Set the cursor query
method hint
hint: (hint: Hint) => this;
Set the cursor hint
Parameter hint
If specified, then the query system will only consider plans using the hinted index.
method limit
limit: (value: number) => this;
Set the limit for the cursor.
Parameter value
The limit for the cursor query.
method map
map: <T>(transform: (doc: TSchema) => T) => FindCursor<T>;
method max
max: (max: Document) => this;
Set the cursor max
Parameter max
Specify a $max value to specify the exclusive upper bound for a specific index in order to constrain the results of find(). The $max specifies the upper bound for all keys of a specific index in order.
method maxAwaitTimeMS
maxAwaitTimeMS: (value: number) => this;
Set a maxAwaitTimeMS on a tailable cursor query to allow customizing the timeout value for the option awaitData (only supported on MongoDB 3.2 or higher, ignored otherwise).
Parameter value
Number of milliseconds to wait before aborting the tailed query.
method maxTimeMS
maxTimeMS: (value: number) => this;
Set a maxTimeMS on the cursor query, allowing for hard timeout limits on queries (Only supported on MongoDB 2.6 or higher)
Parameter value
Number of milliseconds to wait before aborting the query.
method min
min: (min: Document) => this;
Set the cursor min
Parameter min
Specify a $min value to specify the inclusive lower bound for a specific index in order to constrain the results of find(). The $min specifies the lower bound for all keys of a specific index in order.
method project
project: <T extends Document = Document>(value: Document) => FindCursor<T>;
Add a project stage to the aggregation pipeline
Remarks
**Note for Typescript Users:** adding a transform changes the return type of the iteration of this cursor, it **does not** return a new instance of a cursor. This means when calling project, you should always assign the result to a new variable in order to get a correctly typed cursor variable. Take note of the following example:
Example 1
// Best way
const docs: FindCursor<{ a: number }> = cursor.project<{ a: number }>({ _id: 0, a: true });
// Flexible way
const docs: FindCursor<Document> = cursor.project({ _id: 0, a: true });
Example 2
const cursor: FindCursor<{ a: number; b: string }> = coll.find();
const projectCursor = cursor.project<{ a: number }>({ _id: 0, a: true });
const aPropOnlyArray: { a: number }[] = await projectCursor.toArray();

// or always use chaining and save the final cursor
const cursor = coll.find().project<{ a: string }>({
  _id: 0,
  a: { $convert: { input: '$a', to: 'string' } }
});
method returnKey
returnKey: (value: boolean) => this;
Set the cursor returnKey. If set to true, modifies the cursor to only return the index field or fields for the results of the query, rather than documents. If set to true and the query does not use an index to perform the read operation, the returned documents will not contain any fields.
Parameter value
the returnKey value.
method showRecordId
showRecordId: (value: boolean) => this;
Modifies the output of a query by adding a field $recordId to matching documents. $recordId is the internal key which uniquely identifies a document in a collection.
Parameter value
The $showDiskLoc option has now been deprecated and replaced with the showRecordId field. $showDiskLoc will still be accepted for OP_QUERY style find.
method skip
skip: (value: number) => this;
Set the skip for the cursor.
Parameter value
The skip for the cursor query.
method sort
sort: (sort: Sort | string, direction?: SortDirection) => this;
Sets the sort order of the cursor query.
Parameter sort
The key or keys set for the sort.
Parameter direction
The direction of the sorting (1 or -1).
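For illustration, a minimal sketch of chaining the builder methods above, assuming a collection variable is in scope:
// Filter, sort, project, limit, then materialize the cursor.
const names = await collection
  .find({ kind: 'dog' })
  .sort({ name: 1 })
  .project<{ name: string }>({ _id: 0, name: 1 })
  .limit(10)
  .toArray();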
class FindOperators
class FindOperators {}
A builder object that is returned from BulkOperationBase#find. It is used to build a write operation that involves a query filter.
Modifiers
@public
property bulkOperation
bulkOperation: BulkOperationBase;
method arrayFilters
arrayFilters: (arrayFilters: Document[]) => this;
Specifies arrayFilters for UpdateOne or UpdateMany bulk operations.
method collation
collation: (collation: CollationOptions) => this;
Specifies the collation for the query condition.
method delete
delete: () => BulkOperationBase;
Add a delete many operation to the bulk operation
method deleteOne
deleteOne: () => BulkOperationBase;
Add a delete one operation to the bulk operation
method hint
hint: (hint: Hint) => this;
Specifies hint for the bulk operation.
method replaceOne
replaceOne: (replacement: Document) => BulkOperationBase;
Add a replace one operation to the bulk operation
method update
update: (updateDocument: Document | Document[]) => BulkOperationBase;
Add a multiple update operation to the bulk operation
method updateOne
updateOne: (updateDocument: Document | Document[]) => BulkOperationBase;
Add a single update operation to the bulk operation
method upsert
upsert: () => this;
Upsert modifier for update bulk operation, noting that this operation is an upsert.
class GridFSBucket
class GridFSBucket extends TypedEventEmitter<GridFSBucketEvents> {}
Constructor for a streaming GridFS interface
Modifiers
@public
constructor
constructor(db: Db, options?: GridFSBucketOptions);
property INDEX
static readonly INDEX: string;
When the first call to openUploadStream is made, the upload stream will check to see if it needs to create the proper indexes on the chunks and files collections. This event is fired either when 1) it determines that no index creation is necessary, or 2) when it successfully creates the necessary indexes.
method delete
delete: (id: ObjectId, options?: { timeoutMS: number }) => Promise<void>;
Deletes a file with the given id
Parameter id
The id of the file doc
method drop
drop: (options?: { timeoutMS: number }) => Promise<void>;
Removes this bucket's files collection, followed by its chunks collection.
method find
find: ( filter?: Filter<GridFSFile>, options?: FindOptions) => FindCursor<GridFSFile>;
Convenience wrapper around find on the files collection
method openDownloadStream
openDownloadStream: ( id: ObjectId, options?: GridFSBucketReadStreamOptions) => GridFSBucketReadStream;
Returns a readable stream (GridFSBucketReadStream) for streaming file data from GridFS.
method openDownloadStreamByName
openDownloadStreamByName: ( filename: string, options?: GridFSBucketReadStreamOptionsWithRevision) => GridFSBucketReadStream;
Returns a readable stream (GridFSBucketReadStream) for streaming the file with the given name from GridFS. If there are multiple files with the same name, this will stream the most recent file with the given name (as determined by the uploadDate field). You can set the revision option to change this behavior.
method openUploadStream
openUploadStream: ( filename: string, options?: GridFSBucketWriteStreamOptions) => GridFSBucketWriteStream;
Returns a writable stream (GridFSBucketWriteStream) for writing buffers to GridFS. The stream's 'id' property contains the resulting file's id.
Parameter filename
The value of the 'filename' key in the files doc
Parameter options
Optional settings.
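For illustration, a sketch assuming a db variable is in scope; the bucket name and file paths are placeholders:
import { createReadStream, createWriteStream } from 'fs';
import { pipeline } from 'stream/promises';
import { GridFSBucket } from 'mongodb';

const bucket = new GridFSBucket(db, { bucketName: 'uploads' });

// Upload a local file into GridFS, then stream it back out by name.
await pipeline(createReadStream('./report.pdf'), bucket.openUploadStream('report.pdf'));
await pipeline(bucket.openDownloadStreamByName('report.pdf'), createWriteStream('./copy.pdf'));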
method openUploadStreamWithId
openUploadStreamWithId: ( id: ObjectId, filename: string, options?: GridFSBucketWriteStreamOptions) => GridFSBucketWriteStream;
Returns a writable stream (GridFSBucketWriteStream) for writing buffers to GridFS for a custom file id. The stream's 'id' property contains the resulting file's id.
method rename
rename: ( id: ObjectId, filename: string, options?: { timeoutMS: number }) => Promise<void>;
Renames the file with the given _id to the given string
Parameter id
the id of the file to rename
Parameter filename
new name for the file
class GridFSBucketReadStream
class GridFSBucketReadStream extends Readable {}
A readable stream that enables you to read buffers from GridFS.
Do not instantiate this class directly. Use openDownloadStream() instead.
Modifiers
@public
property FILE
static readonly FILE: string;
Fires when the stream loaded the file document corresponding to the provided id.
method abort
abort: () => Promise<void>;
Marks this stream as aborted (will never push another data event) and kills the underlying cursor. Will emit the 'end' event, and then the 'close' event once the cursor is successfully killed.
method end
end: (end?: number) => this;
Sets the 0-based offset in bytes to stop reading at. Throws an error if this stream has entered flowing mode (e.g. if you've already called on('data')).
Parameter end
Offset in bytes to stop reading at
method start
start: (start?: number) => this;
Sets the 0-based offset in bytes to start streaming from. Throws an error if this stream has entered flowing mode (e.g. if you've already called on('data')).
Parameter start
0-based offset in bytes to start streaming from
class GridFSBucketWriteStream
class GridFSBucketWriteStream extends Writable {}
A writable stream that enables you to write buffers to GridFS.
Do not instantiate this class directly. Use openUploadStream() instead.
Modifiers
@public
property bucket
bucket: GridFSBucket;
property bufToStore
bufToStore: Buffer;
Space used to store a chunk currently being inserted
property chunks
chunks: Collection<GridFSChunk>;
A Collection instance where the file's chunks are stored
property chunkSizeBytes
chunkSizeBytes: number;
The number of bytes that each chunk will be limited to
property done
done: boolean;
Indicates the stream is finished uploading
property filename
filename: string;
The name of the file
property files
files: Collection<GridFSFile>;
A Collection instance where the file's GridFSFile document is stored
property gridFSFile
gridFSFile: GridFSFile;
The document containing information about the inserted file. This property is defined _after_ the finish event has been emitted. It will remain null if an error occurs.
Example 1
fs.createReadStream('file.txt')
  .pipe(bucket.openUploadStream('file.txt'))
  .on('finish', function () {
    console.log(this.gridFSFile);
  });
property id
id: ObjectId;
The ObjectId used for the _id field on the GridFSFile document.
property length
length: number;
Accumulates the number of bytes inserted as the stream uploads chunks
property n
n: number;
Accumulates the number of chunks inserted as the stream uploads file contents
property options
options: GridFSBucketWriteStreamOptions;
Options controlling the metadata inserted along with the file
property pos
pos: number;
Tracks the current offset into the buffered bytes being uploaded
property state
state: { streamEnd: boolean; outstandingRequests: number; errored: boolean; aborted: boolean;};
Contains a number of properties indicating the current state of the stream
property writeConcern
writeConcern?: WriteConcern;
The write concern setting to be used with every insert operation
method abort
abort: () => Promise<void>;
Places this write stream into an aborted state (all future writes fail) and deletes all chunks that have already been written.
class HostAddress
class HostAddress {}
Modifiers
@public
constructor
constructor(hostString: string);
property host
host: string;
property isIPv6
isIPv6: boolean;
property port
port: number;
property socketPath
socketPath: string;
method fromHostPort
static fromHostPort: (host: string, port: number) => HostAddress;
method fromSrvRecord
static fromSrvRecord: ({ name, port }: SrvRecord) => HostAddress;
method fromString
static fromString: (this: void, s: string) => HostAddress;
method inspect
inspect: () => string;
method toHostPort
toHostPort: () => { host: string; port: number };
method toString
toString: () => string;
class ListCollectionsCursor
class ListCollectionsCursor< T extends Pick<CollectionInfo, 'name' | 'type'> | CollectionInfo = | Pick<CollectionInfo, 'name' | 'type'> | CollectionInfo> extends AbstractCursor<T> {}
Modifiers
@public
constructor
constructor(db: Db, filter: Document, options?: ListCollectionsOptions);
property filter
filter: Document;
property options
options?: ListCollectionsOptions;
property parent
parent: Db;
method clone
clone: () => ListCollectionsCursor<T>;
class ListIndexesCursor
class ListIndexesCursor extends AbstractCursor {}
Modifiers
@public
constructor
constructor(collection: Collection<Document>, options?: AbstractCursorOptions);
property options
options?: AbstractCursorOptions;
property parent
parent: Collection<Document>;
method clone
clone: () => ListIndexesCursor;
class ListSearchIndexesCursor
class ListSearchIndexesCursor extends AggregationCursor<{ name: string;}> {}
Modifiers
@public
class MongoAPIError
class MongoAPIError extends MongoDriverError {}
An error generated when the driver API is used incorrectly
Modifiers
@public
constructor
constructor(message: string, options?: { cause?: Error });
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoAWSError
class MongoAWSError extends MongoRuntimeError {}
An error generated when the user attempts to authenticate via AWS, but fails.
Error
Modifiers
@public
constructor
constructor(message: string, options?: { cause?: Error });
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoAzureError
class MongoAzureError extends MongoOIDCError {}
An error generated when the user attempts to authenticate via Azure, but fails.
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoBatchReExecutionError
class MongoBatchReExecutionError extends MongoAPIError {}
An error generated when a batch command is re-executed after one of the commands in the batch has failed
Error
Modifiers
@public
constructor
constructor(message?: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoBulkWriteError
class MongoBulkWriteError extends MongoServerError {}
An error indicating an unsuccessful Bulk Write
Error
Modifiers
@public
constructor
constructor( error: | WriteConcernError | { message: string; code: number; writeErrors?: WriteError[] } | AnyError, result: BulkWriteResult);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property deletedCount
readonly deletedCount: number;
Number of documents deleted.
property err
err?: WriteConcernError;
property insertedCount
readonly insertedCount: number;
Number of documents inserted.
property insertedIds
readonly insertedIds: { [key: number]: any };
Inserted documents' generated ids; the hash key is the index of the originating operation.
property matchedCount
readonly matchedCount: number;
Number of documents matched for update.
property modifiedCount
readonly modifiedCount: number;
Number of documents modified.
property name
readonly name: string;
property result
result: BulkWriteResult;
property upsertedCount
readonly upsertedCount: number;
Number of documents upserted.
property upsertedIds
readonly upsertedIds: { [key: number]: any };
Upserted documents' generated ids; the hash key is the index of the originating operation.
property writeErrors
writeErrors: OneOrMore<WriteError>;
class MongoChangeStreamError
class MongoChangeStreamError extends MongoRuntimeError {}
An error generated when a ChangeStream operation fails to execute.
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoClient
class MongoClient extends TypedEventEmitter<MongoClientEvents> implements AsyncDisposable_2 {}
The **MongoClient** class allows for making connections to MongoDB.
Remarks
The programmatically provided options take precedence over the URI options.
Example 1
import { MongoClient } from 'mongodb';

// Enable command monitoring for debugging
const client = new MongoClient('mongodb://localhost:27017', { monitorCommands: true });
client.on('commandStarted', started => console.log(started));

const pets = client.db().collection('pets');
await pets.insertOne({ name: 'spot', kind: 'dog' });
Modifiers
@public
constructor
constructor(url: string, options?: MongoClientOptions);
property bsonOptions
readonly bsonOptions: BSONSerializeOptions;
property options
readonly options: Readonly<MongoOptions>;
See Also
MongoOptions
property readConcern
readonly readConcern: ReadConcern;
property readPreference
readonly readPreference: ReadPreference;
property serverApi
readonly serverApi: Readonly<ServerApi>;
property timeoutMS
readonly timeoutMS: number;
property writeConcern
readonly writeConcern: WriteConcern;
method bulkWrite
bulkWrite: < SchemaMap extends Record<string, Document> = Record<string, Document>>( models: ReadonlyArray<ClientBulkWriteModel<SchemaMap>>, options?: ClientBulkWriteOptions) => Promise<ClientBulkWriteResult>;
Executes a client bulk write operation, available on server 8.0+.
Parameter models
The client bulk write models.
Parameter options
The client bulk write options.
Returns
A ClientBulkWriteResult for acknowledged writes and ok: 1 for unacknowledged writes.
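For illustration, a sketch of mixed-namespace writes in one client-level bulk operation (server 8.0+); the database and collection names are placeholders and the model shape follows the ClientBulkWriteModel types as I understand them:
const result = await client.bulkWrite([
  {
    namespace: 'app.authors',
    name: 'insertOne',
    document: { name: 'Octavia Butler' }
  },
  {
    namespace: 'app.books',
    name: 'updateOne',
    filter: { title: 'Kindred' },
    update: { $set: { inStock: true } }
  }
]);
console.log(result.insertedCount, result.modifiedCount);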
method close
close: (force?: boolean) => Promise<void>;
Cleans up client-side resources used by the MongoClient. This includes:
- Closes all open, unused connections (see note).
- Ends all in-use sessions with ClientSession.endSession().
- Ends all unused sessions server-side.
- Cleans up any resources being used for auto encryption if auto encryption is enabled.
Parameter force
Force close, emitting no events
Remarks
Any in-progress operations are not killed and any connections used by in progress operations will be cleaned up lazily as operations finish.
method connect
static connect: ( url: string, options?: MongoClientOptions) => Promise<MongoClient>;
Connect to MongoDB using a url
Remarks
Calling connect is optional since the first operation you perform will call connect if it's needed.
timeoutMS will bound the time any operation can take before throwing a timeout error. However, when the operation being run is automatically connecting your MongoClient, the timeoutMS will not apply to the time taken to connect the MongoClient. This means the time to set up the MongoClient does not count against timeoutMS. If you are using timeoutMS we recommend connecting your client explicitly in advance of any operation to avoid this inconsistent execution time.
See Also
docs.mongodb.org/manual/reference/connection-string/
Connect to MongoDB using a url
Remarks
The programmatically provided options take precedence over the URI options.
See Also
https://www.mongodb.com/docs/manual/reference/connection-string/
method db
db: (dbName?: string, options?: DbOptions) => Db;
Create a new Db instance sharing the current socket connections.
Parameter dbName
The name of the database we want to use. If not provided, use database name from connection string.
Parameter options
Optional settings for Db construction
method startSession
startSession: (options?: ClientSessionOptions) => ClientSession;
Creates a new ClientSession. When using the returned session in an operation a corresponding ServerSession will be created.
Remarks
A ClientSession instance may only be passed to operations being performed on the same MongoClient it was started from.
method watch
watch: < TSchema extends Document = Document, TChange extends Document = ChangeStreamDocument<TSchema>>( pipeline?: Document[], options?: ChangeStreamOptions) => ChangeStream<TSchema, TChange>;
Create a new Change Stream, watching for new changes (insertions, updates, replacements, deletions, and invalidations) in this cluster. Will ignore all changes to system collections, as well as the local, admin, and config databases.
Parameter pipeline
An array of aggregation pipeline stages through which to pass change stream documents. This allows for filtering (using $match) and manipulating the change stream documents.
Parameter options
Optional settings for the command
Remarks
When timeoutMS is configured for a change stream, it will have different behaviour depending on whether the change stream is in iterator mode or emitter mode. In both cases, a change stream will time out if it does not receive a change event within timeoutMS of the last change event.
Note that if a change stream is consistently timing out when watching a collection, database or client that is being changed, then this may be due to the server timing out before it can finish processing the existing oplog. To address this, restart the change stream with a higher timeoutMS.
If the change stream times out the initial aggregate operation to establish the change stream on the server, then the client will close the change stream. If the getMore calls to the server time out, then the change stream will be left open, but will throw a MongoOperationTimeoutError when in iterator mode and emit an error event that returns a MongoOperationTimeoutError in emitter mode.
To determine whether or not the change stream is still open following a timeout, check the ChangeStream.closed getter.
Example 1
In iterator mode, if a next() call throws a timeout error, it will attempt to resume the change stream. The next call can just be retried after this succeeds.
const changeStream = collection.watch([], { timeoutMS: 100 });
try {
  await changeStream.next();
} catch (e) {
  if (e instanceof MongoOperationTimeoutError && !changeStream.closed) {
    await changeStream.next();
  }
  throw e;
}
Example 2
In emitter mode, if the change stream goes timeoutMS without emitting a change event, it will emit an error event that returns a MongoOperationTimeoutError, but will not close the change stream unless the resume attempt fails. There is no need to re-establish change listeners as this will automatically continue emitting change events once the resume attempt completes.
const changeStream = collection.watch([], { timeoutMS: 100 });
changeStream.on('change', console.log);
changeStream.on('error', e => {
  if (e instanceof MongoOperationTimeoutError && !changeStream.closed) {
    // do nothing
  } else {
    changeStream.close();
  }
});
method withSession
withSession: { <T = any>(executor: WithSessionCallback<T>): Promise<T>; <T = any>( options: ClientSessionOptions, executor: WithSessionCallback<T> ): Promise<T>;};
A convenience method for creating and handling the clean up of a ClientSession. The session will always be ended when the executor finishes.
Parameter executor
An executor function; all operations using the provided session must be invoked within it
Parameter options
optional settings for the session
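A minimal sketch of withSession, assuming a connected client; the 'app.users' namespace is a placeholder. The executor receives the session, and the session is cleaned up automatically when the executor finishes:
const count = await client.withSession(async session =>
  client.db('app').collection('users').countDocuments({}, { session })
);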
class MongoClientBulkWriteCursorError
class MongoClientBulkWriteCursorError extends MongoRuntimeError {}
An error indicating that an error occurred when processing bulk write results.
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoClientBulkWriteError
class MongoClientBulkWriteError extends MongoServerError {}
An error indicating that an error occurred when executing the bulk write.
Error
Modifiers
@public
constructor
constructor(message: ErrorDescription);
Initialize the client bulk write error.
Parameter message
The error message.
property name
readonly name: string;
property partialResult
partialResult?: ClientBulkWriteResult;
The results of any successful operations that were performed before the error was encountered.
property writeConcernErrors
writeConcernErrors: Document[];
Write concern errors that occurred while executing the bulk write. This list may have multiple items if more than one server command was required to execute the bulk write.
property writeErrors
writeErrors: Map<number, ClientBulkWriteError>;
Errors that occurred during the execution of individual write operations. This map will contain at most one entry if the bulk write was ordered.
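A sketch of inspecting this error when MongoClient.bulkWrite fails, assuming a connected client and a server that supports client bulk writes; the 'app.users' namespace, the duplicate _id used to force a write error, and the write model shape shown are illustrative assumptions:
try {
  await client.bulkWrite([
    { namespace: 'app.users', name: 'insertOne', document: { _id: 1 } },
    { namespace: 'app.users', name: 'insertOne', document: { _id: 1 } } // duplicate key
  ]);
} catch (error) {
  if (error instanceof MongoClientBulkWriteError) {
    for (const [index, writeError] of error.writeErrors) {
      console.log(`write at index ${index} failed:`, writeError);
    }
    console.log('successful portion:', error.partialResult);
  } else {
    throw error;
  }
}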
class MongoClientBulkWriteExecutionError
class MongoClientBulkWriteExecutionError extends MongoRuntimeError {}
An error indicating that an error occurred on the client when executing a client bulk write.
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoCompatibilityError
class MongoCompatibilityError extends MongoAPIError {}
An error generated when a feature that is not enabled or allowed for the current server configuration is used
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoCredentials
class MongoCredentials {}
A representation of the credentials used by MongoDB
Modifiers
@public
constructor
constructor(options: MongoCredentialsOptions);
property mechanism
readonly mechanism: AuthMechanism;
The method used to authenticate
property mechanismProperties
readonly mechanismProperties: AuthMechanismProperties;
Special properties used by some types of auth mechanisms
property password
readonly password: string;
The password used for authentication
property source
readonly source: string;
The database that the user should authenticate against
property username
readonly username: string;
The username used for authentication
method equals
equals: (other: MongoCredentials) => boolean;
Determines if two MongoCredentials objects are equivalent
method merge
static merge: ( creds: MongoCredentials | undefined, options: Partial<MongoCredentialsOptions>) => MongoCredentials;
method resolveAuthMechanism
resolveAuthMechanism: (hello: Document | null) => MongoCredentials;
If the authentication mechanism is set to "default", resolves the authMechanism based on the server version and the SASL mechanisms supported by the server.
Parameter hello
A hello response from the server
method validate
validate: () => void;
class MongoCryptAzureKMSRequestError
class MongoCryptAzureKMSRequestError extends MongoCryptError {}
An error indicating that mongodb-client-encryption failed to auto-refresh Azure KMS credentials.
Modifiers
@public
constructor
constructor(message: string, body?: Document);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property body
body?: Document;
The body of the http response that failed, if present.
property name
readonly name: string;
class MongoCryptCreateDataKeyError
class MongoCryptCreateDataKeyError extends MongoCryptError {}
An error indicating that ClientEncryption.createEncryptedCollection() failed to create data keys
Modifiers
@public
constructor
constructor(encryptedFields: Document, { cause }: { cause: Error });
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property encryptedFields
encryptedFields: Document;
property name
readonly name: string;
class MongoCryptCreateEncryptedCollectionError
class MongoCryptCreateEncryptedCollectionError extends MongoCryptError {}
An error indicating that ClientEncryption.createEncryptedCollection() failed to create a collection
Modifiers
@public
constructor
constructor(encryptedFields: Document, { cause }: { cause: Error });
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property encryptedFields
encryptedFields: Document;
property name
readonly name: string;
class MongoCryptError
class MongoCryptError extends MongoError {}
An error indicating that something went wrong specifically with MongoDB Client Encryption
Modifiers
@public
constructor
constructor(message: string, options?: { cause?: Error });
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoCryptInvalidArgumentError
class MongoCryptInvalidArgumentError extends MongoCryptError {}
An error indicating an invalid argument was provided to an encryption API.
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoCryptKMSRequestNetworkTimeoutError
class MongoCryptKMSRequestNetworkTimeoutError extends MongoCryptError {}
Modifiers
@public
property name
readonly name: string;
class MongoCursorExhaustedError
class MongoCursorExhaustedError extends MongoAPIError {}
An error thrown when an attempt is made to read from a cursor that has been exhausted
Error
Modifiers
@public
constructor
constructor(message?: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoCursorInUseError
class MongoCursorInUseError extends MongoAPIError {}
An error thrown when the user attempts to add options to a cursor that has already been initialized
Error
Modifiers
@public
constructor
constructor(message?: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoDBCollectionNamespace
class MongoDBCollectionNamespace extends MongoDBNamespace {}
A class representing a collection's namespace. This class enforces (through TypeScript) that the collection portion of the namespace is defined and should only be used in scenarios where this can be guaranteed.
Modifiers
@public
constructor
constructor(db: string, collection: string);
property collection
collection: string;
method fromString
static fromString: (namespace?: string) => MongoDBCollectionNamespace;
class MongoDBNamespace
class MongoDBNamespace {}
Modifiers
@public
constructor
constructor(db: string, collection?: string);
Create a namespace object
Parameter db
database name
Parameter collection
collection name
property collection
collection?: string;
property db
db: string;
method fromString
static fromString: (namespace?: string) => MongoDBNamespace;
method toString
toString: () => string;
method withCollection
withCollection: (collection: string) => MongoDBCollectionNamespace;
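A short sketch of composing namespaces; the 'app' and 'users' names are placeholders:
const dbNs = new MongoDBNamespace('app');
const collNs = dbNs.withCollection('users');
console.log(collNs.toString()); // expected to print 'app.users'
console.log(MongoDBCollectionNamespace.fromString('app.users').collection); // expected to print 'users'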
class MongoDecompressionError
class MongoDecompressionError extends MongoRuntimeError {}
An error generated when the driver fails to decompress data received from the server.
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoDriverError
class MongoDriverError extends MongoError {}
An error generated by the driver
Error
Modifiers
@public
constructor
constructor(message: string, options?: { cause?: Error });
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoError
class MongoError extends Error {}
Error
Modifiers
@public
constructor
constructor(message: string, options?: { cause?: Error });
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property cause
cause?: Error;
property code
code?: string | number;
This is a number in MongoServerError and a string in MongoDriverError
property connectionGeneration
connectionGeneration?: number;
property errmsg
readonly errmsg: string;
Legacy name for server error responses
property errorLabels
readonly errorLabels: string[];
property name
readonly name: string;
property topologyVersion
topologyVersion?: TopologyVersion;
method addErrorLabel
addErrorLabel: (label: string) => void;
method hasErrorLabel
hasErrorLabel: (label: string) => boolean;
Checks the error to see if it has an error label
Parameter label
The error label to check for
Returns
returns true if the error has the provided error label
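For illustration, a sketch of checking an error label after a transactional write; 'TransientTransactionError' is a label the server attaches to retryable transaction failures, and the collection and session variables are assumed to already exist:
try {
  await collection.insertOne({ x: 1 }, { session });
} catch (error) {
  if (error instanceof MongoError && error.hasErrorLabel('TransientTransactionError')) {
    // the whole transaction can safely be retried
  }
  throw error;
}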
class MongoExpiredSessionError
class MongoExpiredSessionError extends MongoAPIError {}
An error generated when the user attempts to operate on a session that has expired or has been closed.
Error
Modifiers
@public
constructor
constructor(message?: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoGCPError
class MongoGCPError extends MongoOIDCError {}
An error generated when the user attempts to authenticate via GCP, but fails.
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoGridFSChunkError
class MongoGridFSChunkError extends MongoRuntimeError {}
An error generated when a malformed or invalid chunk is encountered when reading from a GridFSStream.
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoGridFSStreamError
class MongoGridFSStreamError extends MongoRuntimeError {}
An error generated when a GridFSStream operation fails to execute.
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoInvalidArgumentError
class MongoInvalidArgumentError extends MongoAPIError {}
An error generated when the user supplies malformed or unexpected arguments or when a required argument or field is not provided.
Error
Modifiers
@public
constructor
constructor(message: string, options?: { cause?: Error });
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoKerberosError
class MongoKerberosError extends MongoRuntimeError {}
An error generated when the user attempts to authenticate via Kerberos, but fails to connect to the Kerberos client.
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoMissingCredentialsError
class MongoMissingCredentialsError extends MongoAPIError {}
An error generated when the user fails to provide authentication credentials before attempting to connect to a mongo server instance.
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoMissingDependencyError
class MongoMissingDependencyError extends MongoAPIError {}
An error generated when a required module or dependency is not present in the local environment
Error
Modifiers
@public
constructor
constructor(message: string, options: { cause: Error; dependencyName: string });
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property cause
cause: Error;
Remarks
This property is assigned in the Error constructor.
property dependencyName
dependencyName: string;
property name
readonly name: string;
class MongoNetworkError
class MongoNetworkError extends MongoError {}
An error indicating an issue with the network, including TCP errors and timeouts.
Error
Modifiers
@public
constructor
constructor(message: string, options?: MongoNetworkErrorOptions);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoNetworkTimeoutError
class MongoNetworkTimeoutError extends MongoNetworkError {}
An error indicating a network timeout occurred
Error
Modifiers
@public
constructor
constructor(message: string, options?: MongoNetworkErrorOptions);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoNotConnectedError
class MongoNotConnectedError extends MongoAPIError {}
An error thrown when the user attempts to operate on a database or collection through a MongoClient that has not yet successfully called the "connect" method
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoOIDCError
class MongoOIDCError extends MongoRuntimeError {}
An error generated when the user attempts to authenticate via OIDC callbacks, but fails.
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoOperationTimeoutError
class MongoOperationTimeoutError extends MongoDriverError {}
Error
The MongoOperationTimeoutError class represents an error that occurs when an operation could not be completed within the specified timeoutMS. It is generated by the driver in support of the "client side operation timeout" feature so inherits from MongoDriverError. When timeoutMS is enabled, MongoServerErrors relating to MaxTimeExpired errors will be converted to MongoOperationTimeoutError.
Example 1
try {
  await blogs.insertOne(blogPost, { timeoutMS: 60_000 });
} catch (error) {
  if (error instanceof MongoOperationTimeoutError) {
    console.log(`Oh no! writer's block!`, error);
  }
}
Modifiers
@public
property name
readonly name: string;
class MongoParseError
class MongoParseError extends MongoDriverError {}
An error used when attempting to parse a value (like a connection string)
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoRuntimeError
class MongoRuntimeError extends MongoDriverError {}
An error generated when the driver encounters unexpected input or reaches an unexpected/invalid internal state.
Modifiers
@public
constructor
constructor(message: string, options?: { cause?: Error });
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoServerClosedError
class MongoServerClosedError extends MongoAPIError {}
An error generated when an attempt is made to operate on a closed/closing server.
Error
Modifiers
@public
constructor
constructor(message?: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoServerError
class MongoServerError extends MongoError {}
An error coming from the mongo server
Error
Modifiers
@public
constructor
constructor(message: ErrorDescription);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property codeName
codeName?: string;
property errInfo
errInfo?: Document;
property errorResponse
errorResponse: ErrorDescription;
Raw error result document returned by server.
property name
readonly name: string;
property ok
ok?: number;
property writeConcernError
writeConcernError?: Document;
class MongoServerSelectionError
class MongoServerSelectionError extends MongoSystemError {}
An error signifying a client-side server selection error
Error
Modifiers
@public
constructor
constructor(message: string, reason: TopologyDescription);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoStalePrimaryError
class MongoStalePrimaryError extends MongoRuntimeError {}
An error generated when a primary server is marked stale, never directly thrown
Error
Modifiers
@public
constructor
constructor( serverDescription: ServerDescription, maxSetVersion: number, maxElectionId: any, options?: { cause?: Error });
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoSystemError
class MongoSystemError extends MongoError {}
An error signifying a general system issue
Error
Modifiers
@public
constructor
constructor(message: string, reason: TopologyDescription);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
property reason
reason?: TopologyDescription;
An optional reason context, such as an error saved during flow of monitoring and selecting servers
class MongoTailableCursorError
class MongoTailableCursorError extends MongoAPIError {}
An error thrown when the user calls a function or method not supported on a tailable cursor
Error
Modifiers
@public
constructor
constructor(message?: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoTopologyClosedError
class MongoTopologyClosedError extends MongoAPIError {}
An error generated when an attempt is made to operate on a dropped, or otherwise unavailable, database.
Error
Modifiers
@public
constructor
constructor(message?: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoTransactionError
class MongoTransactionError extends MongoAPIError {}
An error generated when the user makes a mistake in the usage of transactions. (e.g. attempting to commit a transaction with a readPreference other than primary)
Error
Modifiers
@public
constructor
constructor(message: string);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoUnexpectedServerResponseError
class MongoUnexpectedServerResponseError extends MongoRuntimeError {}
An error generated when a **parsable** unexpected response comes from the server. This is generally an error where the driver is in a state expecting a certain behavior to occur in the next message from MongoDB but it receives something else. This error **does not** represent an issue with wire message formatting.
#### Example
When an operation fails, it is the driver's job to retry it. It must perform server selection again to make sure that it attempts the operation against a server in a good state. If server selection returns a server that does not support retryable operations, this error is used. This scenario is unlikely, as retryable support would also have been determined on the first attempt, but it is possible the state change could report a selectable server that does not support retries.
Error
Modifiers
@public
constructor
constructor(message: string, options?: { cause?: Error });
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
class MongoWriteConcernError
class MongoWriteConcernError extends MongoServerError {}
An error thrown when the server reports a writeConcernError
Error
Modifiers
@public
constructor
constructor(result: WriteConcernErrorResult);
**Do not use this constructor!**
Meant for internal use only.
Remarks
This class is only meant to be constructed within the driver. This constructor is not subject to semantic versioning compatibility guarantees and may change at any time.
Modifiers
@public
property name
readonly name: string;
property result
result: Document;
The result document
class OrderedBulkOperation
class OrderedBulkOperation extends BulkOperationBase {}
Modifiers
@public
method addToOperationsList
addToOperationsList: ( batchType: BatchType, document: Document | UpdateStatement | DeleteStatement) => this;
class ReadConcern
class ReadConcern {}
The MongoDB ReadConcern, which allows for control of the consistency and isolation properties of the data read from replica sets and replica set shards.
See Also
https://www.mongodb.com/docs/manual/reference/read-concern/index.html
Modifiers
@public
constructor
constructor(level: ReadConcernLevel);
Constructs a ReadConcern from the read concern level.
property AVAILABLE
static readonly AVAILABLE: string;
property level
level: string;
property LINEARIZABLE
static readonly LINEARIZABLE: string;
property MAJORITY
static readonly MAJORITY: string;
property SNAPSHOT
static readonly SNAPSHOT: string;
method fromOptions
static fromOptions: (options?: { readConcern?: ReadConcernLike; level?: ReadConcernLevel;}) => ReadConcern | undefined;
Construct a ReadConcern given an options object.
Parameter options
The options object from which to extract the write concern.
method toJSON
toJSON: () => Document;
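A brief sketch of applying a read concern, assuming a connected client; the 'app.users' namespace is a placeholder:
// Set a read concern for every operation on this collection handle
const users = client.db('app').collection('users', { readConcern: { level: 'majority' } });

// Or build one explicitly from options
const rc = ReadConcern.fromOptions({ level: 'majority' });
console.log(rc?.level); // expected to print 'majority'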
class ReadPreference
class ReadPreference {}
The **ReadPreference** class is a class that represents a MongoDB ReadPreference and is used to construct connections.
See Also
https://www.mongodb.com/docs/manual/core/read-preference/
Modifiers
@public
constructor
constructor( mode: ReadPreferenceMode, tags?: TagSet[], options?: ReadPreferenceOptions);
Parameter mode
A string describing the read preference mode (primary|primaryPreferred|secondary|secondaryPreferred|nearest)
Parameter tags
A tag set used to target reads to members with the specified tag(s). tagSet is not available if using read preference mode primary.
Parameter options
Additional read preference options
property hedge
hedge?: HedgeOptions;
property maxStalenessSeconds
maxStalenessSeconds?: number;
property minWireVersion
minWireVersion?: number;
property mode
mode: ReadPreferenceMode;
property nearest
static nearest: ReadPreference;
property NEAREST
static NEAREST: string;
property preference
readonly preference: ReadPreferenceMode;
property primary
static primary: ReadPreference;
property PRIMARY
static PRIMARY: string;
property PRIMARY_PREFERRED
static PRIMARY_PREFERRED: string;
property primaryPreferred
static primaryPreferred: ReadPreference;
property secondary
static secondary: ReadPreference;
property SECONDARY
static SECONDARY: string;
property SECONDARY_PREFERRED
static SECONDARY_PREFERRED: string;
property secondaryPreferred
static secondaryPreferred: ReadPreference;
property tags
tags?: TagSet[];
method equals
equals: (readPreference: ReadPreference) => boolean;
Check if the two ReadPreferences are equivalent
Parameter readPreference
The read preference with which to check equality
method fromOptions
static fromOptions: ( options?: ReadPreferenceFromOptions) => ReadPreference | undefined;
Construct a ReadPreference given an options object.
Parameter options
The options object from which to extract the read preference.
method fromString
static fromString: (mode: string) => ReadPreference;
method isValid
static isValid: (mode: string) => boolean;
Validate if a mode is legal
Parameter mode
The string representing the read preference mode.
method secondaryOk
secondaryOk: () => boolean;
Indicates that this readPreference needs the "SecondaryOk" bit when sent over the wire
See Also
https://www.mongodb.com/docs/manual/reference/mongodb-wire-protocol/#op-query
method toJSON
toJSON: () => Document;
Return JSON representation
method translate
static translate: ( options: ReadPreferenceLikeOptions) => ReadPreferenceLikeOptions;
Replaces options.readPreference with a ReadPreference instance
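A sketch of constructing a read preference with a tag set, assuming a replica set whose members carry a region tag; the tag values and the 'app.users' namespace are placeholders:
const rp = new ReadPreference('secondaryPreferred', [{ region: 'east' }, {}], {
  maxStalenessSeconds: 120
});
const users = client.db('app').collection('users', { readPreference: rp });
const doc = await users.findOne({ active: true });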
class RunCommandCursor
class RunCommandCursor extends AbstractCursor {}
Modifiers
@public
property command
readonly command: Readonly<Record<string, any>>;
property getMoreOptions
readonly getMoreOptions: { comment?: any; maxAwaitTimeMS?: number; batchSize?: number;};
method addCursorFlag
addCursorFlag: (_: string, __: boolean) => never;
Unsupported for RunCommandCursor: various cursor flags must be configured directly on command document
method batchSize
batchSize: (_: number) => never;
Unsupported for RunCommandCursor: batchSize must be configured directly on command document
method clone
clone: () => never;
Unsupported for RunCommandCursor
method maxTimeMS
maxTimeMS: (_: number) => never;
Unsupported for RunCommandCursor: maxTimeMS must be configured directly on command document
method setBatchSize
setBatchSize: (batchSize: number) => this;
Controls the getMore.batchSize field
Parameter batchSize
the number of documents to return in the nextBatch
method setComment
setComment: (comment: any) => this;
Controls the getMore.comment field
Parameter comment
any BSON value
method setMaxTimeMS
setMaxTimeMS: (maxTimeMS: number) => this;
Controls the getMore.maxTimeMS field. Only valid when the cursor is tailable await
Parameter maxTimeMS
the number of milliseconds to wait for new data
method withReadConcern
withReadConcern: (_: ReadConcernLike) => never;
Unsupported for RunCommandCursor: readConcern must be configured directly on command document
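A sketch of obtaining a RunCommandCursor via Db.runCursorCommand and tuning its getMore behavior, assuming a Db handle db; the command document targets a placeholder 'users' collection:
const cursor = db.runCursorCommand({ find: 'users', filter: { active: true }, batchSize: 50 });
cursor.setBatchSize(50);            // batch size for subsequent getMore calls
cursor.setComment('nightly-audit'); // comment attached to getMore calls
for await (const doc of cursor) {
  console.log(doc);
}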
class ServerCapabilities
class ServerCapabilities {}
Modifiers
@public
constructor
constructor(hello: Document);
property commandsTakeCollation
readonly commandsTakeCollation: boolean;
property commandsTakeWriteConcern
readonly commandsTakeWriteConcern: boolean;
property hasAggregationCursor
readonly hasAggregationCursor: boolean;
property hasAuthCommands
readonly hasAuthCommands: boolean;
property hasListCollectionsCommand
readonly hasListCollectionsCommand: boolean;
property hasListIndexesCommand
readonly hasListIndexesCommand: boolean;
property hasTextSearch
readonly hasTextSearch: boolean;
property hasWriteCommands
readonly hasWriteCommands: boolean;
property maxWireVersion
maxWireVersion: number;
property minWireVersion
minWireVersion: number;
property supportsSnapshotReads
readonly supportsSnapshotReads: boolean;
class ServerClosedEvent
class ServerClosedEvent {}
Emitted when server is closed.
Event
Modifiers
@public
property address
address: string;
The address (host/port pair) of the server
property topologyId
topologyId: number;
A unique identifier for the topology
class ServerDescription
class ServerDescription {}
The client's view of a single server, based on the most recent hello outcome.
Internal type, not meant to be directly instantiated
Modifiers
@public
property $clusterTime
$clusterTime?: ClusterTime;
property address
address: string;
property allHosts
readonly allHosts: string[];
property arbiters
arbiters: string[];
property electionId
electionId: any;
property error
error: MongoError;
property host
readonly host: string;
property hostAddress
readonly hostAddress: HostAddress;
property hosts
hosts: string[];
property iscryptd
iscryptd: boolean;
Indicates server is a mongocryptd instance.
property isDataBearing
readonly isDataBearing: boolean;
Is this server data bearing
property isReadable
readonly isReadable: boolean;
Is this server available for reads
property isWritable
readonly isWritable: boolean;
Is this server available for writes
property lastUpdateTime
lastUpdateTime: number;
property lastWriteDate
lastWriteDate: number;
property logicalSessionTimeoutMinutes
logicalSessionTimeoutMinutes: number;
property maxBsonObjectSize
maxBsonObjectSize: number;
The max bson object size.
property maxMessageSizeBytes
maxMessageSizeBytes: number;
The max message size in bytes for the server.
property maxWireVersion
maxWireVersion: number;
property maxWriteBatchSize
maxWriteBatchSize: number;
The max number of writes in a bulk write command.
property me
me: string;
property minRoundTripTime
minRoundTripTime: number;
The minimum measurement of the last 10 measurements of roundTripTime that have been collected
property minWireVersion
minWireVersion: number;
property passives
passives: string[];
property port
readonly port: number;
property primary
primary: string;
property roundTripTime
roundTripTime: number;
property setName
setName: string;
property setVersion
setVersion: number;
property tags
tags: TagSet;
property topologyVersion
topologyVersion: TopologyVersion;
property type
type: ServerType;
method equals
equals: (other?: ServerDescription | null) => boolean;
Determines if another ServerDescription is equal to this one per the rules defined in the SDAM specification.
See Also
https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.md
class ServerDescriptionChangedEvent
class ServerDescriptionChangedEvent {}
Emitted when server description changes, but does NOT include changes to the RTT.
Event
Modifiers
@public
property address
address: string;
The address (host/port pair) of the server
property name
name: string;
property newDescription
newDescription: ServerDescription;
The new server description
property previousDescription
previousDescription: ServerDescription;
The previous server description
property topologyId
topologyId: number;
A unique identifier for the topology
class ServerHeartbeatFailedEvent
class ServerHeartbeatFailedEvent {}
Emitted when the server monitor’s hello fails, either with an “ok: 0” or a socket exception.
Event
Modifiers
@public
property awaited
awaited: boolean;
Is true when using the streaming protocol
property connectionId
connectionId: string;
The connection id for the command
property duration
duration: number;
The execution time of the event in ms
property failure
failure: Error;
The command failure
class ServerHeartbeatStartedEvent
class ServerHeartbeatStartedEvent {}
Emitted when the server monitor’s hello command is started - immediately before the hello command is serialized into raw BSON and written to the socket.
Event
Modifiers
@public
property awaited
awaited: boolean;
Is true when using the streaming protocol
property connectionId
connectionId: string;
The connection id for the command
class ServerHeartbeatSucceededEvent
class ServerHeartbeatSucceededEvent {}
Emitted when the server monitor’s hello succeeds.
Event
Modifiers
@public
property awaited
awaited: boolean;
Is true when using the streaming protocol
property connectionId
connectionId: string;
The connection id for the command
property duration
duration: number;
The execution time of the event in ms
property reply
reply: Document;
The command reply
class ServerOpeningEvent
class ServerOpeningEvent {}
Emitted when server is initialized.
Event
Modifiers
@public
property address
address: string;
The address (host/port pair) of the server
property topologyId
topologyId: number;
A unique identifier for the topology
class ServerSession
class ServerSession {}
Reflects the existence of a session on the server. Can be reused by the session pool. WARNING: not meant to be instantiated directly. For internal use only.
Modifiers
@public
property id
id: ServerSessionId;
property isDirty
isDirty: boolean;
property lastUse
lastUse: number;
property txnNumber
txnNumber: number;
method hasTimedOut
hasTimedOut: (sessionTimeoutMinutes: number) => boolean;
Determines if the server session has timed out.
Parameter sessionTimeoutMinutes
The server's "logicalSessionTimeoutMinutes"
class StreamDescription
class StreamDescription {}
Modifiers
@public
constructor
constructor(address: string, options?: StreamDescriptionOptions);
property address
address: string;
property compressor
compressor?: 'none' | 'snappy' | 'zlib' | 'zstd';
property compressors
compressors: ('none' | 'snappy' | 'zlib' | 'zstd')[];
property hello
hello: any;
property loadBalanced
loadBalanced: boolean;
property logicalSessionTimeoutMinutes
logicalSessionTimeoutMinutes?: number;
property maxBsonObjectSize
maxBsonObjectSize: number;
property maxMessageSizeBytes
maxMessageSizeBytes: number;
property maxWireVersion
maxWireVersion?: number;
property maxWriteBatchSize
maxWriteBatchSize: number;
property minWireVersion
minWireVersion?: number;
property serverConnectionId
serverConnectionId: BigInt;
property type
type: ServerType;
property zlibCompressionLevel
zlibCompressionLevel?: number;
method parseServerConnectionID
parseServerConnectionID: ( serverConnectionId: number | Double | bigint | Long) => bigint;
method receiveResponse
receiveResponse: (response: Document | null) => void;
class TopologyClosedEvent
class TopologyClosedEvent {}
Emitted when topology is closed.
Event
Modifiers
@public
property topologyId
topologyId: number;
A unique identifier for the topology
class TopologyDescription
class TopologyDescription {}
Representation of a deployment of servers
Modifiers
@public
constructor
constructor( topologyType: TopologyType, serverDescriptions?: Map<string, ServerDescription>, setName?: string, maxSetVersion?: number, maxElectionId?: any, commonWireVersion?: number, options?: TopologyDescriptionOptions);
Create a TopologyDescription
property commonWireVersion
commonWireVersion: number;
property compatibilityError
compatibilityError?: string;
property compatible
compatible: boolean;
property error
readonly error: MongoError;
property hasDataBearingServers
readonly hasDataBearingServers: boolean;
Determines if this topology description has a data-bearing server available.
property hasKnownServers
readonly hasKnownServers: boolean;
Determines if the topology description has any known servers
property heartbeatFrequencyMS
heartbeatFrequencyMS: number;
property localThresholdMS
localThresholdMS: number;
property logicalSessionTimeoutMinutes
logicalSessionTimeoutMinutes: number;
property maxElectionId
maxElectionId: any;
property maxSetVersion
maxSetVersion: number;
property servers
servers: Map<string, ServerDescription>;
property setName
setName: string;
property stale
stale: boolean;
property type
type: TopologyType;
method toJSON
toJSON: () => Document;
Returns a JSON-serializable representation of the TopologyDescription. This is primarily intended for use with JSON.stringify().
This method will not throw.
class TopologyDescriptionChangedEvent
class TopologyDescriptionChangedEvent {}
Emitted when topology description changes.
Event
Modifiers
@public
property newDescription
newDescription: TopologyDescription;
The new topology description
property previousDescription
previousDescription: TopologyDescription;
The old topology description
property topologyId
topologyId: number;
A unique identifier for the topology
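For illustration, a sketch of observing the SDAM events described above, assuming a MongoClient instance client; the event names are the driver's standard client event names, and the handlers simply log:
client.on('serverHeartbeatSucceeded', event => {
  console.log(`heartbeat to ${event.connectionId} took ${event.duration} ms`);
});
client.on('topologyDescriptionChanged', event => {
  console.log('topology type is now', event.newDescription.type);
});
await client.connect();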
class TopologyOpeningEvent
class TopologyOpeningEvent {}
Emitted when topology is initialized.
Event
Modifiers
@public
property topologyId
topologyId: number;
A unique identifier for the topology
class Transaction
class Transaction {}
A class maintaining state related to a server transaction. Internal Only
Modifiers
@public
property isActive
readonly isActive: boolean;
Returns
Whether this session is presently in a transaction
property isCommitted
readonly isCommitted: boolean;
property isPinned
readonly isPinned: boolean;
property isStarting
readonly isStarting: boolean;
Returns
Whether the transaction has started
property options
options: TransactionOptions;
property recoveryToken
readonly recoveryToken: any;
class TypedEventEmitter
class TypedEventEmitter<Events extends EventsDescription> extends EventEmitter {}
Typescript type safe event emitter
Modifiers
@public
class UnorderedBulkOperation
class UnorderedBulkOperation extends BulkOperationBase {}
Modifiers
@public
method addToOperationsList
addToOperationsList: ( batchType: BatchType, document: Document | UpdateStatement | DeleteStatement) => this;
method handleWriteError
handleWriteError: (writeResult: BulkWriteResult) => void;
class WriteConcern
class WriteConcern {}
A MongoDB WriteConcern, which describes the level of acknowledgement requested from MongoDB for write operations.
See Also
https://www.mongodb.com/docs/manual/reference/write-concern/
Modifiers
@public
constructor
constructor(w?: W, wtimeoutMS?: number, journal?: boolean, fsync?: boolean | 1);
Constructs a WriteConcern from the write concern properties.
Parameter w
request acknowledgment that the write operation has propagated to a specified number of mongod instances or to mongod instances with specified tags.
Parameter wtimeoutMS
specify a time limit to prevent write operations from blocking indefinitely
Parameter journal
request acknowledgment that the write operation has been written to the on-disk journal
Parameter fsync
equivalent to the j option. Is deprecated and will be removed in the next major version.
property fsync
fsync?: boolean | 1;
Equivalent to the j option.
Deprecated
Will be removed in the next major version. Please use journal.
property j
j?: boolean;
Request acknowledgment that the write operation has been written to the on-disk journal.
Deprecated
Will be removed in the next major version. Please use journal.
property journal
readonly journal?: boolean;
Request acknowledgment that the write operation has been written to the on-disk journal
property w
readonly w?: W;
Request acknowledgment that the write operation has propagated to a specified number of mongod instances or to mongod instances with specified tags. If w is 0 and is set on a write operation, the server will not send a response.
property wtimeout
wtimeout?: number;
Specify a time limit to prevent write operations from blocking indefinitely.
Deprecated
Will be removed in the next major version. Please use wtimeoutMS.
property wtimeoutMS
readonly wtimeoutMS?: number;
Specify a time limit to prevent write operations from blocking indefinitely.
method apply
static apply: (command: Document, writeConcern: WriteConcern) => Document;
Apply a write concern to a command document. Will modify and return the command.
method fromOptions
static fromOptions: ( options?: WriteConcernOptions | WriteConcern | W, inherit?: WriteConcernOptions | WriteConcern) => WriteConcern | undefined;
Construct a WriteConcern given an options object.
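A short sketch of constructing and applying a write concern, assuming a connected client; the 'app.users' namespace and the chosen values are placeholders:
const wc = new WriteConcern('majority', 5000, true); // w, wtimeoutMS, journal
const users = client.db('app').collection('users', { writeConcern: wc });
await users.insertOne({ name: 'example' });

// Or derive a write concern from plain options
const fromOpts = WriteConcern.fromOptions({ writeConcern: { w: 1, journal: false } });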
class WriteConcernError
class WriteConcernError {}
An error representing a failure by the server to apply the requested write concern to the bulk operation.
Error
Modifiers
@public
constructor
constructor(error: WriteConcernErrorData);
property code
readonly code: number;
Write concern error code.
property errInfo
readonly errInfo: any;
Write concern error info.
property errmsg
readonly errmsg: string;
Write concern error message.
method toJSON
toJSON: () => WriteConcernErrorData;
method toString
toString: () => string;
class WriteError
class WriteError {}
An error that occurred during a BulkWrite on the server.
Error
Modifiers
@public
constructor
constructor(err: BulkWriteOperationError);
property code
readonly code: number;
WriteError code.
property err
err: BulkWriteOperationError;
property errInfo
readonly errInfo: any;
WriteError details.
property errmsg
readonly errmsg: string;
WriteError message.
property index
readonly index: number;
WriteError original bulk operation index.
method getOperation
getOperation: () => Document;
Returns the underlying operation that caused the error
method toJSON
toJSON: () => { code: number; index: number; errmsg?: string; op: Document };
method toString
toString: () => string;
Interfaces
interface AbstractCursorOptions
interface AbstractCursorOptions extends BSONSerializeOptions {}
Modifiers
@public
property awaitData
awaitData?: boolean;
If awaitData is set to true, when the cursor reaches the end of the capped collection, MongoDB blocks the query thread for a period of time waiting for new data to arrive. When new data is inserted into the capped collection, the blocked thread is signaled to wake up and return the next batch to the client.
property batchSize
batchSize?: number;
Specifies the number of documents to return in each response from MongoDB
property comment
comment?: unknown;
Comment to apply to the operation.
In server versions pre-4.4, 'comment' must be string. A server error will be thrown if any other type is provided.
In server versions 4.4 and above, 'comment' can be any valid BSON type.
property maxAwaitTimeMS
maxAwaitTimeMS?: number;
When applicable, maxAwaitTimeMS controls the amount of time the subsequent getMore operations that a cursor uses to fetch more data should take (e.g. cursor.next()).
property maxTimeMS
maxTimeMS?: number;
When applicable, maxTimeMS controls the amount of time the initial command that constructs a cursor should take (e.g. find, aggregate, listCollections).
property noCursorTimeout
noCursorTimeout?: boolean;
property readConcern
readConcern?: ReadConcernLike;
property readPreference
readPreference?: ReadPreferenceLike;
property session
session?: ClientSession;
property tailable
tailable?: boolean;
By default, MongoDB will automatically close a cursor when the client has exhausted all results in the cursor. However, for [capped collections](https://www.mongodb.com/docs/manual/core/capped-collections) you may use a Tailable Cursor that remains open after the client exhausts the results in the initial cursor.
property timeoutMode
timeoutMode?: CursorTimeoutMode;
Specifies how timeoutMS is applied to the cursor. Can be either 'cursorLifetime' or 'iteration'. When set to 'iteration', the deadline specified by timeoutMS applies to each call of cursor.next(). When set to 'cursorLifetime', the deadline applies to the life of the entire cursor.
Depending on the type of cursor being used, this option has different default values. For non-tailable cursors, this value defaults to 'cursorLifetime'. For tailable cursors, this value defaults to 'iteration', since tailable cursors, by definition, can have an arbitrarily long lifetime.
Example 1
const cursor = collection.find({}, { timeoutMS: 100, timeoutMode: 'iteration' });
for await (const doc of cursor) {
  // process doc
  // This will throw a timeout error if any of the iterator's `next()` calls takes more than 100ms, but
  // will continue to iterate successfully otherwise, regardless of the number of batches.
}
Example 2
const cursor = collection.find({}, { timeoutMS: 1000, timeoutMode: 'cursorLifetime' });
const docs = await cursor.toArray(); // This entire line will throw a timeout error if all batches are not fetched and returned within 1000ms.
Modifiers
@public
@experimental
property timeoutMS
timeoutMS?: number;
Specifies the time an operation will run until it throws a timeout error. See AbstractCursorOptions.timeoutMode for more details on how this option applies to cursors.
interface AggregateOptions
interface AggregateOptions extends Omit<CommandOperationOptions, 'explain'> {}
Modifiers
@public
property allowDiskUse
allowDiskUse?: boolean;
allowDiskUse lets the server know if it can use disk to store temporary results for the aggregation (requires MongoDB 2.6 or greater).
property batchSize
batchSize?: number;
The number of documents to return per batch. See [aggregation documentation](https://www.mongodb.com/docs/manual/reference/command/aggregate).
property bypassDocumentValidation
bypassDocumentValidation?: boolean;
Allow driver to bypass schema validation.
property collation
collation?: CollationOptions;
Specify collation.
property cursor
cursor?: Document;
Return the query as a cursor; on MongoDB 2.6 or greater it returns a real cursor, while on pre-2.6 servers it returns an emulated cursor.
property explain
explain?: ExplainOptions['explain'];
Specifies the verbosity mode for the explain output.
Deprecated
This API is deprecated in favor of collection.aggregate().explain() or db.aggregate().explain().
property hint
hint?: Hint;
Add an index selection hint to an aggregation command
property let
let?: Document;
Map of parameter names and values that can be accessed using $$var (requires MongoDB 5.0).
property maxAwaitTimeMS
maxAwaitTimeMS?: number;
The maximum amount of time for the server to wait on new documents to satisfy a tailable cursor query.
property maxTimeMS
maxTimeMS?: number;
Specifies a cumulative time limit in milliseconds for processing operations on the cursor. MongoDB interrupts the operation at the earliest following interrupt point.
property out
out?: string;
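A sketch of passing these options to an aggregation, assuming a connected client; the 'app.orders' namespace and the pipeline are placeholders:
const cursor = client.db('app').collection('orders').aggregate(
  [
    { $match: { status: 'shipped' } },
    { $group: { _id: '$customerId', total: { $sum: '$amount' } } }
  ],
  { allowDiskUse: true, batchSize: 100, maxTimeMS: 5000 }
);
const results = await cursor.toArray();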
interface AggregationCursorOptions
interface AggregationCursorOptions extends AbstractCursorOptions, AggregateOptions {}
Modifiers
@public
interface AsyncDisposable
interface AsyncDisposable_2 {}
Modifiers
@public
interface Auth
interface Auth {}
Modifiers
@public
interface AuthMechanismProperties
interface AuthMechanismProperties extends Document {}
Modifiers
@public
property ALLOWED_HOSTS
ALLOWED_HOSTS?: string[];
Allowed hosts that OIDC auth can connect to.
property AWS_SESSION_TOKEN
AWS_SESSION_TOKEN?: string;
property CANONICALIZE_HOST_NAME
CANONICALIZE_HOST_NAME?: GSSAPICanonicalizationValue;
property ENVIRONMENT
ENVIRONMENT?: 'test' | 'azure' | 'gcp' | 'k8s';
The OIDC environment. Note that 'test' is for internal use only.
property OIDC_CALLBACK
OIDC_CALLBACK?: OIDCCallbackFunction;
A user provided OIDC machine callback function.
property OIDC_HUMAN_CALLBACK
OIDC_HUMAN_CALLBACK?: OIDCCallbackFunction;
A user provided OIDC human interacted callback function.
property SERVICE_HOST
SERVICE_HOST?: string;
property SERVICE_NAME
SERVICE_NAME?: string;
property SERVICE_REALM
SERVICE_REALM?: string;
property TOKEN_RESOURCE
TOKEN_RESOURCE?: string;
The resource token for OIDC auth in Azure and GCP.
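A sketch of supplying OIDC auth mechanism properties through MongoClientOptions; the host, the ALLOWED_HOSTS pattern, and the token returned by the callback are placeholders — a real OIDC_CALLBACK must return an access token obtained from your identity provider:
const client = new MongoClient('mongodb://server.example.com', {
  authMechanism: 'MONGODB-OIDC',
  authMechanismProperties: {
    OIDC_CALLBACK: async () => {
      // fetch a token from your identity provider (placeholder value shown)
      return { accessToken: '<access token from your IdP>' };
    },
    ALLOWED_HOSTS: ['*.example.com']
  }
});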
interface AutoEncryptionOptions
interface AutoEncryptionOptions {}
Modifiers
@public
property bypassAutoEncryption
bypassAutoEncryption?: boolean;
Allows the user to bypass auto encryption, maintaining implicit decryption
property bypassQueryAnalysis
bypassQueryAnalysis?: boolean;
Allows users to bypass query analysis
property encryptedFieldsMap
encryptedFieldsMap?: Document;
Supply a schema for the encrypted fields in the document
property extraOptions
extraOptions?: { /** * A local process the driver communicates with to determine how to encrypt values in a command. * Defaults to "mongodb://%2Fvar%2Fmongocryptd.sock" if domain sockets are available or "mongodb://localhost:27020" otherwise */ mongocryptdURI?: string; /** If true, autoEncryption will not attempt to spawn a mongocryptd before connecting */ mongocryptdBypassSpawn?: boolean; /** The path to the mongocryptd executable on the system */ mongocryptdSpawnPath?: string; /** Command line arguments to use when auto-spawning a mongocryptd */ mongocryptdSpawnArgs?: string[]; /** * Full path to a MongoDB Crypt shared library to be used (instead of mongocryptd). * * This needs to be the path to the file itself, not a directory. * It can be an absolute or relative path. If the path is relative and * its first component is `$ORIGIN`, it will be replaced by the directory * containing the mongodb-client-encryption native addon file. Otherwise, * the path will be interpreted relative to the current working directory. * * Currently, loading different MongoDB Crypt shared library files from different * MongoClients in the same process is not supported. * * If this option is provided and no MongoDB Crypt shared library could be loaded * from the specified location, creating the MongoClient will fail. * * If this option is not provided and `cryptSharedLibRequired` is not specified, * the AutoEncrypter will attempt to spawn and/or use mongocryptd according * to the mongocryptd-specific `extraOptions` options. * * Specifying a path prevents mongocryptd from being used as a fallback. * * Requires the MongoDB Crypt shared library, available in MongoDB 6.0 or higher. */ cryptSharedLibPath?: string; /** * If specified, never use mongocryptd and instead fail when the MongoDB Crypt * shared library could not be loaded. * * This is always true when `cryptSharedLibPath` is specified. * * Requires the MongoDB Crypt shared library, available in MongoDB 6.0 or higher. */ cryptSharedLibRequired?: boolean; /* Excluded from this release type: cryptSharedLibSearchPaths */};
property keyVaultClient
keyVaultClient?: MongoClient;
A MongoClient used to fetch keys from a key vault
property keyVaultNamespace
keyVaultNamespace?: string;
The namespace where keys are stored in the key vault
property kmsProviders
kmsProviders?: KMSProviders;
Configuration options that are used by specific KMS providers during key generation, encryption, and decryption.
property options
options?: { /** An optional hook to catch logging messages from the underlying encryption engine */ logger?: (level: AutoEncryptionLoggerLevel, message: string) => void;};
property proxyOptions
proxyOptions?: ProxyOptions;
property schemaMap
schemaMap?: Document;
A map of namespaces to a local JSON schema for encryption
**NOTE**: Supplying options.schemaMap provides more security than relying on JSON Schemas obtained from the server. It protects against a malicious server advertising a false JSON Schema, which could trick the client into sending decrypted data that should be encrypted. Schemas supplied in the schemaMap only apply to configuring automatic encryption for Client-Side Field Level Encryption. Other validation rules in the JSON schema will not be enforced by the driver and will result in an error.
property tlsOptions
tlsOptions?: CSFLEKMSTlsOptions;
The TLS options to use connecting to the KMS provider
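A sketch of enabling automatic encryption with a local KMS provider (requires the optional mongodb-client-encryption package); the key vault namespace and the randomly generated 96-byte local master key are placeholders:
import { randomBytes } from 'node:crypto';

const client = new MongoClient('mongodb://localhost:27017', {
  autoEncryption: {
    keyVaultNamespace: 'encryption.__keyVault',
    kmsProviders: { local: { key: randomBytes(96) } }
  }
});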
interface AWSEncryptionKeyOptions
interface AWSEncryptionKeyOptions {}
Configuration options for making an AWS encryption key
Modifiers
@public
interface AWSKMSProviderConfiguration
interface AWSKMSProviderConfiguration {}
Modifiers
@public
property accessKeyId
accessKeyId: string;
The access key used for the AWS KMS provider
property secretAccessKey
secretAccessKey: string;
The secret access key used for the AWS KMS provider
property sessionToken
sessionToken?: string;
An optional AWS session token that will be used as the X-Amz-Security-Token header for AWS requests.
interface AzureEncryptionKeyOptions
interface AzureEncryptionKeyOptions {}
Configuration options for making an Azure encryption key
Modifiers
@public
property keyName
keyName: string;
Key name
property keyVaultEndpoint
keyVaultEndpoint: string;
Key vault URL, typically <name>.vault.azure.net
property keyVersion
keyVersion?: string | undefined;
Key version
interface BSONSerializeOptions
interface BSONSerializeOptions extends Omit<SerializeOptions, 'index'>, Omit< DeserializeOptions, | 'evalFunctions' | 'cacheFunctions' | 'cacheFunctionsCrc32' | 'allowObjectSmallerThanBufferSize' | 'index' | 'validation' > {}
BSON Serialization options.
Modifiers
@public
property enableUtf8Validation
enableUtf8Validation?: boolean;
Enable utf8 validation when deserializing BSON documents. Defaults to true.
property raw
raw?: boolean;
Enabling the raw option will return a [Node.js Buffer](https://nodejs.org/api/buffer.html) which is allocated using [allocUnsafe API](https://nodejs.org/api/buffer.html#static-method-bufferallocunsafesize). See this section from the [Node.js Docs here](https://nodejs.org/api/buffer.html#what-makes-bufferallocunsafe-and-bufferallocunsafeslow-unsafe) for more detail about what "unsafe" refers to in this context. If you need to maintain your own editable clone of the bytes returned for an extended life time of the process, it is recommended you allocate your own buffer and clone the contents:
Remarks
Please note there is a known limitation where this option cannot be used at the MongoClient level (see [NODE-3946](https://jira.mongodb.org/browse/NODE-3946)). It does correctly work at the Db, Collection, and per-operation level, the same as other BSON options.
Example 1
const raw = await collection.findOne({}, { raw: true });
const myBuffer = Buffer.alloc(raw.byteLength);
myBuffer.set(raw, 0);
// Only save and use `myBuffer` beyond this point
interface BulkWriteOperationError
interface BulkWriteOperationError {}
Modifiers
@public
interface BulkWriteOptions
interface BulkWriteOptions extends CommandOperationOptions {}
Modifiers
@public
property bypassDocumentValidation
bypassDocumentValidation?: boolean;
Allow driver to bypass schema validation.
property forceServerObjectId
forceServerObjectId?: boolean;
Force server to assign _id values instead of driver.
property let
let?: Document;
Map of parameter names and values that can be accessed using $$var (requires MongoDB 5.0).
property ordered
ordered?: boolean;
If true, when an insert fails, don't execute the remaining writes. If false, continue with remaining inserts when one fails.
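A minimal sketch (URI, database, and collection names are placeholders) showing how these options are typically passed to Collection.bulkWrite(); ordered: false lets the remaining writes run even if one fails.
import { MongoClient } from 'mongodb';

interface Item { sku: string; qty: number }

const client = new MongoClient('mongodb://localhost:27017');
const items = client.db('shop').collection<Item>('items');

const result = await items.bulkWrite(
  [
    { insertOne: { document: { sku: 'abc', qty: 5 } } },
    { updateOne: { filter: { sku: 'abc' }, update: { $inc: { qty: 1 } } } },
    { deleteMany: { filter: { qty: 0 } } },
  ],
  { ordered: false, bypassDocumentValidation: false }
);
console.log(result.insertedCount, result.modifiedCount, result.deletedCount);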
interface ChangeStreamCollModDocument
interface ChangeStreamCollModDocument extends ChangeStreamDocumentCommon, ChangeStreamDocumentCollectionUUID {}
Only present when the showExpandedEvents flag is enabled.
See Also
https://www.mongodb.com/docs/manual/reference/change-events/modify/#mongodb-data-modify
Modifiers
@public
property operationType
operationType: 'modify';
Describes the type of operation represented in this change notification
interface ChangeStreamCreateDocument
interface ChangeStreamCreateDocument extends ChangeStreamDocumentCommon, ChangeStreamDocumentCollectionUUID {}
See Also
https://www.mongodb.com/docs/manual/reference/change-events/create/#mongodb-data-create
Modifiers
@public
property operationType
operationType: 'create';
Describes the type of operation represented in this change notification
interface ChangeStreamCreateIndexDocument
interface ChangeStreamCreateIndexDocument extends ChangeStreamDocumentCommon, ChangeStreamDocumentCollectionUUID, ChangeStreamDocumentOperationDescription {}
Only present when the showExpandedEvents flag is enabled.
See Also
https://www.mongodb.com/docs/manual/reference/change-events/createIndexes/#mongodb-data-createIndexes
Modifiers
@public
property operationType
operationType: 'createIndexes';
Describes the type of operation represented in this change notification
interface ChangeStreamDeleteDocument
interface ChangeStreamDeleteDocument<TSchema extends Document = Document> extends ChangeStreamDocumentCommon, ChangeStreamDocumentKey<TSchema>, ChangeStreamDocumentCollectionUUID {}
See Also
https://www.mongodb.com/docs/manual/reference/change-events/#delete-event
Modifiers
@public
property fullDocumentBeforeChange
fullDocumentBeforeChange?: TSchema;
Contains the pre-image of the modified or deleted document if the pre-image is available for the change event and either 'required' or 'whenAvailable' was specified for the 'fullDocumentBeforeChange' option when creating the change stream. If 'whenAvailable' was specified but the pre-image is unavailable, this will be explicitly set to null.
property ns
ns: ChangeStreamNameSpace;
Namespace the delete event occurred on
property operationType
operationType: 'delete';
Describes the type of operation represented in this change notification
interface ChangeStreamDocumentCollectionUUID
interface ChangeStreamDocumentCollectionUUID {}
Modifiers
@public
property collectionUUID
collectionUUID: Binary;
The UUID (Binary subtype 4) of the collection that the operation was performed on.
Only present when the showExpandedEvents flag is enabled.
**NOTE:** collectionUUID will be converted to a NodeJS Buffer if the promoteBuffers flag is enabled.
6.1.0
interface ChangeStreamDocumentCommon
interface ChangeStreamDocumentCommon {}
Modifiers
@public
property clusterTime
clusterTime?: Timestamp;
The timestamp from the oplog entry associated with the event. For events that happened as part of a multi-document transaction, the associated change stream notifications will have the same clusterTime value, namely the time when the transaction was committed. On a sharded cluster, events that occur on different shards can have the same clusterTime but be associated with different transactions or even not be associated with any transaction. To identify events for a single transaction, you can use the combination of lsid and txnNumber in the change stream event document.
property lsid
lsid?: ServerSessionId;
The identifier for the session associated with the transaction. Only present if the operation is part of a multi-document transaction.
property splitEvent
splitEvent?: ChangeStreamSplitEvent;
When the change stream's backing aggregation pipeline contains the $changeStreamSplitLargeEvent stage, events larger than 16MB are split into multiple fragments; this field describes which fragment the current event is.
property txnNumber
txnNumber?: number;
The transaction number. Only present if the operation is part of a multi-document transaction.
**NOTE:** txnNumber can be a Long if promoteLongs is set to false
interface ChangeStreamDocumentKey
interface ChangeStreamDocumentKey<TSchema extends Document = Document> {}
Modifiers
@public
property documentKey
documentKey: { _id: InferIdType<TSchema>; [shardKey: string]: any;};
For unsharded collections this contains a single field, _id. For sharded collections, this will contain all the components of the shard key.
interface ChangeStreamDocumentOperationDescription
interface ChangeStreamDocumentOperationDescription {}
Modifiers
@public
property operationDescription
operationDescription?: Document;
A description of the operation.
Only present when the showExpandedEvents flag is enabled.
6.1.0
interface ChangeStreamDropDatabaseDocument
interface ChangeStreamDropDatabaseDocument extends ChangeStreamDocumentCommon {}
See Also
https://www.mongodb.com/docs/manual/reference/change-events/#dropdatabase-event
Modifiers
@public
property ns
ns: { db: string;};
The database dropped
property operationType
operationType: 'dropDatabase';
Describes the type of operation represented in this change notification
interface ChangeStreamDropDocument
interface ChangeStreamDropDocument extends ChangeStreamDocumentCommon, ChangeStreamDocumentCollectionUUID {}
See Also
https://www.mongodb.com/docs/manual/reference/change-events/#drop-event
Modifiers
@public
property ns
ns: ChangeStreamNameSpace;
Namespace the drop event occurred on
property operationType
operationType: 'drop';
Describes the type of operation represented in this change notification
interface ChangeStreamDropIndexDocument
interface ChangeStreamDropIndexDocument extends ChangeStreamDocumentCommon, ChangeStreamDocumentCollectionUUID, ChangeStreamDocumentOperationDescription {}
Only present when the showExpandedEvents flag is enabled.
See Also
https://www.mongodb.com/docs/manual/reference/change-events/dropIndexes/#mongodb-data-dropIndexes
Modifiers
@public
property operationType
operationType: 'dropIndexes';
Describes the type of operation represented in this change notification
interface ChangeStreamInsertDocument
interface ChangeStreamInsertDocument<TSchema extends Document = Document> extends ChangeStreamDocumentCommon, ChangeStreamDocumentKey<TSchema>, ChangeStreamDocumentCollectionUUID {}
See Also
https://www.mongodb.com/docs/manual/reference/change-events/#insert-event
Modifiers
@public
property fullDocument
fullDocument: TSchema;
This key will contain the document being inserted
property ns
ns: ChangeStreamNameSpace;
Namespace the insert event occurred on
property operationType
operationType: 'insert';
Describes the type of operation represented in this change notification
interface ChangeStreamInvalidateDocument
interface ChangeStreamInvalidateDocument extends ChangeStreamDocumentCommon {}
See Also
https://www.mongodb.com/docs/manual/reference/change-events/#invalidate-event
Modifiers
@public
property operationType
operationType: 'invalidate';
Describes the type of operation represented in this change notification
interface ChangeStreamNameSpace
interface ChangeStreamNameSpace {}
Modifiers
@public
interface ChangeStreamOptions
interface ChangeStreamOptions extends Omit<AggregateOptions, 'writeConcern'> {}
Options that can be passed to a ChangeStream. Note that startAfter, resumeAfter, and startAtOperationTime are all mutually exclusive, and the server will error if more than one is specified.
Modifiers
@public
property batchSize
batchSize?: number;
The number of documents to return per batch.
See Also
https://www.mongodb.com/docs/manual/reference/command/aggregate
property fullDocument
fullDocument?: string;
Allowed values: 'updateLookup', 'whenAvailable', 'required'.
When set to 'updateLookup', the change notification for partial updates will include both a delta describing the changes to the document as well as a copy of the entire document that was changed from some time after the change occurred.
When set to 'whenAvailable', configures the change stream to return the post-image of the modified document for replace and update change events if the post-image for this event is available.
When set to 'required', the same behavior as 'whenAvailable' except that an error is raised if the post-image is not available.
property fullDocumentBeforeChange
fullDocumentBeforeChange?: string;
Allowed values: 'whenAvailable', 'required', 'off'.
The default is to not send a value, which is equivalent to 'off'.
When set to 'whenAvailable', configures the change stream to return the pre-image of the modified document for replace, update, and delete change events if it is available.
When set to 'required', the same behavior as 'whenAvailable' except that an error is raised if the pre-image is not available.
property maxAwaitTimeMS
maxAwaitTimeMS?: number;
The maximum amount of time for the server to wait on new documents to satisfy a change stream query.
property resumeAfter
resumeAfter?: ResumeToken;
Allows you to start a changeStream after a specified event.
See Also
https://www.mongodb.com/docs/manual/changeStreams/#resumeafter-for-change-streams
property showExpandedEvents
showExpandedEvents?: boolean;
When enabled, configures the change stream to include extra change events.
- createIndexes
- dropIndexes
- modify
- create
- shardCollection
- reshardCollection
- refineCollectionShardKey
property startAfter
startAfter?: ResumeToken;
Similar to resumeAfter, but will allow you to start after an invalidated event.
See Also
https://www.mongodb.com/docs/manual/changeStreams/#startafter-for-change-streams
property startAtOperationTime
startAtOperationTime?: OperationTime;
Will start the changeStream after the specified operationTime.
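A sketch of how these options might be passed to Collection.watch(); the collection and option values are placeholders, and note that startAfter, resumeAfter, and startAtOperationTime are mutually exclusive.
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const orders = client.db('shop').collection('orders');

const changeStream = orders.watch([], {
  fullDocument: 'updateLookup',              // post-image for partial updates
  fullDocumentBeforeChange: 'whenAvailable', // pre-image when the collection records it
  showExpandedEvents: true,                  // surface createIndexes, modify, create, ...
  maxAwaitTimeMS: 5_000,
  batchSize: 100,
});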
interface ChangeStreamRefineCollectionShardKeyDocument
interface ChangeStreamRefineCollectionShardKeyDocument extends ChangeStreamDocumentCommon, ChangeStreamDocumentCollectionUUID, ChangeStreamDocumentOperationDescription {}
See Also
https://www.mongodb.com/docs/manual/reference/change-events/refineCollectionShardKey/#mongodb-data-refineCollectionShardKey
Modifiers
@public
property operationType
operationType: 'refineCollectionShardKey';
Describes the type of operation represented in this change notification
interface ChangeStreamRenameDocument
interface ChangeStreamRenameDocument extends ChangeStreamDocumentCommon, ChangeStreamDocumentCollectionUUID {}
See Also
https://www.mongodb.com/docs/manual/reference/change-events/#rename-event
Modifiers
@public
property ns
ns: ChangeStreamNameSpace;
The "from" namespace that the rename occurred on
property operationType
operationType: 'rename';
Describes the type of operation represented in this change notification
property to
to: { db: string; coll: string;};
The new name for the ns.coll collection
interface ChangeStreamReplaceDocument
interface ChangeStreamReplaceDocument<TSchema extends Document = Document> extends ChangeStreamDocumentCommon, ChangeStreamDocumentKey<TSchema> {}
See Also
https://www.mongodb.com/docs/manual/reference/change-events/#replace-event
Modifiers
@public
property fullDocument
fullDocument: TSchema;
The fullDocument of a replace event represents the document after the insert of the replacement document
property fullDocumentBeforeChange
fullDocumentBeforeChange?: TSchema;
Contains the pre-image of the modified or deleted document if the pre-image is available for the change event and either 'required' or 'whenAvailable' was specified for the 'fullDocumentBeforeChange' option when creating the change stream. If 'whenAvailable' was specified but the pre-image is unavailable, this will be explicitly set to null.
property ns
ns: ChangeStreamNameSpace;
Namespace the replace event occurred on
property operationType
operationType: 'replace';
Describes the type of operation represented in this change notification
interface ChangeStreamReshardCollectionDocument
interface ChangeStreamReshardCollectionDocument extends ChangeStreamDocumentCommon, ChangeStreamDocumentCollectionUUID, ChangeStreamDocumentOperationDescription {}
See Also
https://www.mongodb.com/docs/manual/reference/change-events/reshardCollection/#mongodb-data-reshardCollection
Modifiers
@public
property operationType
operationType: 'reshardCollection';
Describes the type of operation represented in this change notification
interface ChangeStreamShardCollectionDocument
interface ChangeStreamShardCollectionDocument extends ChangeStreamDocumentCommon, ChangeStreamDocumentCollectionUUID, ChangeStreamDocumentOperationDescription {}
See Also
https://www.mongodb.com/docs/manual/reference/change-events/shardCollection/#mongodb-data-shardCollection
Modifiers
@public
property operationType
operationType: 'shardCollection';
Describes the type of operation represented in this change notification
interface ChangeStreamSplitEvent
interface ChangeStreamSplitEvent {}
Modifiers
@public
interface ChangeStreamUpdateDocument
interface ChangeStreamUpdateDocument<TSchema extends Document = Document> extends ChangeStreamDocumentCommon, ChangeStreamDocumentKey<TSchema>, ChangeStreamDocumentCollectionUUID {}
See Also
https://www.mongodb.com/docs/manual/reference/change-events/#update-event
Modifiers
@public
property fullDocument
fullDocument?: TSchema;
This is only set if fullDocument is set to 'updateLookup'.
Contains the point-in-time post-image of the modified document if the post-image is available and either 'required' or 'whenAvailable' was specified for the 'fullDocument' option when creating the change stream.
property fullDocumentBeforeChange
fullDocumentBeforeChange?: TSchema;
Contains the pre-image of the modified or deleted document if the pre-image is available for the change event and either 'required' or 'whenAvailable' was specified for the 'fullDocumentBeforeChange' option when creating the change stream. If 'whenAvailable' was specified but the pre-image is unavailable, this will be explicitly set to null.
property ns
ns: ChangeStreamNameSpace;
Namespace the update event occurred on
property operationType
operationType: 'update';
Describes the type of operation represented in this change notification
property updateDescription
updateDescription: UpdateDescription<TSchema>;
Contains a description of updated and removed fields in this operation
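The ChangeStream* document interfaces above form a discriminated union on operationType, so a switch narrows each event. A sketch (placeholder collection; assumes an ES module so top-level await and for await are available):
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const orders = client.db('shop').collection<{ _id: string; total: number }>('orders');

for await (const change of orders.watch()) {
  switch (change.operationType) {
    case 'insert':
      console.log('inserted', change.fullDocument);                       // always present on inserts
      break;
    case 'update':
      console.log('updated fields', change.updateDescription.updatedFields);
      break;
    case 'delete':
      console.log('deleted _id', change.documentKey._id);
      break;
    case 'rename':
      console.log('renamed to', change.to);
      break;
  }
}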
interface ClientBulkWriteError
interface ClientBulkWriteError {}
Modifiers
@public
interface ClientBulkWriteOptions
interface ClientBulkWriteOptions extends CommandOperationOptions {}
Modifiers
@public
property bypassDocumentValidation
bypassDocumentValidation?: boolean;
Allow driver to bypass schema validation.
property let
let?: Document;
Map of parameter names and values that can be accessed using $$var (requires MongoDB 5.0).
property ordered
ordered?: boolean;
If true, when an insert fails, don't execute the remaining writes. If false, continue with remaining inserts when one fails.
property verboseResults
verboseResults?: boolean;
Whether detailed results for each successful operation should be included in the returned BulkWriteResult.
interface ClientBulkWriteResult
interface ClientBulkWriteResult {}
Modifiers
@public
property acknowledged
readonly acknowledged: boolean;
Whether the bulk write was acknowledged.
property deletedCount
readonly deletedCount: number;
The total number of documents deleted across all delete operations.
property deleteResults
readonly deleteResults?: ReadonlyMap<number, ClientDeleteResult>;
The results of each individual delete operation that was successfully performed.
property insertedCount
readonly insertedCount: number;
The total number of documents inserted across all insert operations.
property insertResults
readonly insertResults?: ReadonlyMap<number, ClientInsertOneResult>;
The results of each individual insert operation that was successfully performed.
property matchedCount
readonly matchedCount: number;
The total number of documents matched across all update operations.
property modifiedCount
readonly modifiedCount: number;
The total number of documents modified across all update operations.
property updateResults
readonly updateResults?: ReadonlyMap<number, ClientUpdateResult>;
The results of each individual update operation that was successfully performed.
property upsertedCount
readonly upsertedCount: number;
The total number of documents upserted across all update operations.
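A sketch of MongoClient.bulkWrite(), which accepts these ClientBulkWrite* models and options and resolves with a ClientBulkWriteResult; the URI and namespaces are placeholders, and the underlying bulkWrite command requires a server release that supports it.
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');

const result = await client.bulkWrite(
  [
    { namespace: 'shop.items', name: 'insertOne', document: { sku: 'abc', qty: 5 } },
    { namespace: 'shop.orders', name: 'updateOne', filter: { status: 'pending' }, update: { $set: { paid: true } } },
    { namespace: 'shop.carts', name: 'deleteMany', filter: { abandoned: true } },
  ],
  { ordered: false, verboseResults: true }
);

console.log(result.insertedCount, result.modifiedCount, result.deletedCount);
// Per-operation maps are only populated when verboseResults is true.
console.log(result.insertResults?.get(0)?.insertedId);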
interface ClientDeleteManyModel
interface ClientDeleteManyModel<TSchema> extends ClientWriteModel {}
Modifiers
@public
property collation
collation?: CollationOptions;
Specifies a collation.
property filter
filter: Filter<TSchema>;
The filter used to determine if a document should be deleted. For a deleteMany operation, all matches are removed.
property hint
hint?: Hint;
The index to use. If specified, then the query system will only consider plans using the hinted index.
property name
name: 'deleteMany';
interface ClientDeleteOneModel
interface ClientDeleteOneModel<TSchema> extends ClientWriteModel {}
Modifiers
@public
property collation
collation?: CollationOptions;
Specifies a collation.
property filter
filter: Filter<TSchema>;
The filter used to determine if a document should be deleted. For a deleteOne operation, the first match is removed.
property hint
hint?: Hint;
The index to use. If specified, then the query system will only consider plans using the hinted index.
property name
name: 'deleteOne';
interface ClientDeleteResult
interface ClientDeleteResult {}
Modifiers
@public
property deletedCount
deletedCount: number;
The number of documents that were deleted.
interface ClientEncryptionCreateDataKeyProviderOptions
interface ClientEncryptionCreateDataKeyProviderOptions {}
Options to provide when creating a new data key.
Modifiers
@public
property keyAltNames
keyAltNames?: string[] | undefined;
An optional list of string alternate names used to reference a key. If a key is created with alternate names, then encryption may refer to the key by the unique alternate name instead of by _id.
property keyMaterial
keyMaterial?: Buffer | Binary;
Modifiers
@experimental
property masterKey
masterKey?: | AWSEncryptionKeyOptions | AzureEncryptionKeyOptions | GCPEncryptionKeyOptions | KMIPEncryptionKeyOptions | undefined;
Identifies a new KMS-specific key used to encrypt the new data key
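A sketch of passing these options to ClientEncryption.createDataKey(); the key vault namespace and the 96-byte local key are placeholders.
import { MongoClient, ClientEncryption } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const clientEncryption = new ClientEncryption(client, {
  keyVaultNamespace: 'encryption.__keyVault',
  kmsProviders: { local: { key: Buffer.alloc(96) } }, // demo-only key material
});

// keyAltNames lets later encrypt() calls reference this key by name instead of by _id.
const dataKeyId = await clientEncryption.createDataKey('local', {
  keyAltNames: ['demo-key'],
});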
interface ClientEncryptionEncryptOptions
interface ClientEncryptionEncryptOptions {}
Options to provide when encrypting data.
Modifiers
@public
property algorithm
algorithm: | 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic' | 'AEAD_AES_256_CBC_HMAC_SHA_512-Random' | 'Indexed' | 'Unindexed' | 'Range';
The algorithm to use for encryption.
property contentionFactor
contentionFactor?: bigint | number;
The contention factor.
property keyAltName
keyAltName?: string;
A unique string name corresponding to an already existing dataKey.
property keyId
keyId?: Binary;
The id of the Binary dataKey to use for encryption
property queryType
queryType?: 'equality' | 'range';
The query type.
property rangeOptions
rangeOptions?: RangeOptions;
The index options for a Queryable Encryption field supporting "range" queries.
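Continuing the createDataKey sketch above (clientEncryption, client, and dataKeyId are assumed from it), these options are passed to ClientEncryption.encrypt():
const encryptedSsn = await clientEncryption.encrypt('123-45-6789', {
  keyId: dataKeyId,                                        // or keyAltName: 'demo-key'
  algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic',
});

// Deterministic encryption yields equal ciphertexts for equal inputs,
// which is what makes server-side equality matches on the field possible.
await client.db('hr').collection('employees').insertOne({ ssn: encryptedSsn });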
interface ClientEncryptionOptions
interface ClientEncryptionOptions {}
Additional settings to provide when creating a new
ClientEncryption
instance.Modifiers
@public
property keyVaultClient
keyVaultClient?: MongoClient | undefined;
A MongoClient used to fetch keys from a key vault. Defaults to client.
property keyVaultNamespace
keyVaultNamespace: string;
The namespace of the key vault, used to store encryption keys
property kmsProviders
kmsProviders?: KMSProviders;
Options for specific KMS providers to use
property proxyOptions
proxyOptions?: ProxyOptions;
Options for specifying a Socks5 proxy to use for connecting to the KMS.
property timeoutMS
timeoutMS?: number;
The timeout setting to be used for all the operations on ClientEncryption.
When provided, timeoutMS is used as the timeout for each operation executed on the ClientEncryption object. For example:
const clientEncryption = new ClientEncryption(client, {
  timeoutMS: 1_000,
  kmsProviders: { local: { key: '<KEY>' } }
});

// `1_000` is used as the timeout for the createDataKey call
await clientEncryption.createDataKey('local');
If timeoutMS is configured on the provided client, the client's timeoutMS value will be used unless timeoutMS is also provided as a client encryption option.
const client = new MongoClient('<uri>', { timeoutMS: 2_000 });

// timeoutMS is set to 1_000 on clientEncryption
const clientEncryption = new ClientEncryption(client, {
  timeoutMS: 1_000,
  kmsProviders: { local: { key: '<KEY>' } }
});
Modifiers
@experimental
property tlsOptions
tlsOptions?: CSFLEKMSTlsOptions;
TLS options for kms providers to use.
interface ClientEncryptionRewrapManyDataKeyProviderOptions
interface ClientEncryptionRewrapManyDataKeyProviderOptions {}
Modifiers
@public
@experimental
interface ClientEncryptionRewrapManyDataKeyResult
interface ClientEncryptionRewrapManyDataKeyResult {}
Modifiers
@public
@experimental
property bulkWriteResult
bulkWriteResult?: BulkWriteResult;
The result of rewrapping data keys. If unset, no keys matched the filter.
interface ClientInsertOneModel
interface ClientInsertOneModel<TSchema> extends ClientWriteModel {}
Modifiers
@public
interface ClientInsertOneResult
interface ClientInsertOneResult {}
Modifiers
@public
property insertedId
insertedId: any;
The _id of the inserted document.
interface ClientMetadata
interface ClientMetadata {}
See Also
https://github.com/mongodb/specifications/blob/master/source/mongodb-handshake/handshake.md#hello-command
Modifiers
@public
property application
application?: { name: string;};
property driver
driver: { name: string; version: string;};
property env
env?: { name: 'aws.lambda' | 'gcp.func' | 'azure.func' | 'vercel'; timeout_sec?: Int32; memory_mb?: Int32; region?: string; url?: string;};
FaaS environment information
property os
os: { type: string; name?: NodeJS.Platform; architecture?: string; version?: string;};
property platform
platform: string;
interface ClientMetadataOptions
interface ClientMetadataOptions {}
Modifiers
@public
property appName
appName?: string;
property driverInfo
driverInfo?: { name?: string; version?: string; platform?: string;};
interface ClientReplaceOneModel
interface ClientReplaceOneModel<TSchema> extends ClientWriteModel {}
Modifiers
@public
property collation
collation?: CollationOptions;
Specifies a collation.
property filter
filter: Filter<TSchema>;
The filter used to determine if a document should be replaced. For a replaceOne operation, the first match is replaced.
property hint
hint?: Hint;
The index to use. If specified, then the query system will only consider plans using the hinted index.
property name
name: 'replaceOne';
property replacement
replacement: WithoutId<TSchema>;
The document with which to replace the matched document.
property upsert
upsert?: boolean;
When true, creates a new document if no document matches the query.
interface ClientSessionOptions
interface ClientSessionOptions {}
Modifiers
@public
property causalConsistency
causalConsistency?: boolean;
Whether causal consistency should be enabled on this session
property defaultTimeoutMS
defaultTimeoutMS?: number;
An overriding timeoutMS value to use for a client-side timeout. If not provided the session uses the timeoutMS specified on the MongoClient.
Modifiers
@public
@experimental
property defaultTransactionOptions
defaultTransactionOptions?: TransactionOptions;
The default TransactionOptions to use for transactions started on this session.
property snapshot
snapshot?: boolean;
Whether all read operations should be read from the same snapshot for this session (NOTE: not compatible with causalConsistency=true)
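A sketch of these options in use with MongoClient.startSession() and ClientSession.withTransaction(); the URI, database, and account ids are placeholders.
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const accounts = client.db('bank').collection<{ _id: string; balance: number }>('accounts');

const session = client.startSession({
  causalConsistency: true,
  defaultTransactionOptions: {
    readConcern: { level: 'majority' },
    writeConcern: { w: 'majority' },
  },
});

try {
  await session.withTransaction(async () => {
    await accounts.updateOne({ _id: 'a' }, { $inc: { balance: -100 } }, { session });
    await accounts.updateOne({ _id: 'b' }, { $inc: { balance: 100 } }, { session });
  });
} finally {
  await session.endSession();
}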
interface ClientUpdateManyModel
interface ClientUpdateManyModel<TSchema> extends ClientWriteModel {}
Modifiers
@public
property arrayFilters
arrayFilters?: Document[];
A set of filters specifying to which array elements an update should apply.
property collation
collation?: CollationOptions;
Specifies a collation.
property filter
filter: Filter<TSchema>;
The filter used to determine if a document should be updated. For an updateMany operation, all matches are updated.
property hint
hint?: Hint;
The index to use. If specified, then the query system will only consider plans using the hinted index.
property name
name: 'updateMany';
property update
update: UpdateFilter<TSchema> | Document[];
The modifications to apply. The value can be either an UpdateFilter (a document that contains update operator expressions) or a Document[] (an aggregation pipeline).
property upsert
upsert?: boolean;
When true, creates a new document if no document matches the query.
interface ClientUpdateOneModel
interface ClientUpdateOneModel<TSchema> extends ClientWriteModel {}
Modifiers
@public
property arrayFilters
arrayFilters?: Document[];
A set of filters specifying to which array elements an update should apply.
property collation
collation?: CollationOptions;
Specifies a collation.
property filter
filter: Filter<TSchema>;
The filter used to determine if a document should be updated. For an updateOne operation, the first match is updated.
property hint
hint?: Hint;
The index to use. If specified, then the query system will only consider plans using the hinted index.
property name
name: 'updateOne';
property update
update: UpdateFilter<TSchema> | Document[];
The modifications to apply. The value can be either an UpdateFilter (a document that contains update operator expressions) or a Document[] (an aggregation pipeline).
property upsert
upsert?: boolean;
When true, creates a new document if no document matches the query.
interface ClientUpdateResult
interface ClientUpdateResult {}
Modifiers
@public
property didUpsert
didUpsert: boolean;
Determines if the upsert did include an _id, which includes the case of the _id being null.
property matchedCount
matchedCount: number;
The number of documents that matched the filter.
property modifiedCount
modifiedCount: number;
The number of documents that were modified.
property upsertedId
upsertedId?: any;
The _id field of the upserted document if an upsert occurred.
It MUST be possible to discern between a BSON Null upserted ID value and this field being unset. If necessary, drivers MAY add a didUpsert boolean field to differentiate between these two cases.
interface ClientWriteModel
interface ClientWriteModel {}
Modifiers
@public
property namespace
namespace: string;
The namespace for the write.
A namespace is a combination of the database name and the name of the collection: <database-name>.<collection>. All documents belong to a namespace.
See Also
https://www.mongodb.com/docs/manual/reference/limits/#std-label-faq-dev-namespace
interface CloseOptions
interface CloseOptions {}
Modifiers
@public
Deprecated
This interface is deprecated and will be removed in a future release as it is not used in the driver
property force
force?: boolean;
interface ClusteredCollectionOptions
interface ClusteredCollectionOptions extends Document {}
Configuration options for clustered collections
See Also
https://www.mongodb.com/docs/manual/core/clustered-collections/
Modifiers
@public
interface ClusterTime
interface ClusterTime {}
Gossiped in component for the cluster time tracking the state of user databases across the cluster. It may optionally include a signature identifying the process that generated such a value.
Modifiers
@public
property clusterTime
clusterTime: Timestamp;
property signature
signature?: { hash: Binary; keyId: Long;};
Used to validate the identity of a request or response's ClusterTime.
interface CollationOptions
interface CollationOptions {}
Modifiers
@public
property alternate
alternate?: string;
property backwards
backwards?: boolean;
property caseFirst
caseFirst?: string;
property caseLevel
caseLevel?: boolean;
property locale
locale: string;
property maxVariable
maxVariable?: string;
property normalization
normalization?: boolean;
property numericOrdering
numericOrdering?: boolean;
property strength
strength?: number;
interface CollectionInfo
interface CollectionInfo extends Document {}
Modifiers
@public
interface CollectionOptions
interface CollectionOptions extends BSONSerializeOptions, WriteConcernOptions {}
Modifiers
@public
property readConcern
readConcern?: ReadConcernLike;
Specify a read concern for the collection. (only MongoDB 3.2 or higher supported)
property readPreference
readPreference?: ReadPreferenceLike;
The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
property timeoutMS
timeoutMS?: number;
Specifies the time an operation will run until it throws a timeout error
Modifiers
@experimental
interface CommandOperationOptions
interface CommandOperationOptions extends OperationOptions, WriteConcernOptions, ExplainOptions {}
Modifiers
@public
property authdb
authdb?: string;
property collation
collation?: CollationOptions;
Collation
property comment
comment?: unknown;
Comment to apply to the operation.
In server versions pre-4.4, 'comment' must be string. A server error will be thrown if any other type is provided.
In server versions 4.4 and above, 'comment' can be any valid BSON type.
property dbName
dbName?: string;
property maxTimeMS
maxTimeMS?: number;
maxTimeMS is a server-side time limit in milliseconds for processing an operation.
property noResponse
noResponse?: boolean;
property readConcern
readConcern?: ReadConcernLike;
Specify a read concern and level for the collection. (only MongoDB 3.2 or higher supported)
property retryWrites
retryWrites?: boolean;
Should retry failed writes
interface ConnectionOptions
interface ConnectionOptions extends SupportedNodeConnectionOptions, StreamDescriptionOptions, ProxyOptions {}
Modifiers
@public
property cancellationToken
cancellationToken?: CancellationToken;
property connectTimeoutMS
connectTimeoutMS?: number;
property credentials
credentials?: MongoCredentials;
property generation
generation: number;
property hostAddress
hostAddress: HostAddress;
property id
id: number | '<monitor>';
property metadata
metadata: ClientMetadata;
property monitorCommands
monitorCommands: boolean;
property noDelay
noDelay?: boolean;
property serverApi
serverApi?: ServerApi;
property socketTimeoutMS
socketTimeoutMS?: number;
property tls
tls: boolean;
interface ConnectionPoolOptions
interface ConnectionPoolOptions extends Omit<ConnectionOptions, 'id' | 'generation'> {}
Modifiers
@public
property loadBalanced
loadBalanced: boolean;
If we are in load balancer mode.
property maxConnecting
maxConnecting: number;
The maximum number of connections that may be in the process of being established concurrently by the connection pool.
property maxIdleTimeMS
maxIdleTimeMS: number;
The maximum amount of time a connection should remain idle in the connection pool before being marked idle.
property maxPoolSize
maxPoolSize: number;
The maximum number of connections that may be associated with a pool at a given time. This includes in use and available connections.
property minPoolSize
minPoolSize: number;
The minimum number of connections that MUST exist at any moment in a single connection pool.
property waitQueueTimeoutMS
waitQueueTimeoutMS: number;
The maximum amount of time operation execution should wait for a connection to become available. The default is 0 which means there is no limit.
interface ConnectOptions
interface ConnectOptions {}
Modifiers
@public
property readPreference
readPreference?: ReadPreference;
interface CountDocumentsOptions
interface CountDocumentsOptions extends AggregateOptions {}
Modifiers
@public
interface CountOptions
interface CountOptions extends CommandOperationOptions {}
Modifiers
@public
interface CreateCollectionOptions
interface CreateCollectionOptions extends CommandOperationOptions {}
Modifiers
@public
property autoIndexId
autoIndexId?: boolean;
Deprecated
Create an index on the _id field of the document. This option is deprecated in MongoDB 3.2+ and will be removed once no longer supported by the server.
property capped
capped?: boolean;
Create a capped collection
property changeStreamPreAndPostImages
changeStreamPreAndPostImages?: { enabled: boolean;};
If set, enables pre-update and post-update document events to be included for any change streams that listen on this collection.
property clusteredIndex
clusteredIndex?: ClusteredCollectionOptions;
A document specifying configuration options for clustered collections. For MongoDB 5.3 and above.
property encryptedFields
encryptedFields?: Document;
Modifiers
@experimental
property expireAfterSeconds
expireAfterSeconds?: number;
The number of seconds after which a document in a timeseries or clustered collection expires.
property flags
flags?: number;
Available for the MMAPv1 storage engine only to set the usePowerOf2Sizes and the noPadding flag
property indexOptionDefaults
indexOptionDefaults?: Document;
Allows users to specify a default configuration for indexes when creating a collection
property max
max?: number;
The maximum number of documents in the capped collection
property pipeline
pipeline?: Document[];
An array that consists of the aggregation pipeline stage. Creates the view by applying the specified pipeline to the viewOn collection or view
property pkFactory
pkFactory?: PkFactory;
A primary key factory function for generation of custom _id keys.
property size
size?: number;
The size of the capped collection in bytes
property storageEngine
storageEngine?: Document;
Allows users to specify configuration to the storage engine on a per-collection basis when creating a collection
property timeseries
timeseries?: TimeSeriesCollectionOptions;
A document specifying configuration options for timeseries collections.
property validationAction
validationAction?: string;
Determines whether to error on invalid documents or just warn about the violations but allow invalid documents to be inserted
property validationLevel
validationLevel?: string;
Determines how strictly MongoDB applies the validation rules to existing documents during an update
property validator
validator?: Document;
Allows users to specify validation rules or expressions for the collection. For more information, see Document Validation
property viewOn
viewOn?: string;
The name of the source collection or view from which to create the view. The name is not the full namespace of the collection or view (i.e., does not include the database name and implies the same database as the view to create)
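A sketch of Db.createCollection() using a few of these options; the database and collection names, sizes, and field names are placeholders.
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('app');

// A capped collection bounded by size and document count.
await db.createCollection('audit_log', { capped: true, size: 10 * 1024 * 1024, max: 50_000 });

// A timeseries collection whose measurements expire after 30 days.
await db.createCollection('metrics', {
  timeseries: { timeField: 'ts', metaField: 'sensorId', granularity: 'minutes' },
  expireAfterSeconds: 60 * 60 * 24 * 30,
});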
interface CreateIndexesOptions
interface CreateIndexesOptions extends Omit<CommandOperationOptions, 'writeConcern'> {}
Modifiers
@public
property '2dsphereIndexVersion'
'2dsphereIndexVersion'?: number;
property background
background?: boolean;
Creates the index in the background, yielding whenever possible.
property bits
bits?: number;
property bucketSize
bucketSize?: number;
property commitQuorum
commitQuorum?: number | string;
(MongoDB 4.4. or higher) Specifies how many data-bearing members of a replica set, including the primary, must complete the index builds successfully before the primary marks the indexes as ready. This option accepts the same values for the "w" field in a write concern plus "votingMembers", which indicates all voting data-bearing nodes.
property default_language
default_language?: string;
property expireAfterSeconds
expireAfterSeconds?: number;
Allows documents to expire automatically after the specified number of seconds when the index is applied to a date field (TTL index, MongoDB 2.2 or higher)
property hidden
hidden?: boolean;
Specifies that the index should exist on the target collection but should not be used by the query planner when executing operations. (MongoDB 4.4 or higher)
property language_override
language_override?: string;
property max
max?: number;
For geospatial indexes set the high bound for the co-ordinates.
property min
min?: number;
For geospatial indexes set the lower bound for the co-ordinates.
property name
name?: string;
Override the autogenerated index name (useful if the resulting name is larger than 128 bytes)
property partialFilterExpression
partialFilterExpression?: Document;
Creates a partial index based on the given filter object (MongoDB 3.2 or higher)
property sparse
sparse?: boolean;
Creates a sparse index.
property storageEngine
storageEngine?: Document;
Allows users to configure the storage engine on a per-index basis when creating an index. (MongoDB 3.0 or higher)
property textIndexVersion
textIndexVersion?: number;
property unique
unique?: boolean;
Creates a unique index.
property version
version?: number;
Specifies the index version number, either 0 or 1.
property weights
weights?: Document;
property wildcardProjection
wildcardProjection?: Document;
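A sketch of these options passed to Collection.createIndex(); the collection and field names are placeholders.
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const users = client.db('app').collection('users');

// Unique index with a custom name, restricted to documents matching the partial filter.
const indexName = await users.createIndex(
  { email: 1 },
  { unique: true, name: 'email_unique', partialFilterExpression: { verified: true } }
);
console.log(indexName); // 'email_unique'

// TTL index: documents expire 3600 seconds after their createdAt value.
await users.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 });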
interface CursorStreamOptions
interface CursorStreamOptions {}
Modifiers
@public
method transform
transform: (this: void, doc: Document) => Document;
A transformation method applied to each document emitted by the stream
interface DataKey
interface DataKey {}
The schema for a DataKey in the key vault collection.
Modifiers
@public
property creationDate
creationDate: Date;
property keyAltNames
keyAltNames?: string[];
property keyMaterial
keyMaterial: Binary;
property masterKey
masterKey: Document;
property status
status: number;
property updateDate
updateDate: Date;
property version
version?: number;
interface DbOptions
interface DbOptions extends BSONSerializeOptions, WriteConcernOptions {}
Modifiers
@public
property authSource
authSource?: string;
The database to authenticate against if the user's credentials are stored in a database other than this one.
property forceServerObjectId
forceServerObjectId?: boolean;
Force server to assign _id values instead of driver.
property pkFactory
pkFactory?: PkFactory;
A primary key factory object for generation of custom _id keys.
property readConcern
readConcern?: ReadConcern;
Specify a read concern for the collection. (only MongoDB 3.2 or higher supported)
property readPreference
readPreference?: ReadPreferenceLike;
The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
property retryWrites
retryWrites?: boolean;
Should retry failed writes
property timeoutMS
timeoutMS?: number;
Specifies the time an operation will run until it throws a timeout error
Modifiers
@experimental
interface DbStatsOptions
interface DbStatsOptions extends CommandOperationOptions {}
Modifiers
@public
property scale
scale?: number;
Divide the returned sizes by scale value.
interface DeleteManyModel
interface DeleteManyModel<TSchema extends Document = Document> {}
Modifiers
@public
interface DeleteOneModel
interface DeleteOneModel<TSchema extends Document = Document> {}
Modifiers
@public
interface DeleteOptions
interface DeleteOptions extends CommandOperationOptions, WriteConcernOptions {}
Modifiers
@public
property collation
collation?: CollationOptions;
Specifies the collation to use for the operation
property hint
hint?: string | Document;
Specify that the query should only consider plans using the hinted index
property let
let?: Document;
Map of parameter names and values that can be accessed using $$var (requires MongoDB 5.0).
property ordered
ordered?: boolean;
If true, when an insert fails, don't execute the remaining writes. If false, continue with remaining inserts when one fails.
interface DeleteResult
interface DeleteResult {}
Modifiers
@public
property acknowledged
acknowledged: boolean;
Indicates whether this write result was acknowledged. If not, then all other members of this result will be undefined.
property deletedCount
deletedCount: number;
The number of documents that were deleted
interface DeleteStatement
interface DeleteStatement {}
Modifiers
@public
property collation
collation?: CollationOptions;
Specifies the collation to use for the operation.
property hint
hint?: Hint;
A document or string that specifies the index to use to support the query predicate.
property limit
limit: number;
The number of matching documents to delete.
property q
q: Document;
The query that matches documents to delete.
interface DriverInfo
interface DriverInfo {}
Modifiers
@public
interface DropCollectionOptions
interface DropCollectionOptions extends CommandOperationOptions {}
Modifiers
@public
property encryptedFields
encryptedFields?: Document;
Modifiers
@experimental
interface EndSessionOptions
interface EndSessionOptions {}
Modifiers
@public
property force
force?: boolean;
property forceClear
forceClear?: boolean;
property timeoutMS
timeoutMS?: number;
Specifies the time an operation will run until it throws a timeout error
interface ErrorDescription
interface ErrorDescription extends Document {}
Modifiers
@public
property $err
$err?: string;
property errInfo
errInfo?: Document;
property errmsg
errmsg?: string;
property errorLabels
errorLabels?: string[];
property message
message?: string;
interface EstimatedDocumentCountOptions
interface EstimatedDocumentCountOptions extends CommandOperationOptions {}
Modifiers
@public
property maxTimeMS
maxTimeMS?: number;
The maximum amount of time to allow the operation to run.
This option is sent only if the caller explicitly provides a value. The default is to not send a value.
interface ExplainCommandOptions
interface ExplainCommandOptions {}
Modifiers
@public
interface ExplainOptions
interface ExplainOptions {}
When set, this configures an explain command. Valid values are boolean (for legacy compatibility, see ExplainVerbosityLike), a string containing the explain verbosity, or an object containing the verbosity and an optional maxTimeMS.
Examples of valid usage:
collection.find({ name: 'john doe' }, { explain: true });
collection.find({ name: 'john doe' }, { explain: false });
collection.find({ name: 'john doe' }, { explain: 'queryPlanner' });
collection.find({ name: 'john doe' }, { explain: { verbosity: 'queryPlanner' } });
maxTimeMS can be configured to limit the amount of time the server spends executing an explain by providing an object:
// limits the `explain` command to no more than 2 seconds
collection.find({ name: 'john doe' }, {
  explain: {
    verbosity: 'queryPlanner',
    maxTimeMS: 2000
  }
});
Modifiers
@public
property explain
explain?: ExplainVerbosityLike | ExplainCommandOptions;
Specifies the verbosity mode for the explain output.
interface FilterOperators
interface FilterOperators<TValue> extends NonObjectIdLikeDocument {}
Modifiers
@public
property $all
$all?: ReadonlyArray<any>;
property $bitsAllClear
$bitsAllClear?: BitwiseFilter;
property $bitsAllSet
$bitsAllSet?: BitwiseFilter;
property $bitsAnyClear
$bitsAnyClear?: BitwiseFilter;
property $bitsAnySet
$bitsAnySet?: BitwiseFilter;
property $elemMatch
$elemMatch?: Document;
property $eq
$eq?: TValue;
property $exists
$exists?: boolean;
When true, $exists matches the documents that contain the field, including documents where the field value is null.
property $expr
$expr?: Record<string, any>;
property $geoIntersects
$geoIntersects?: { $geometry: Document;};
property $geoWithin
$geoWithin?: Document;
property $gt
$gt?: TValue;
property $gte
$gte?: TValue;
property $in
$in?: ReadonlyArray<TValue>;
property $jsonSchema
$jsonSchema?: Record<string, any>;
property $lt
$lt?: TValue;
property $lte
$lte?: TValue;
property $maxDistance
$maxDistance?: number;
property $mod
$mod?: TValue extends number ? [number, number] : never;
property $ne
$ne?: TValue;
property $near
$near?: Document;
property $nearSphere
$nearSphere?: Document;
property $nin
$nin?: ReadonlyArray<TValue>;
property $not
$not?: TValue extends string ? FilterOperators<TValue> | RegExp : FilterOperators<TValue>;
property $options
$options?: TValue extends string ? string : never;
property $rand
$rand?: Record<string, never>;
property $regex
$regex?: TValue extends string ? RegExp | BSONRegExp | string : never;
property $size
$size?: TValue extends ReadonlyArray<any> ? number : never;
property $type
$type?: BSONType | BSONTypeAlias;
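A sketch showing how Filter<TSchema> accepts either exact values or these FilterOperators per field; the schema and values are placeholders.
import { MongoClient } from 'mongodb';

interface User { name: string; age: number; tags: string[] }

const client = new MongoClient('mongodb://localhost:27017');
const users = client.db('app').collection<User>('users');

const admins = await users
  .find({
    age: { $gte: 18, $lt: 65 },
    name: { $regex: /^jo/i },            // $regex only type-checks for string-typed fields
    tags: { $all: ['admin'], $size: 2 }, // $size only type-checks for array-typed fields
  })
  .toArray();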
interface FindOneAndDeleteOptions
interface FindOneAndDeleteOptions extends CommandOperationOptions {}
Modifiers
@public
property hint
hint?: Document;
An optional hint for query optimization. See the update command reference for more information.
property includeResultMetadata
includeResultMetadata?: boolean;
Return the ModifyResult instead of the modified document. Defaults to false
property let
let?: Document;
Map of parameter names and values that can be accessed using $$var (requires MongoDB 5.0).
property projection
projection?: Document;
Limits the fields to return for all matching documents.
property sort
sort?: Sort;
Determines which document the operation modifies if the query selects multiple documents.
interface FindOneAndReplaceOptions
interface FindOneAndReplaceOptions extends CommandOperationOptions {}
Modifiers
@public
property bypassDocumentValidation
bypassDocumentValidation?: boolean;
Allow driver to bypass schema validation.
property hint
hint?: Document;
An optional hint for query optimization. See the update command reference for more information.
property includeResultMetadata
includeResultMetadata?: boolean;
Return the ModifyResult instead of the modified document. Defaults to false
property let
let?: Document;
Map of parameter names and values that can be accessed using $$var (requires MongoDB 5.0).
property projection
projection?: Document;
Limits the fields to return for all matching documents.
property returnDocument
returnDocument?: ReturnDocument;
When set to 'after', returns the updated document rather than the original. The default is 'before'.
property sort
sort?: Sort;
Determines which document the operation modifies if the query selects multiple documents.
property upsert
upsert?: boolean;
Upsert the document if it does not exist.
interface FindOneAndUpdateOptions
interface FindOneAndUpdateOptions extends CommandOperationOptions {}
Modifiers
@public
property arrayFilters
arrayFilters?: Document[];
Optional list of array filters referenced in filtered positional operators
property bypassDocumentValidation
bypassDocumentValidation?: boolean;
Allow driver to bypass schema validation.
property hint
hint?: Document;
An optional hint for query optimization. See the update command reference for more information.
property includeResultMetadata
includeResultMetadata?: boolean;
Return the ModifyResult instead of the modified document. Defaults to false
property let
let?: Document;
Map of parameter names and values that can be accessed using $$var (requires MongoDB 5.0).
property projection
projection?: Document;
Limits the fields to return for all matching documents.
property returnDocument
returnDocument?: ReturnDocument;
When set to 'after', returns the updated document rather than the original. The default is 'before'.
property sort
sort?: Sort;
Determines which document the operation modifies if the query selects multiple documents.
property upsert
upsert?: boolean;
Upsert the document if it does not exist.
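A sketch of these options passed to Collection.findOneAndUpdate(), implementing a simple counter; the collection and _id value are placeholders.
import { MongoClient, ReturnDocument } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const counters = client.db('app').collection<{ _id: string; seq: number }>('counters');

// Returns the post-update document and creates it on first use.
const updated = await counters.findOneAndUpdate(
  { _id: 'invoice' },
  { $inc: { seq: 1 } },
  { upsert: true, returnDocument: ReturnDocument.AFTER, projection: { seq: 1 } }
);
console.log(updated?.seq);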
interface FindOptions
interface FindOptions<TSchema extends Document = Document> extends Omit<CommandOperationOptions, 'writeConcern' | 'explain'>, AbstractCursorOptions {}
Modifiers
@public
property allowDiskUse
allowDiskUse?: boolean;
Allows disk use for blocking sort operations exceeding 100MB memory. (MongoDB 3.2 or higher)
property allowPartialResults
allowPartialResults?: boolean;
For queries against a sharded collection, allows the command (or subsequent getMore commands) to return partial results, rather than an error, if one or more queried shards are unavailable.
property awaitData
awaitData?: boolean;
Specify if the cursor is a tailable-await cursor. Requires tailable to be true.
property batchSize
batchSize?: number;
Set the batchSize for the getMoreCommand when iterating over the query results.
property collation
collation?: CollationOptions;
Specify collation (MongoDB 3.4 or higher) settings for update operation (see 3.4 documentation for available fields).
property explain
explain?: ExplainOptions['explain'];
Specifies the verbosity mode for the explain output.
Deprecated
This API is deprecated in favor of collection.find().explain().
property hint
hint?: Hint;
Tell the query to use specific indexes in the query. Object of indexes to use, e.g. {'_id': 1}
property let
let?: Document;
Map of parameter names and values that can be accessed using $$var (requires MongoDB 5.0).
property limit
limit?: number;
Sets the limit of documents returned in the query.
property max
max?: Document;
The exclusive upper bound for a specific index
property maxAwaitTimeMS
maxAwaitTimeMS?: number;
The maximum amount of time for the server to wait on new documents to satisfy a tailable cursor query. Requires tailable and awaitData to be true.
property maxTimeMS
maxTimeMS?: number;
Number of milliseconds to wait before aborting the query.
property min
min?: Document;
The inclusive lower bound for a specific index
property noCursorTimeout
noCursorTimeout?: boolean;
The server normally times out idle cursors after an inactivity period (10 minutes) to prevent excess memory use. Set this option to prevent that.
property oplogReplay
oplogReplay?: boolean;
Option to enable an optimized code path for queries looking for a particular range of ts values in the oplog. Requires tailable to be true.
Deprecated
Starting from MongoDB 4.4 this flag is not needed and will be ignored.
property projection
projection?: Document;
The fields to return in the query. Object of fields to either include or exclude (one of, not both), e.g. {'a': 1, 'b': 1} **or** {'a': 0, 'b': 0}
property returnKey
returnKey?: boolean;
If true, returns only the index keys in the resulting documents.
property showRecordId
showRecordId?: boolean;
Determines whether to return the record identifier for each document. If true, adds a field $recordId to the returned documents.
property singleBatch
singleBatch?: boolean;
Determines whether to close the cursor after the first batch. Defaults to false.
property skip
skip?: number;
Set to skip N documents ahead in your query (useful for pagination).
property sort
sort?: Sort;
Set to sort the documents coming back from the query. Array of indexes, e.g. [['a', 1]] etc.
property tailable
tailable?: boolean;
Specify if the cursor is tailable.
property timeout
timeout?: boolean;
Specify if the cursor can timeout.
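A sketch of a typical paginated query built from these options; the collection, fields, and page numbers are placeholders.
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const products = client.db('shop').collection('products');

const page = await products
  .find(
    { price: { $lte: 100 } },
    {
      projection: { name: 1, price: 1 },
      sort: { price: -1, _id: 1 },
      skip: 40,       // third page of 20
      limit: 20,
      maxTimeMS: 2_000,
    }
  )
  .toArray();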
interface GCPEncryptionKeyOptions
interface GCPEncryptionKeyOptions {}
Configuration options for making a GCP encryption key
Modifiers
@public
property endpoint
endpoint?: string | undefined;
KMS URL, defaults to https://www.googleapis.com/auth/cloudkms
property keyName
keyName: string;
Key name
property keyRing
keyRing: string;
Key ring name
property keyVersion
keyVersion?: string | undefined;
Key version
property location
location: string;
Location name (e.g. "global")
property projectId
projectId: string;
GCP project ID
interface GridFSBucketOptions
interface GridFSBucketOptions extends WriteConcernOptions {}
Modifiers
@public
property bucketName
bucketName?: string;
The 'files' and 'chunks' collections will be prefixed with the bucket name followed by a dot.
property chunkSizeBytes
chunkSizeBytes?: number;
Number of bytes stored in each chunk. Defaults to 255KB
property readPreference
readPreference?: ReadPreference;
Read preference to be passed to read operations
property timeoutMS
timeoutMS?: number;
Specifies the lifetime duration of a gridFS stream. If any async operations are in progress when this timeout expires, the stream will throw a timeout error.
Modifiers
@experimental
interface GridFSBucketReadStreamOptions
interface GridFSBucketReadStreamOptions {}
Modifiers
@public
property end
end?: number;
0-indexed non-negative byte offset to the end of the file contents to be returned by the stream. end is non-inclusive.
property skip
skip?: number;
property sort
sort?: Sort;
property start
start?: number;
0-indexed non-negative byte offset from the beginning of the file
property timeoutMS
timeoutMS?: number;
Specifies the time an operation will run until it throws a timeout error
Modifiers
@experimental
interface GridFSBucketReadStreamOptionsWithRevision
interface GridFSBucketReadStreamOptionsWithRevision extends GridFSBucketReadStreamOptions {}
Modifiers
@public
property revision
revision?: number;
The revision number relative to the oldest file with the given filename. 0 gets you the oldest file, 1 gets you the 2nd oldest, -1 gets you the newest.
interface GridFSBucketWriteStreamOptions
interface GridFSBucketWriteStreamOptions extends WriteConcernOptions {}
Modifiers
@public
property aliases
aliases?: string[];
Array of strings to store in the file document's aliases field.
Deprecated
Will be removed in the next major version. Add an aliases field to the metadata document instead.
property chunkSizeBytes
chunkSizeBytes?: number;
Overwrite this bucket's chunkSizeBytes for this file
property contentType
contentType?: string;
String to store in the file document's contentType field.
Deprecated
Will be removed in the next major version. Add a contentType field to the metadata document instead.
property id
id?: ObjectId;
Custom file id for the GridFS file.
property metadata
metadata?: Document;
Object to store in the file document's metadata field
property timeoutMS
timeoutMS?: number;
Specifies the time an operation will run until it throws a timeout error
Modifiers
@experimental
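A sketch tying the GridFSBucket*Options interfaces together: bucket construction, an upload that stores contentType in metadata (per the deprecation notes above), and a download of the newest revision. The URI, bucket name, and file contents are placeholders.
import { MongoClient, GridFSBucket } from 'mongodb';
import { Readable } from 'node:stream';

const client = new MongoClient('mongodb://localhost:27017');
const bucket = new GridFSBucket(client.db('media'), {
  bucketName: 'images',
  chunkSizeBytes: 255 * 1024,
});

// Upload: put contentType/aliases in metadata instead of the deprecated top-level fields.
const upload = bucket.openUploadStream('logo.png', {
  metadata: { contentType: 'image/png', aliases: ['brand-logo'] },
});
Readable.from([Buffer.from('...png bytes...')]).pipe(upload);

// Download the newest file stored under this name.
const download = bucket.openDownloadStreamByName('logo.png', { revision: -1 });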
interface GridFSChunk
interface GridFSChunk {}
Modifiers
@public
interface GridFSFile
interface GridFSFile {}
Modifiers
@public
property aliases
aliases?: string[];
Deprecated
Will be removed in the next major version.
property chunkSize
chunkSize: number;
property contentType
contentType?: string;
Deprecated
Will be removed in the next major version.
property filename
filename: string;
property length
length: number;
property metadata
metadata?: Document;
property uploadDate
uploadDate: Date;
interface HedgeOptions
interface HedgeOptions {}
Modifiers
@public
property enabled
enabled?: boolean;
Explicitly enable or disable hedged reads.
interface IdPInfo
interface IdPInfo {}
The information returned by the server on the IDP server.
Modifiers
@public
property clientId
clientId: string;
A unique client ID for this OIDC client.
property issuer
issuer: string;
A URL which describes the Authentication Server. This identifier should be the iss of provided access tokens, and be viable for RFC8414 metadata discovery and RFC9207 identification.
property requestScopes
requestScopes?: string[];
A list of additional scopes to request from IdP.
interface IdPServerResponse
interface IdPServerResponse {}
The response from the IdP server with the access token and optional expiration time and refresh token.
Modifiers
@public
property accessToken
accessToken: string;
The OIDC access token.
property expiresInSeconds
expiresInSeconds?: number;
The time when the access token expires. For future use.
property refreshToken
refreshToken?: string;
The refresh token, if applicable, to be used by the callback to request a new token from the issuer.
interface IndexDescription
interface IndexDescription extends Pick< CreateIndexesOptions, | 'background' | 'unique' | 'partialFilterExpression' | 'sparse' | 'hidden' | 'expireAfterSeconds' | 'storageEngine' | 'version' | 'weights' | 'default_language' | 'language_override' | 'textIndexVersion' | '2dsphereIndexVersion' | 'bits' | 'min' | 'max' | 'bucketSize' | 'wildcardProjection' > {}
Modifiers
@public
interface IndexInformationOptions
interface IndexInformationOptions extends ListIndexesOptions {}
Modifiers
@public
property full
full?: boolean;
When `true`, an array of index descriptions is returned. When `false`, the driver returns an object with keys corresponding to index names and values corresponding to the entries of the indexes' keys.
For example, given the following indexes:
[ { name: 'a_1', key: { a: 1 } }, { name: 'b_1_c_1', key: { b: 1, c: 1 } } ]
When `full` is `true`, the above array is returned. When `full` is `false`, the following is returned:
{ 'a_1': [['a', 1]], 'b_1_c_1': [['b', 1], ['c', 1]] }
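A minimal sketch of the two shapes, assuming a connection string and a 'books' collection that are not part of the original text:
```ts
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const books = client.db('test').collection('books');

// full: true -> an array of index description documents
const descriptions = await books.indexInformation({ full: true });

// full: false -> a compact { indexName: [[field, direction], ...] } map
const compact = await books.indexInformation({ full: false });

console.log(descriptions, compact);
await client.close();
```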
interface InsertManyResult
interface InsertManyResult<TSchema = Document> {}
Modifiers
@public
property acknowledged
acknowledged: boolean;
Indicates whether this write result was acknowledged. If not, then all other members of this result will be undefined
property insertedCount
insertedCount: number;
The number of inserted documents for this operation
property insertedIds
insertedIds: { [key: number]: InferIdType<TSchema>;};
Map of the index of the inserted document to the id of the inserted document
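A minimal sketch of reading an InsertManyResult; the connection string and collection are assumptions:
```ts
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const books = client.db('test').collection('books');

const result = await books.insertMany([{ title: 'A' }, { title: 'B' }]);
if (result.acknowledged) {
  console.log(result.insertedCount);  // 2
  console.log(result.insertedIds[0]); // _id assigned to the first inserted document
}
await client.close();
```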
interface InsertOneModel
interface InsertOneModel<TSchema extends Document = Document> {}
Modifiers
@public
property document
document: OptionalId<TSchema>;
The document to insert.
interface InsertOneOptions
interface InsertOneOptions extends CommandOperationOptions {}
Modifiers
@public
property bypassDocumentValidation
bypassDocumentValidation?: boolean;
Allow driver to bypass schema validation.
property forceServerObjectId
forceServerObjectId?: boolean;
Force server to assign _id values instead of driver.
interface InsertOneResult
interface InsertOneResult<TSchema = Document> {}
Modifiers
@public
property acknowledged
acknowledged: boolean;
Indicates whether this write result was acknowledged. If not, then all other members of this result will be undefined
property insertedId
insertedId: InferIdType<TSchema>;
The identifier that was inserted. If the server generated the identifier, this value will be null as the driver does not have access to that data
interface KMIPEncryptionKeyOptions
interface KMIPEncryptionKeyOptions {}
Configuration options for making a KMIP encryption key
Modifiers
@public
property delegated
delegated?: boolean;
If true, this key should be decrypted by the KMIP server.
Requires `mongodb-client-encryption>=6.0.1`.
property endpoint
endpoint?: string;
Host with optional port.
property keyId
keyId?: string;
keyId is the KMIP Unique Identifier to a 96 byte KMIP Secret Data managed object.
If keyId is omitted, a random 96 byte KMIP Secret Data managed object will be created.
interface KMIPKMSProviderConfiguration
interface KMIPKMSProviderConfiguration {}
Modifiers
@public
property endpoint
endpoint?: string;
The output endpoint string. The endpoint consists of a hostname and port separated by a colon. E.g. "example.com:123". A port is always present.
interface KMSProviders
interface KMSProviders {}
Configuration options that are used by specific KMS providers during key generation, encryption, and decryption.
Named KMS providers _are not supported_ for automatic KMS credential fetching.
Modifiers
@public
property aws
aws?: AWSKMSProviderConfiguration | Record<string, never>;
Configuration options for using 'aws' as your KMS provider
property azure
azure?: AzureKMSProviderConfiguration | Record<string, never>;
Configuration options for using 'azure' as your KMS provider
property gcp
gcp?: GCPKMSProviderConfiguration | Record<string, never>;
Configuration options for using 'gcp' as your KMS provider
property kmip
kmip?: KMIPKMSProviderConfiguration;
Configuration options for using 'kmip' as your KMS provider
property local
local?: LocalKMSProviderConfiguration;
Configuration options for using 'local' as your KMS provider
index signature
[key: `aws:${string}`]: AWSKMSProviderConfiguration;
index signature
[key: `local:${string}`]: LocalKMSProviderConfiguration;
index signature
[key: `kmip:${string}`]: KMIPKMSProviderConfiguration;
index signature
[key: `azure:${string}`]: AzureKMSProviderConfiguration;
index signature
[key: `gcp:${string}`]: GCPKMSProviderConfiguration;
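As a hedged sketch, a KMSProviders document using the 'local' provider with ClientEncryption; the key vault namespace is an assumption, the throwaway in-memory master key is for illustration only, and the optional mongodb-client-encryption package must be installed:
```ts
import { randomBytes } from 'node:crypto';
import { MongoClient, ClientEncryption } from 'mongodb';

// Illustration only: a real deployment loads a persistent 96-byte master key
// from secure storage (or uses aws/azure/gcp/kmip instead of 'local').
const kmsProviders = { local: { key: randomBytes(96) } };

const client = new MongoClient('mongodb://localhost:27017');
const clientEncryption = new ClientEncryption(client, {
  keyVaultNamespace: 'encryption.__keyVault', // assumed namespace
  kmsProviders
});

const dataKeyId = await clientEncryption.createDataKey('local');
console.log(dataKeyId);
await client.close();
```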
interface ListCollectionsOptions
interface ListCollectionsOptions extends Omit<CommandOperationOptions, 'writeConcern'> {}
Modifiers
@public
property authorizedCollections
authorizedCollections?: boolean;
Since 4.0: If true and nameOnly is true, allows a user without the required privilege (i.e. listCollections action on the database) to run the command when access control is enforced.
property batchSize
batchSize?: number;
The batchSize for the returned command cursor or, for servers before 2.8, the systems batch collection
property nameOnly
nameOnly?: boolean;
Since 4.0: If true, will only return the collection name in the response, and will omit additional info
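A minimal sketch combining nameOnly and authorizedCollections; the database name is an assumption:
```ts
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');

// Returns only collection names, and works for users that lack the
// database-wide listCollections privilege when access control is enforced.
const collections = await client
  .db('test')
  .listCollections({}, { nameOnly: true, authorizedCollections: true })
  .toArray();

console.log(collections.map(c => c.name));
await client.close();
```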
interface ListDatabasesOptions
interface ListDatabasesOptions extends CommandOperationOptions {}
Modifiers
@public
property authorizedDatabases
authorizedDatabases?: boolean;
A flag that determines which databases are returned based on the user privileges when access control is enabled
property filter
filter?: Document;
A query predicate that determines which databases are listed
property nameOnly
nameOnly?: boolean;
A flag to indicate whether the command should return just the database names, or return both database names and size information
interface ListDatabasesResult
interface ListDatabasesResult {}
Modifiers
@public
property databases
databases: ({ name: string; sizeOnDisk?: number; empty?: boolean;} & Document)[];
property ok
ok: 1 | 0;
property totalSize
totalSize?: number;
property totalSizeMb
totalSizeMb?: number;
interface LocalKMSProviderConfiguration
interface LocalKMSProviderConfiguration {}
Modifiers
@public
property key
key: Binary | Uint8Array | string;
The master key used to encrypt/decrypt data keys. A 96-byte long Buffer or base64 encoded string.
interface ModifyResult
interface ModifyResult<TSchema = Document> {}
Modifiers
@public
property lastErrorObject
lastErrorObject?: Document;
property ok
ok: 0 | 1;
property value
value: WithId<TSchema> | null;
interface MongoClientOptions
interface MongoClientOptions extends BSONSerializeOptions, SupportedNodeConnectionOptions {}
Describes all possible URI query options for the mongo client
See Also
https://www.mongodb.com/docs/manual/reference/connection-string
Modifiers
@public
property appName
appName?: string;
The name of the application that created this MongoClient instance. MongoDB 3.4 and newer will print this value in the server log upon establishing each connection. It is also recorded in the slow query log and profile collections
property auth
auth?: Auth;
The auth settings for when connecting to the server.
property authMechanism
authMechanism?: AuthMechanism;
Specify the authentication mechanism that MongoDB will use to authenticate the connection.
property authMechanismProperties
authMechanismProperties?: AuthMechanismProperties;
Specify properties for the specified authMechanism as a comma-separated list of colon-separated key-value pairs.
property authSource
authSource?: string;
Specify the database name associated with the user’s credentials.
property autoEncryption
autoEncryption?: AutoEncryptionOptions;
Optionally enable in-use auto encryption
Remarks
Automatic encryption is an enterprise only feature that only applies to operations on a collection. Automatic encryption is not supported for operations on a database or view, and operations that are not bypassed will result in error (see [libmongocrypt: Auto Encryption Allow-List](https://github.com/mongodb/specifications/blob/master/source/client-side-encryption/client-side-encryption.md#libmongocrypt-auto-encryption-allow-list)). To bypass automatic encryption for all operations, set bypassAutoEncryption=true in AutoEncryptionOpts.
Automatic encryption requires the authenticated user to have the [listCollections privilege action](https://www.mongodb.com/docs/manual/reference/command/listCollections/#dbcmd.listCollections).
If a MongoClient with a limited connection pool size (i.e. a non-zero maxPoolSize) is configured with AutoEncryptionOptions, a separate internal MongoClient is created if any of the following are true:
- AutoEncryptionOptions.keyVaultClient is not passed.
- AutoEncryptionOptions.bypassAutomaticEncryption is false.
If an internal MongoClient is created, it is configured with the same options as the parent MongoClient except minPoolSize is set to 0 and AutoEncryptionOptions is omitted.
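A minimal sketch of enabling auto encryption on a client, assuming a local master key and an assumed key vault namespace; production setups usually use a cloud KMS provider and an encryptedFields/schemaMap configuration, and require mongodb-client-encryption plus the crypt_shared library or mongocryptd:
```ts
import { randomBytes } from 'node:crypto';
import { MongoClient } from 'mongodb';

const encryptedClient = new MongoClient('mongodb://localhost:27017', {
  autoEncryption: {
    keyVaultNamespace: 'encryption.__keyVault',       // assumed namespace
    kmsProviders: { local: { key: randomBytes(96) } } // illustration-only master key
  }
});
```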
property compressors
compressors?: CompressorName[] | string;
An array or comma-delimited string of compressors to enable network compression for communication between this client and a mongod/mongos instance.
property connectTimeoutMS
connectTimeoutMS?: number;
The time in milliseconds to attempt a connection before timing out.
property directConnection
directConnection?: boolean;
Allow a driver to force a Single topology type with a connection string containing one host
property driverInfo
driverInfo?: DriverInfo;
Allows a wrapping driver to amend the client metadata generated by the driver to include information about the wrapping driver
property forceServerObjectId
forceServerObjectId?: boolean;
Force server to assign `_id` values instead of driver
property heartbeatFrequencyMS
heartbeatFrequencyMS?: number;
heartbeatFrequencyMS controls when the driver checks the state of the MongoDB deployment. Specify the interval (in milliseconds) between checks, counted from the end of the previous check until the beginning of the next one.
property journal
journal?: boolean;
The journal write concern
Deprecated
Please use the `writeConcern` option instead
property loadBalanced
loadBalanced?: boolean;
Instruct the driver it is connecting to a load balancer fronting a mongos like service
property localThresholdMS
localThresholdMS?: number;
The size (in milliseconds) of the latency window for selecting among multiple suitable MongoDB instances.
property maxConnecting
maxConnecting?: number;
The maximum number of connections that may be in the process of being established concurrently by the connection pool.
property maxIdleTimeMS
maxIdleTimeMS?: number;
The maximum number of milliseconds that a connection can remain idle in the pool before being removed and closed.
property maxPoolSize
maxPoolSize?: number;
The maximum number of connections in the connection pool.
property maxStalenessSeconds
maxStalenessSeconds?: number;
Specifies, in seconds, how stale a secondary can be before the client stops using it for read operations.
property minHeartbeatFrequencyMS
minHeartbeatFrequencyMS?: number;
Sets the minimum heartbeat frequency. In the event that the driver has to frequently re-check a server's availability, it will wait at least this long since the previous check to avoid wasted effort.
property minPoolSize
minPoolSize?: number;
The minimum number of connections in the connection pool.
property monitorCommands
monitorCommands?: boolean;
Enable command monitoring for this client
property noDelay
noDelay?: boolean;
TCP Connection no delay
property pkFactory
pkFactory?: PkFactory;
A primary key factory function for generation of custom `_id` keys
property proxyHost
proxyHost?: string;
Configures a Socks5 proxy host used for creating TCP connections.
property proxyPassword
proxyPassword?: string;
Configures a Socks5 proxy password when the proxy in proxyHost requires username/password authentication.
property proxyPort
proxyPort?: number;
Configures a Socks5 proxy port used for creating TCP connections.
property proxyUsername
proxyUsername?: string;
Configures a Socks5 proxy username when the proxy in proxyHost requires username/password authentication.
property readConcern
readConcern?: ReadConcernLike;
Specify a read concern for the collection (only MongoDB 3.2 or higher supported)
property readConcernLevel
readConcernLevel?: ReadConcernLevel;
The level of isolation
property readPreference
readPreference?: ReadPreferenceMode | ReadPreference;
Specifies the read preferences for this connection
property readPreferenceTags
readPreferenceTags?: TagSet[];
Specifies the tags document as a comma-separated list of colon-separated key-value pairs.
property replicaSet
replicaSet?: string;
Specifies the name of the replica set, if the mongod is a member of a replica set.
property retryReads
retryReads?: boolean;
Enables retryable reads.
property retryWrites
retryWrites?: boolean;
Enable retryable writes.
property serverApi
serverApi?: ServerApi | ServerApiVersion;
Server API version
property serverMonitoringMode
serverMonitoringMode?: ServerMonitoringMode;
Instructs the driver monitors to use a specific monitoring mode
property serverSelectionTimeoutMS
serverSelectionTimeoutMS?: number;
Specifies how long (in milliseconds) to block for server selection before throwing an exception.
property socketTimeoutMS
socketTimeoutMS?: number;
The time in milliseconds to attempt a send or receive on a socket before the attempt times out.
property srvMaxHosts
srvMaxHosts?: number;
The maximum number of hosts to connect to when using an srv connection string; a setting of `0` means unlimited hosts
property srvServiceName
srvServiceName?: string;
Modifies the srv URI to look like:
_{srvServiceName}._tcp.{hostname}.{domainname}
Querying this DNS URI is expected to respond with SRV records
property ssl
ssl?: boolean;
A boolean to enable or disable TLS/SSL for the connection. (The ssl option is equivalent to the tls option.)
property timeoutMS
timeoutMS?: number;
Specifies the time an operation will run until it throws a timeout error
Modifiers
@experimental
property tls
tls?: boolean;
Enables or disables TLS/SSL for the connection.
property tlsAllowInvalidCertificates
tlsAllowInvalidCertificates?: boolean;
Bypasses validation of the certificates presented by the mongod/mongos instance
property tlsAllowInvalidHostnames
tlsAllowInvalidHostnames?: boolean;
Disables hostname validation of the certificate presented by the mongod/mongos instance.
property tlsCAFile
tlsCAFile?: string;
Specifies the location of a local .pem file that contains the root certificate chain from the Certificate Authority. This file is used to validate the certificate presented by the mongod/mongos instance.
property tlsCertificateKeyFile
tlsCertificateKeyFile?: string;
Specifies the location of a local .pem file that contains the client's TLS/SSL certificate and key.
property tlsCertificateKeyFilePassword
tlsCertificateKeyFilePassword?: string;
Specifies the password to decrypt the tlsCertificateKeyFile.
property tlsCRLFile
tlsCRLFile?: string;
Specifies the location of a local CRL .pem file that contains the client revocation list.
property tlsInsecure
tlsInsecure?: boolean;
Disables various certificate validations.
property w
w?: W;
The write concern w value
Deprecated
Please use the `writeConcern` option instead
property waitQueueTimeoutMS
waitQueueTimeoutMS?: number;
The maximum time in milliseconds that a thread can wait for a connection to become available.
property writeConcern
writeConcern?: WriteConcern | WriteConcernSettings;
A MongoDB WriteConcern, which describes the level of acknowledgement requested from MongoDB for write operations.
See Also
https://www.mongodb.com/docs/manual/reference/write-concern/
property wtimeoutMS
wtimeoutMS?: number;
The write concern timeout
Deprecated
Please use the `writeConcern` option instead
property zlibCompressionLevel
zlibCompressionLevel?: 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | undefined;
An integer that specifies the compression level if using zlib for network compression.
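To tie several of the options above together, a minimal sketch with assumed values; every field shown is optional:
```ts
import { MongoClient, ReadPreference } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017', {
  appName: 'inventory-service', // shows up in server logs and profiling output
  maxPoolSize: 20,
  retryWrites: true,
  readPreference: ReadPreference.secondaryPreferred,
  writeConcern: { w: 'majority', journal: true },
  compressors: ['zlib'],
  zlibCompressionLevel: 6
});
```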
interface MongoCredentialsOptions
interface MongoCredentialsOptions {}
Modifiers
@public
interface MongoNetworkErrorOptions
interface MongoNetworkErrorOptions {}
Modifiers
@public
property beforeHandshake
beforeHandshake?: boolean;
Indicates the timeout happened before a connection handshake completed
property cause
cause?: Error;
interface MongoOptions
interface MongoOptions extends Required< Pick< MongoClientOptions, | 'autoEncryption' | 'connectTimeoutMS' | 'directConnection' | 'driverInfo' | 'forceServerObjectId' | 'minHeartbeatFrequencyMS' | 'heartbeatFrequencyMS' | 'localThresholdMS' | 'maxConnecting' | 'maxIdleTimeMS' | 'maxPoolSize' | 'minPoolSize' | 'monitorCommands' | 'noDelay' | 'pkFactory' | 'raw' | 'replicaSet' | 'retryReads' | 'retryWrites' | 'serverSelectionTimeoutMS' | 'socketTimeoutMS' | 'srvMaxHosts' | 'srvServiceName' | 'tlsAllowInvalidCertificates' | 'tlsAllowInvalidHostnames' | 'tlsInsecure' | 'waitQueueTimeoutMS' | 'zlibCompressionLevel' > >, SupportedNodeConnectionOptions {}
Parsed Mongo Client Options.
User supplied options are documented by MongoClientOptions.
**NOTE:** The client's options parsing is subject to change to support new features. This type is provided to aid with inspection of options after parsing; it should not be relied upon programmatically.
Options are sourced from:
- connection string
- options object passed to the MongoClient constructor
- file system (ex. tls settings)
- environment variables
- DNS SRV records and TXT records
Not all options may be present after client construction as some are obtained from asynchronous operations.
Modifiers
@public
property appName
appName?: string;
property compressors
compressors: CompressorName[];
property credentials
credentials?: MongoCredentials;
property dbName
dbName: string;
property directConnection
directConnection: boolean;
property hosts
hosts: HostAddress[];
property loadBalanced
loadBalanced: boolean;
property metadata
metadata: ClientMetadata;
property proxyHost
proxyHost?: string;
property proxyPassword
proxyPassword?: string;
property proxyPort
proxyPort?: number;
property proxyUsername
proxyUsername?: string;
property readConcern
readConcern: ReadConcern;
property readPreference
readPreference: ReadPreference;
property serverApi
serverApi: ServerApi;
property serverMonitoringMode
serverMonitoringMode: ServerMonitoringMode;
property srvHost
srvHost?: string;
property timeoutMS
timeoutMS?: number;
property tls
tls: boolean;
# NOTE ABOUT TLS Options
If `tls` is provided as an option, it is equivalent to setting the `ssl` option.
NodeJS native TLS options are passed through to the socket and retain their original types.
### Additional options:
| nodejs native option | driver spec equivalent option name | driver option type |
|:---------------------|:-----------------------------------|:-------------------|
| `ca` | `tlsCAFile` | `string` |
| `crl` | `tlsCRLFile` | `string` |
| `cert` | `tlsCertificateKeyFile` | `string` |
| `key` | `tlsCertificateKeyFile` | `string` |
| `passphrase` | `tlsCertificateKeyFilePassword` | `string` |
| `rejectUnauthorized` | `tlsAllowInvalidCertificates` | `boolean` |
| `checkServerIdentity` | `tlsAllowInvalidHostnames` | `boolean` |
| see note below | `tlsInsecure` | `boolean` |
If `tlsInsecure` is set to `true`, then it will set the node native options `checkServerIdentity` to a no-op and `rejectUnauthorized` to `false`.
If `tlsInsecure` is set to `false`, then it will set the node native options `checkServerIdentity` to a no-op and `rejectUnauthorized` to the inverse value of `tlsAllowInvalidCertificates`. If `tlsAllowInvalidCertificates` is not set, then `rejectUnauthorized` will be set to `true`.
### Note on `tlsCAFile`, `tlsCertificateKeyFile` and `tlsCRLFile`
The files specified by the paths passed in to the `tlsCAFile`, `tlsCertificateKeyFile` and `tlsCRLFile` fields are read lazily on the first call to `MongoClient.connect`. Once these files have been read and the `ca`, `cert`, `crl` and `key` fields are populated, they will not be read again on subsequent calls to `MongoClient.connect`. As a result, until the first call to `MongoClient.connect`, the `ca`, `cert`, `crl` and `key` fields will be undefined.
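A minimal sketch of the file-based TLS options described above; the hostname, file paths and environment variable are assumptions:
```ts
import { MongoClient } from 'mongodb';

const tlsClient = new MongoClient('mongodb://db.example.com:27017', {
  tls: true,
  tlsCAFile: '/etc/ssl/mongodb/ca.pem',                 // read lazily on first connect()
  tlsCertificateKeyFile: '/etc/ssl/mongodb/client.pem', // client certificate and key in one file
  tlsCertificateKeyFilePassword: process.env.MONGO_KEY_PASSPHRASE
});
```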
property tlsCAFile
tlsCAFile?: string;
property tlsCertificateKeyFile
tlsCertificateKeyFile?: string;
property tlsCRLFile
tlsCRLFile?: string;
property writeConcern
writeConcern: WriteConcern;
interface MonitorOptions
interface MonitorOptions extends Omit<ConnectionOptions, 'id' | 'generation' | 'hostAddress'> {}
Modifiers
@public
property connectTimeoutMS
connectTimeoutMS: number;
property heartbeatFrequencyMS
heartbeatFrequencyMS: number;
property minHeartbeatFrequencyMS
minHeartbeatFrequencyMS: number;
property serverMonitoringMode
serverMonitoringMode: ServerMonitoringMode;
interface OIDCCallbackParams
interface OIDCCallbackParams {}
The parameters that the driver provides to the user supplied human or machine callback.
The version number is used to communicate callback API changes that are not breaking but that users may want to know about and review their implementation. Users may wish to check the version number and throw an error if their expected version number and the one provided do not match.
Modifiers
@public
property idpInfo
idpInfo?: IdPInfo;
The IdP information returned from the server.
property refreshToken
refreshToken?: string;
The refresh token, if applicable, to be used by the callback to request a new token from the issuer.
property timeoutContext
timeoutContext: AbortSignal;
The context in which to timeout the OIDC callback.
property username
username?: string;
Optional username.
property version
version: 1;
The current OIDC API version.
interface OIDCResponse
interface OIDCResponse {}
The response required to be returned from the machine or human callback workflows' callback.
Modifiers
@public
property accessToken
accessToken: string;
The OIDC access token.
property expiresInSeconds
expiresInSeconds?: number;
The time when the access token expires. For future use.
property refreshToken
refreshToken?: string;
The refresh token, if applicable, to be used by the callback to request a new token from the issuer.
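A minimal sketch of a machine OIDC callback that returns an OIDCResponse, assuming the callback is registered through the OIDC_CALLBACK auth mechanism property and that a workload identity token is available at the path shown:
```ts
import { readFile } from 'node:fs/promises';
import { MongoClient, OIDCCallbackParams, OIDCResponse } from 'mongodb';

const oidcCallback = async (_params: OIDCCallbackParams): Promise<OIDCResponse> => {
  // Assumed token location; Kubernetes-style projected service account tokens work this way.
  const accessToken = await readFile('/var/run/secrets/tokens/mongodb-token', 'utf8');
  return { accessToken: accessToken.trim() };
};

const client = new MongoClient('mongodb://localhost:27017', {
  authMechanism: 'MONGODB-OIDC',
  authMechanismProperties: { OIDC_CALLBACK: oidcCallback }
});
```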
interface OperationOptions
interface OperationOptions extends BSONSerializeOptions {}
Modifiers
@public
property omitReadPreference
omitReadPreference?: boolean;
property readPreference
readPreference?: ReadPreferenceLike;
The preferred read preference (ReadPreference.primary, ReadPreference.primary_preferred, ReadPreference.secondary, ReadPreference.secondary_preferred, ReadPreference.nearest).
property session
session?: ClientSession;
Specify ClientSession for this command
property timeoutMS
timeoutMS?: number;
Specifies the time an operation will run until it throws a timeout error
Modifiers
@experimental
property willRetryWrite
willRetryWrite?: boolean;
interface ProxyOptions
interface ProxyOptions {}
Modifiers
@public
property proxyHost
proxyHost?: string;
property proxyPassword
proxyPassword?: string;
property proxyPort
proxyPort?: number;
property proxyUsername
proxyUsername?: string;
interface RangeOptions
interface RangeOptions {}
RangeOptions specifies index options for a Queryable Encryption field supporting "range" queries. min, max, sparsity, trimFactor and range must match the values set in the encryptedFields of the destination collection. For double and decimal128, min/max/precision must all be set, or all be unset.
Modifiers
@public
property max
max?: any;
max is the maximum value for the encrypted index. Required if precision is set.
property min
min?: any;
min is the minimum value for the encrypted index. Required if precision is set.
property precision
precision?: number;
property sparsity
sparsity?: Long | bigint;
sparsity may be used to tune performance. Must be non-negative. When omitted, a default value is used.
property trimFactor
trimFactor?: Int32 | number;
trimFactor may be used to tune performance. Must be non-negative. When omitted, a default value is used.
interface ReadPreferenceFromOptions
interface ReadPreferenceFromOptions extends ReadPreferenceLikeOptions {}
Modifiers
@public
property hedge
hedge?: HedgeOptions;
property readPreferenceTags
readPreferenceTags?: TagSet[];
property session
session?: ClientSession;
interface ReadPreferenceLikeOptions
interface ReadPreferenceLikeOptions extends ReadPreferenceOptions {}
Modifiers
@public
property readPreference
readPreference?: | ReadPreferenceLike | { mode?: ReadPreferenceMode; preference?: ReadPreferenceMode; tags?: TagSet[]; maxStalenessSeconds?: number; };
interface ReadPreferenceOptions
interface ReadPreferenceOptions {}
Modifiers
@public
property hedge
hedge?: HedgeOptions;
Server mode in which the same query is dispatched in parallel to multiple replica set members.
property maxStalenessSeconds
maxStalenessSeconds?: number;
Max secondary read staleness in seconds. Minimum value is 90 seconds.
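A minimal sketch of passing these options through a ReadPreference instance; the collection and the 120-second staleness bound are assumptions, and hedged reads only apply to sharded clusters:
```ts
import { MongoClient, ReadPreference } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');

const readPreference = new ReadPreference('secondaryPreferred', undefined, {
  hedge: { enabled: true }, // sharded clusters only
  maxStalenessSeconds: 120  // must be >= 90
});

const docs = await client
  .db('test')
  .collection('books')
  .find({}, { readPreference })
  .toArray();
```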
interface RenameOptions
interface RenameOptions extends CommandOperationOptions {}
Modifiers
@public
property dropTarget
dropTarget?: boolean;
Drop the target collection if it already exists.
property new_collection
new_collection?: boolean;
Unclear
interface ReplaceOneModel
interface ReplaceOneModel<TSchema extends Document = Document> {}
Modifiers
@public
property collation
collation?: CollationOptions;
Specifies a collation.
property filter
filter: Filter<TSchema>;
The filter to limit the replaced document.
property hint
hint?: Hint;
The index to use. If specified, then the query system will only consider plans using the hinted index.
property replacement
replacement: WithoutId<TSchema>;
The document with which to replace the matched document.
property upsert
upsert?: boolean;
When true, creates a new document if no document matches the query.
interface ReplaceOptions
interface ReplaceOptions extends CommandOperationOptions {}
Modifiers
@public
property bypassDocumentValidation
bypassDocumentValidation?: boolean;
If true, allows the write to opt-out of document level validation
property collation
collation?: CollationOptions;
Specifies a collation
property hint
hint?: string | Document;
Specify that the update query should only consider plans using the hinted index
property let
let?: Document;
Map of parameter names and values that can be accessed using $$var (requires MongoDB 5.0).
property upsert
upsert?: boolean;
When true, creates a new document if no document matches the query
interface ResumeOptions
interface ResumeOptions {}
Modifiers
@public
Deprecated
Please use the ChangeStreamCursorOptions type instead.
property batchSize
batchSize?: number;
property collation
collation?: CollationOptions;
property fullDocument
fullDocument?: string;
property maxAwaitTimeMS
maxAwaitTimeMS?: number;
property readPreference
readPreference?: ReadPreference;
property resumeAfter
resumeAfter?: ResumeToken;
property startAfter
startAfter?: ResumeToken;
property startAtOperationTime
startAtOperationTime?: Timestamp;
interface RootFilterOperators
interface RootFilterOperators<TSchema> extends Document {}
Modifiers
@public
property $and
$and?: Filter<TSchema>[];
property $comment
$comment?: string | Document;
property $nor
$nor?: Filter<TSchema>[];
property $or
$or?: Filter<TSchema>[];
property $text
$text?: { $search: string; $language?: string; $caseSensitive?: boolean; $diacriticSensitive?: boolean;};
property $where
$where?: string | ((this: TSchema) => boolean);
interface SearchIndexDescription
interface SearchIndexDescription extends Document {}
Modifiers
@public
property definition
definition: Document;
The index definition.
property name
name?: string;
The name of the index.
property type
type?: string;
The type of the index. Currently `search` or `vectorSearch` are supported.
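A minimal sketch of passing a SearchIndexDescription to Collection.createSearchIndex; this only works against deployments that support Atlas Search, and the names and dynamic mapping are assumptions:
```ts
import { MongoClient } from 'mongodb';

const client = new MongoClient(process.env.MONGODB_ATLAS_URI ?? 'mongodb://localhost:27017');
const books = client.db('test').collection('books');

const indexName = await books.createSearchIndex({
  name: 'default',
  type: 'search',
  definition: { mappings: { dynamic: true } }
});
console.log(indexName); // resolves to the name of the created index
```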
interface SelectServerOptions
interface SelectServerOptions {}
Modifiers
@public
property operationName
operationName: string;
property previousServer
previousServer?: ServerDescription;
property readPreference
readPreference?: ReadPreferenceLike;
property serverSelectionTimeoutMS
serverSelectionTimeoutMS?: number;
How long to block for server selection before throwing an error
property session
session?: ClientSession;
interface ServerApi
interface ServerApi {}
Modifiers
@public
property deprecationErrors
deprecationErrors?: boolean;
property strict
strict?: boolean;
property version
version: ServerApiVersion;
interface StreamDescriptionOptions
interface StreamDescriptionOptions {}
Modifiers
@public
property compressors
compressors?: CompressorName[];
property loadBalanced
loadBalanced: boolean;
property logicalSessionTimeoutMinutes
logicalSessionTimeoutMinutes?: number;
interface TimeSeriesCollectionOptions
interface TimeSeriesCollectionOptions extends Document {}
Configuration options for timeseries collections
See Also
https://www.mongodb.com/docs/manual/core/timeseries-collections/
Modifiers
@public
property bucketMaxSpanSeconds
bucketMaxSpanSeconds?: number;
property bucketRoundingSeconds
bucketRoundingSeconds?: number;
property granularity
granularity?: 'seconds' | 'minutes' | 'hours' | string;
property metaField
metaField?: string;
property timeField
timeField: string;
interface TopologyDescriptionOptions
interface TopologyDescriptionOptions {}
Modifiers
@public
property heartbeatFrequencyMS
heartbeatFrequencyMS?: number;
property localThresholdMS
localThresholdMS?: number;
interface TopologyVersion
interface TopologyVersion {}
Modifiers
@public
interface TransactionOptions
interface TransactionOptions extends Omit<CommandOperationOptions, 'timeoutMS'> {}
Configuration options for a transaction.
Modifiers
@public
property maxCommitTimeMS
maxCommitTimeMS?: number;
Specifies the maximum amount of time to allow a commit action on a transaction to run in milliseconds
property readConcern
readConcern?: ReadConcernLike;
A default read concern for commands in this transaction
property readPreference
readPreference?: ReadPreferenceLike;
A default read preference for commands in this transaction
property writeConcern
writeConcern?: WriteConcern;
A default writeConcern for commands in this transaction
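A minimal sketch of supplying TransactionOptions to ClientSession.withTransaction; the collections and values are assumptions, and a replica set or sharded cluster is required:
```ts
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('test');
const books = db.collection('books');
const orders = db.collection('orders');

const session = client.startSession();
try {
  await session.withTransaction(
    async () => {
      await books.updateOne({ title: 'A' }, { $inc: { copies: -1 } }, { session });
      await orders.insertOne({ title: 'A', qty: 1 }, { session });
    },
    {
      readConcern: { level: 'snapshot' },
      writeConcern: { w: 'majority' },
      readPreference: 'primary',
      maxCommitTimeMS: 10_000
    }
  );
} finally {
  await session.endSession();
  await client.close();
}
```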
interface TypedEventEmitter
interface TypedEventEmitter<Events extends EventsDescription> extends EventEmitter {}
Typescript type safe event emitter
Modifiers
@public
method addListener
addListener: { <EventKey extends keyof Events>( event: EventKey, listener: Events[EventKey] ): this; ( event: CommonEvents, listener: (eventName: string | symbol, listener: GenericListener) => void ): this; (event: string | symbol, listener: GenericListener): this;};
method emit
emit: <EventKey extends keyof Events>( event: EventKey | symbol, ...args: Parameters<Events[EventKey]>) => boolean;
method eventNames
eventNames: () => string[];
method getMaxListeners
getMaxListeners: () => number;
method listenerCount
listenerCount: <EventKey extends keyof Events>( type: EventKey | CommonEvents | symbol | string) => number;
method listeners
listeners: <EventKey extends keyof Events>( event: EventKey | CommonEvents | symbol | string) => Events[EventKey][];
method off
off: { <EventKey extends keyof Events>( event: EventKey, listener: Events[EventKey] ): this; ( event: CommonEvents, listener: (eventName: string | symbol, listener: GenericListener) => void ): this; (event: string | symbol, listener: GenericListener): this;};
method on
on: { <EventKey extends keyof Events>( event: EventKey, listener: Events[EventKey] ): this; ( event: CommonEvents, listener: (eventName: string | symbol, listener: GenericListener) => void ): this; (event: string | symbol, listener: GenericListener): this;};
method once
once: { <EventKey extends keyof Events>( event: EventKey, listener: Events[EventKey] ): this; ( event: CommonEvents, listener: (eventName: string | symbol, listener: GenericListener) => void ): this; (event: string | symbol, listener: GenericListener): this;};
method prependListener
prependListener: { <EventKey extends keyof Events>( event: EventKey, listener: Events[EventKey] ): this; ( event: CommonEvents, listener: (eventName: string | symbol, listener: GenericListener) => void ): this; (event: string | symbol, listener: GenericListener): this;};
method prependOnceListener
prependOnceListener: { <EventKey extends keyof Events>( event: EventKey, listener: Events[EventKey] ): this; ( event: CommonEvents, listener: (eventName: string | symbol, listener: GenericListener) => void ): this; (event: string | symbol, listener: GenericListener): this;};
method rawListeners
rawListeners: <EventKey extends keyof Events>( event: EventKey | CommonEvents | symbol | string) => Events[EventKey][];
method removeAllListeners
removeAllListeners: <EventKey extends keyof Events>( event?: EventKey | CommonEvents | symbol | string) => this;
method removeListener
removeListener: { <EventKey extends keyof Events>( event: EventKey, listener: Events[EventKey] ): this; ( event: CommonEvents, listener: (eventName: string | symbol, listener: GenericListener) => void ): this; (event: string | symbol, listener: GenericListener): this;};
method setMaxListeners
setMaxListeners: (n: number) => this;
interface UpdateDescription
interface UpdateDescription<TSchema extends Document = Document> {}
Modifiers
@public
property disambiguatedPaths
disambiguatedPaths?: Document;
A document containing additional information about any ambiguous update paths from the update event. The document maps the full ambiguous update path to an array containing the actual resolved components of the path. For example, given a document shaped like `{ a: { '0': 0 } }` and an update of `{ $inc: { 'a.0': 1 } }`, disambiguated paths would look like the following:
{ 'a.0': ['a', '0'] }
This field is only present when there are ambiguous paths that are updated as a part of the update event and `showExpandedEvents` is enabled for the change stream.
Since server version 6.1.0.
property removedFields
removedFields?: string[];
An array of field names that were removed from the document.
property truncatedArrays
truncatedArrays?: Array<{ /** The name of the truncated field. */ field: string; /** The number of elements in the truncated array. */ newSize: number;}>;
An array of documents which record array truncations performed with pipeline-based updates using one or more of the following stages:
- $addFields
- $set
- $replaceRoot
- $replaceWith
property updatedFields
updatedFields?: Partial<TSchema>;
A document containing key:value pairs of names of the fields that were changed, and the new value for those fields.
interface UpdateManyModel
interface UpdateManyModel<TSchema extends Document = Document> {}
Modifiers
@public
property arrayFilters
arrayFilters?: Document[];
A set of filters specifying to which array elements an update should apply.
property collation
collation?: CollationOptions;
Specifies a collation.
property filter
filter: Filter<TSchema>;
The filter to limit the updated documents.
property hint
hint?: Hint;
The index to use. If specified, then the query system will only consider plans using the hinted index.
property update
update: UpdateFilter<TSchema> | Document[];
The modifications to apply. The value can be either: UpdateFilter - A document that contains update operator expressions, Document[] - an aggregation pipeline.
property upsert
upsert?: boolean;
When true, creates a new document if no document matches the query.
interface UpdateOneModel
interface UpdateOneModel<TSchema extends Document = Document> {}
Modifiers
@public
property arrayFilters
arrayFilters?: Document[];
A set of filters specifying to which array elements an update should apply.
property collation
collation?: CollationOptions;
Specifies a collation.
property filter
filter: Filter<TSchema>;
The filter to limit the updated documents.
property hint
hint?: Hint;
The index to use. If specified, then the query system will only consider plans using the hinted index.
property update
update: UpdateFilter<TSchema> | Document[];
The modifications to apply. The value can be either: UpdateFilter - A document that contains update operator expressions, Document[] - an aggregation pipeline.
property upsert
upsert?: boolean;
When true, creates a new document if no document matches the query.
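A minimal sketch of feeding these models to Collection.bulkWrite; the collection and field names are assumptions:
```ts
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const books = client.db('test').collection('books');

await books.bulkWrite([
  { updateOne: { filter: { title: 'A' }, update: { $set: { inStock: true } }, upsert: true } },
  { updateMany: { filter: { inStock: false }, update: { $set: { archived: true } } } },
  { replaceOne: { filter: { title: 'B' }, replacement: { title: 'B', inStock: false } } }
]);
await client.close();
```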
interface UpdateOptions
interface UpdateOptions extends CommandOperationOptions {}
Modifiers
@public
property arrayFilters
arrayFilters?: Document[];
A set of filters specifying to which array elements an update should apply
property bypassDocumentValidation
bypassDocumentValidation?: boolean;
If true, allows the write to opt-out of document level validation
property collation
collation?: CollationOptions;
Specifies a collation
property hint
hint?: Hint;
Specify that the update query should only consider plans using the hinted index
property let
let?: Document;
Map of parameter names and values that can be accessed using $$var (requires MongoDB 5.0).
property upsert
upsert?: boolean;
When true, creates a new document if no document matches the query
interface UpdateResult
interface UpdateResult<TSchema extends Document = Document> {}
`TSchema` is the schema of the collection
Modifiers
@public
property acknowledged
acknowledged: boolean;
Indicates whether this write result was acknowledged. If not, then all other members of this result will be undefined
property matchedCount
matchedCount: number;
The number of documents that matched the filter
property modifiedCount
modifiedCount: number;
The number of documents that were modified
property upsertedCount
upsertedCount: number;
The number of documents that were upserted
property upsertedId
upsertedId: InferIdType<TSchema> | null;
The identifier of the inserted document if an upsert took place
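A minimal sketch of inspecting an UpdateResult after an upsert; names are assumptions:
```ts
import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const books = client.db('test').collection('books');

const result = await books.updateOne(
  { title: 'A' },
  { $set: { inStock: true } },
  { upsert: true }
);

if (result.acknowledged) {
  console.log(result.matchedCount, result.modifiedCount, result.upsertedCount);
  console.log(result.upsertedId); // null unless the upsert inserted a new document
}
await client.close();
```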
interface UpdateStatement
interface UpdateStatement {}
Modifiers
@public
property arrayFilters
arrayFilters?: Document[];
An array of filter documents that determines which array elements to modify for an update operation on an array field.
property collation
collation?: CollationOptions;
Specifies the collation to use for the operation.
property hint
hint?: Hint;
A document or string that specifies the index to use to support the query predicate.
property multi
multi?: boolean;
If true, updates all documents that meet the query criteria.
property q
q: Document;
The query that matches documents to update.
property u
u: Document | Document[];
The modifications to apply.
property upsert
upsert?: boolean;
If true, perform an insert if no documents match the query.
interface ValidateCollectionOptions
interface ValidateCollectionOptions extends CommandOperationOptions {}
Modifiers
@public
property background
background?: boolean;
Validates a collection in the background, without interrupting read or write traffic (only in MongoDB 4.4+)
interface WriteConcernErrorData
interface WriteConcernErrorData {}
Modifiers
@public
interface WriteConcernErrorResult
interface WriteConcernErrorResult {}
The type of the result property of MongoWriteConcernError
Modifiers
@public
property code
code?: number;
property errorLabels
errorLabels?: string[];
property ok
ok: number;
property writeConcernError
writeConcernError: { code: number; errmsg: string; codeName?: string; errInfo?: Document;};
index signature
[x: string | number]: unknown;
interface WriteConcernOptions
interface WriteConcernOptions {}
Modifiers
@public
property writeConcern
writeConcern?: WriteConcern | WriteConcernSettings;
Write Concern as an object
interface WriteConcernSettings
interface WriteConcernSettings {}
Modifiers
@public
property fsync
fsync?: boolean | 1;
The file sync write concern.
Deprecated
Will be removed in the next major version. Please use the journal option.
property j
j?: boolean;
The journal write concern.
Deprecated
Will be removed in the next major version. Please use the journal option.
property journal
journal?: boolean;
The journal write concern
property w
w?: W;
The write concern
property wtimeout
wtimeout?: number;
The write concern timeout.
property wtimeoutMS
wtimeoutMS?: number;
The write concern timeout.
Type Aliases
type AbstractCursorEvents
type AbstractCursorEvents = { [AbstractCursor.CLOSE](): void;};
Modifiers
@public
type AcceptedFields
type AcceptedFields<TSchema, FieldType, AssignableType> = { readonly [key in KeysOfAType<TSchema, FieldType>]?: AssignableType;};
Modifiers
@public
type AddToSetOperators
type AddToSetOperators<Type> = { $each?: Array<Flatten<Type>>;};
Modifiers
@public
type AlternativeType
type AlternativeType<T> = T extends ReadonlyArray<infer U> ? T | RegExpOrString<U> : RegExpOrString<T>;
It is possible to search using alternative types in mongodb e.g. string types can be searched using a regex in mongo array types can be searched using their element type
Modifiers
@public
type AnyBulkWriteOperation
type AnyBulkWriteOperation<TSchema extends Document = Document> = | { insertOne: InsertOneModel<TSchema>; } | { replaceOne: ReplaceOneModel<TSchema>; } | { updateOne: UpdateOneModel<TSchema>; } | { updateMany: UpdateManyModel<TSchema>; } | { deleteOne: DeleteOneModel<TSchema>; } | { deleteMany: DeleteManyModel<TSchema>; };
Modifiers
@public
type AnyClientBulkWriteModel
type AnyClientBulkWriteModel<TSchema extends Document> = | ClientInsertOneModel<TSchema> | ClientReplaceOneModel<TSchema> | ClientUpdateOneModel<TSchema> | ClientUpdateManyModel<TSchema> | ClientDeleteOneModel<TSchema> | ClientDeleteManyModel<TSchema>;
Used to represent any of the client bulk write models that can be passed as an array to MongoClient#bulkWrite.
Modifiers
@public
type AnyError
type AnyError = MongoError | Error;
Modifiers
@public
type ArrayElement
type ArrayElement<Type> = Type extends ReadonlyArray<infer Item> ? Item : never;
Modifiers
@public
type ArrayOperator
type ArrayOperator<Type> = { $each?: Array<Flatten<Type>>; $slice?: number; $position?: number; $sort?: Sort;};
Modifiers
@public
type AuthMechanism
type AuthMechanism = (typeof AuthMechanism)[keyof typeof AuthMechanism];
Modifiers
@public
type AutoEncryptionExtraOptions
type AutoEncryptionExtraOptions = NonNullable<AutoEncryptionOptions['extraOptions']>;
Extra options related to the mongocryptd process. _Available in MongoDB 6.0 or higher._
Modifiers
@public
type AutoEncryptionLoggerLevel
type AutoEncryptionLoggerLevel = (typeof AutoEncryptionLoggerLevel)[keyof typeof AutoEncryptionLoggerLevel];
The level of severity of the log message
| Value | Level |
|-------|-------|
| 0 | Fatal Error |
| 1 | Error |
| 2 | Warning |
| 3 | Info |
| 4 | Trace |
Modifiers
@public
type AzureKMSProviderConfiguration
type AzureKMSProviderConfiguration = | { /** * The tenant ID identifies the organization for the account */ tenantId: string; /** * The client ID to authenticate a registered application */ clientId: string; /** * The client secret to authenticate a registered application */ clientSecret: string; /** * If present, a host with optional port. E.g. "example.com" or "example.com:443". * This is optional, and only needed if customer is using a non-commercial Azure instance * (e.g. a government or China account, which use different URLs). * Defaults to "login.microsoftonline.com" */ identityPlatformEndpoint?: string | undefined; } | { /** * If present, an access token to authenticate with Azure. */ accessToken: string; };
Modifiers
@public
type BatchType
type BatchType = (typeof BatchType)[keyof typeof BatchType];
Modifiers
@public
type BitwiseFilter
type BitwiseFilter = | number /** numeric bit mask */ | Binary /** BinData bit mask */ | ReadonlyArray<number>;
Modifiers
@public
type BSONTypeAlias
type BSONTypeAlias = keyof typeof BSONType;
Modifiers
@public
type Callback
type Callback<T = any> = (error?: AnyError, result?: T) => void;
MongoDB Driver style callback
Modifiers
@public
type ChangeStreamDocument
type ChangeStreamDocument<TSchema extends Document = Document> = | ChangeStreamInsertDocument<TSchema> | ChangeStreamUpdateDocument<TSchema> | ChangeStreamReplaceDocument<TSchema> | ChangeStreamDeleteDocument<TSchema> | ChangeStreamDropDocument | ChangeStreamRenameDocument | ChangeStreamDropDatabaseDocument | ChangeStreamInvalidateDocument | ChangeStreamCreateIndexDocument | ChangeStreamCreateDocument | ChangeStreamCollModDocument | ChangeStreamDropIndexDocument | ChangeStreamShardCollectionDocument | ChangeStreamReshardCollectionDocument | ChangeStreamRefineCollectionShardKeyDocument;
Modifiers
@public
type ChangeStreamEvents
type ChangeStreamEvents< TSchema extends Document = Document, TChange extends Document = ChangeStreamDocument<TSchema>> = { resumeTokenChanged(token: ResumeToken): void; init(response: any): void; more(response?: any): void; response(): void; end(): void; error(error: Error): void; change(change: TChange): void; /** * @remarks Note that the `close` event is currently emitted whenever the internal `ChangeStreamCursor` * instance is closed, which can occur multiple times for a given `ChangeStream` instance. * * TODO(NODE-6434): address this issue in NODE-6434 */ close(): void;};
Modifiers
@public
type ClientBulkWriteModel
type ClientBulkWriteModel< SchemaMap extends Record<string, Document> = Record<string, Document>> = { [Namespace in keyof SchemaMap]: AnyClientBulkWriteModel<SchemaMap[Namespace]> & { namespace: Namespace; };}[keyof SchemaMap];
A mapping of namespace strings to collections schemas.
Example 1
```ts
type MongoDBSchemas = {
  'db.books': Book;
  'db.authors': Author;
}

const model: ClientBulkWriteModel<MongoDBSchemas> = {
  namespace: 'db.books',
  name: 'insertOne',
  document: { title: 'Practical MongoDB Aggregations', authorName: 3 } // error `authorName` cannot be number
};
```
The type of the `namespace` field narrows other parts of the BulkWriteModel to use the correct schema for type assertions.
Modifiers
@public
type ClientEncryptionDataKeyProvider
type ClientEncryptionDataKeyProvider = keyof KMSProviders;
A data key provider. Allowed values:
- aws, gcp, local, kmip or azure
- (`mongodb-client-encryption>=6.0.1` only) a named key, in the form of: `aws:<name>`, `gcp:<name>`, `local:<name>`, `kmip:<name>`, `azure:<name>`, where `name` is an alphanumeric string, underscores allowed.
Modifiers
@public
type ClientEncryptionSocketOptions
type ClientEncryptionSocketOptions = Pick< MongoClientOptions, 'autoSelectFamily' | 'autoSelectFamilyAttemptTimeout'>;
Socket options to use for KMS requests.
Modifiers
@public
type ClientEncryptionTlsOptions
type ClientEncryptionTlsOptions = Pick< MongoClientOptions, 'tlsCAFile' | 'tlsCertificateKeyFile' | 'tlsCertificateKeyFilePassword'>;
TLS options to use when connecting. The spec specifically calls out which insecure tls options are not allowed:
- tlsAllowInvalidCertificates
- tlsAllowInvalidHostnames
- tlsInsecure
These options are not included in the type, and are ignored if provided.
Modifiers
@public
type ClientSessionEvents
type ClientSessionEvents = { ended(session: ClientSession): void;};
Modifiers
@public
type CommonEvents
type CommonEvents = 'newListener' | 'removeListener';
Modifiers
@public
type Compressor
type Compressor = (typeof Compressor)[CompressorName];
Modifiers
@public
type CompressorName
type CompressorName = keyof typeof Compressor;
Modifiers
@public
type Condition
type Condition<T> = AlternativeType<T> | FilterOperators<AlternativeType<T>>;
Modifiers
@public
type ConnectionEvents
type ConnectionEvents = { commandStarted(event: CommandStartedEvent): void; commandSucceeded(event: CommandSucceededEvent): void; commandFailed(event: CommandFailedEvent): void; clusterTimeReceived(clusterTime: Document): void; close(): void; pinned(pinType: string): void; unpinned(pinType: string): void;};
Modifiers
@public
type ConnectionPoolEvents
type ConnectionPoolEvents = { connectionPoolCreated(event: ConnectionPoolCreatedEvent): void; connectionPoolReady(event: ConnectionPoolReadyEvent): void; connectionPoolClosed(event: ConnectionPoolClosedEvent): void; connectionPoolCleared(event: ConnectionPoolClearedEvent): void; connectionCreated(event: ConnectionCreatedEvent): void; connectionReady(event: ConnectionReadyEvent): void; connectionClosed(event: ConnectionClosedEvent): void; connectionCheckOutStarted(event: ConnectionCheckOutStartedEvent): void; connectionCheckOutFailed(event: ConnectionCheckOutFailedEvent): void; connectionCheckedOut(event: ConnectionCheckedOutEvent): void; connectionCheckedIn(event: ConnectionCheckedInEvent): void;} & Omit<ConnectionEvents, 'close' | 'message'>;
Modifiers
@public
type CSFLEKMSTlsOptions
type CSFLEKMSTlsOptions = { aws?: ClientEncryptionTlsOptions; gcp?: ClientEncryptionTlsOptions; kmip?: ClientEncryptionTlsOptions; local?: ClientEncryptionTlsOptions; azure?: ClientEncryptionTlsOptions; [key: string]: ClientEncryptionTlsOptions | undefined;};
Modifiers
@public
type CursorFlag
type CursorFlag = (typeof CURSOR_FLAGS)[number];
Modifiers
@public
type CursorTimeoutMode
type CursorTimeoutMode = (typeof CursorTimeoutMode)[keyof typeof CursorTimeoutMode];
Modifiers
@public
@experimental
type DistinctOptions
type DistinctOptions = CommandOperationOptions;
Modifiers
@public
type DropDatabaseOptions
type DropDatabaseOptions = CommandOperationOptions;
Modifiers
@public
type DropIndexesOptions
type DropIndexesOptions = CommandOperationOptions;
Modifiers
@public
type EnhancedOmit
type EnhancedOmit<TRecordOrUnion, KeyUnion> = string extends keyof TRecordOrUnion ? TRecordOrUnion : TRecordOrUnion extends any ? Pick<TRecordOrUnion, Exclude<keyof TRecordOrUnion, KeyUnion>> : never;
TypeScript Omit (Exclude to be specific) does not work for objects with an "any" indexed type, and breaks discriminated unions
Modifiers
@public
type EventEmitterWithState
type EventEmitterWithState = { /* Excluded from this release type: stateChanged */};
Modifiers
@public
type EventsDescription
type EventsDescription = Record<string, GenericListener>;
Event description type
Modifiers
@public
type ExplainVerbosity
type ExplainVerbosity = string;
Modifiers
@public
type ExplainVerbosityLike
type ExplainVerbosityLike = ExplainVerbosity | boolean;
For backwards compatibility, true is interpreted as "allPlansExecution" and false as "queryPlanner".
Modifiers
@public
type Filter
type Filter<TSchema> = { [P in keyof WithId<TSchema>]?: Condition<WithId<TSchema>[P]>;} & RootFilterOperators<WithId<TSchema>>;
A MongoDB filter can be some portion of the schema or a set of operators
Modifiers
@public
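A type-level sketch of a Filter over an assumed Book schema; it compiles without touching a server:
```ts
import { Filter, ObjectId } from 'mongodb';

interface Book {
  _id: ObjectId;
  title: string;
  pages: number;
  tags: string[];
}

const filter: Filter<Book> = {
  pages: { $gte: 100 }, // operator expression on a field
  tags: 'databases',    // array fields also match by element type
  $or: [{ title: /mongo/i }, { title: 'Practical MongoDB Aggregations' }]
};
```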
type FilterOperations
type FilterOperations<T> = T extends Record<string, any> ? { [key in keyof T]?: FilterOperators<T[key]>; } : FilterOperators<T>;
Modifiers
@public
type Flatten
type Flatten<Type> = Type extends ReadonlyArray<infer Item> ? Item : Type;
Modifiers
@public
type GCPKMSProviderConfiguration
type GCPKMSProviderConfiguration = | { /** * The service account email to authenticate */ email: string; /** * A PKCS#8 encrypted key. This can either be a base64 string or a binary representation */ privateKey: string | Buffer; /** * If present, a host with optional port. E.g. "example.com" or "example.com:443". * Defaults to "oauth2.googleapis.com" */ endpoint?: string | undefined; } | { /** * If present, an access token to authenticate with GCP. */ accessToken: string; };
Modifiers
@public
type GenericListener
type GenericListener = (...args: any[]) => void;
Modifiers
@public
type GridFSBucketEvents
type GridFSBucketEvents = { index(): void;};
Modifiers
@public
type GSSAPICanonicalizationValue
type GSSAPICanonicalizationValue = (typeof GSSAPICanonicalizationValue)[keyof typeof GSSAPICanonicalizationValue];
Modifiers
@public
type Hint
type Hint = string | Document;
Modifiers
@public
type IndexDescriptionCompact
type IndexDescriptionCompact = Record< string, [name: string, direction: IndexDirection][]>;
Modifiers
@public
type IndexDescriptionInfo
type IndexDescriptionInfo = Omit<IndexDescription, 'key' | 'version'> & { key: { [key: string]: IndexDirection; }; v?: IndexDescription['version'];} & Document;
The index information returned by the listIndexes command. https://www.mongodb.com/docs/manual/reference/command/listIndexes/#mongodb-dbcommand-dbcmd.listIndexes
Modifiers
@public
type IndexDirection
type IndexDirection = | -1 | 1 | '2d' | '2dsphere' | 'text' | 'geoHaystack' | 'hashed' | number;
Modifiers
@public
type IndexSpecification
type IndexSpecification = OneOrMore< | string | [string, IndexDirection] | { [key: string]: IndexDirection; } | Map<string, IndexDirection>>;
Modifiers
@public
type InferIdType
type InferIdType<TSchema> = TSchema extends { _id: infer IdType;} ? Record<any, never> extends IdType ? never : IdType : TSchema extends { _id?: infer IdType; } ? unknown extends IdType ? ObjectId : IdType : ObjectId;
Given an object shaped type, return the type of the _id field or default to ObjectId
Modifiers
@public
type IntegerType
type IntegerType = number | Int32 | Long | bigint;
Modifiers
@public
type IsAny
type IsAny<Type, ResultIfAny, ResultIfNotAny> = true extends false & Type ? ResultIfAny : ResultIfNotAny;
Modifiers
@public
type Join
type Join<T extends unknown[], D extends string> = T extends [] ? '' : T extends [string | number] ? `${T[0]}` : T extends [string | number, ...infer R] ? `${T[0]}${D}${Join<R, D>}` : string;
Modifiers
@public
type KeysOfAType
type KeysOfAType<TSchema, Type> = { [key in keyof TSchema]: NonNullable<TSchema[key]> extends Type ? key : never;}[keyof TSchema];
Modifiers
@public
type KeysOfOtherType
type KeysOfOtherType<TSchema, Type> = { [key in keyof TSchema]: NonNullable<TSchema[key]> extends Type ? never : key;}[keyof TSchema];
Modifiers
@public
type ListIndexesOptions
type ListIndexesOptions = AbstractCursorOptions & { /* Excluded from this release type: omitMaxTimeMS */};
Modifiers
@public
type ListSearchIndexesOptions
type ListSearchIndexesOptions = Omit< AggregateOptions, 'readConcern' | 'writeConcern'>;
Modifiers
@public
type MatchKeysAndValues
type MatchKeysAndValues<TSchema> = Readonly<Partial<TSchema>> & Record<string, any>;
Modifiers
@public
type MongoClientEvents
type MongoClientEvents = Pick< TopologyEvents, (typeof MONGO_CLIENT_EVENTS)[number]> & { open(mongoClient: MongoClient): void;};
Modifiers
@public
type MongoErrorLabel
type MongoErrorLabel = (typeof MongoErrorLabel)[keyof typeof MongoErrorLabel];
Modifiers
@public
type MonitorEvents
type MonitorEvents = { serverHeartbeatStarted(event: ServerHeartbeatStartedEvent): void; serverHeartbeatSucceeded(event: ServerHeartbeatSucceededEvent): void; serverHeartbeatFailed(event: ServerHeartbeatFailedEvent): void; resetServer(error?: MongoError): void; resetConnectionPool(): void; close(): void;} & EventEmitterWithState;
Modifiers
@public
type NestedPaths
type NestedPaths<Type, Depth extends number[]> = Depth['length'] extends 8 ? [] : Type extends | string | number | bigint | boolean | Date | RegExp | Buffer | Uint8Array | ((...args: any[]) => any) | { _bsontype: string; } ? [] : Type extends ReadonlyArray<infer ArrayType> ? [] | [number, ...NestedPaths<ArrayType, [...Depth, 1]>] : Type extends Map<string, any> ? [string] : Type extends object ? { [Key in Extract<keyof Type, string>]: Type[Key] extends Type ? [Key] : Type extends Type[Key] ? [Key] : Type[Key] extends ReadonlyArray<infer ArrayType> ? Type extends ArrayType ? [Key] : ArrayType extends Type ? [Key] : [Key, ...NestedPaths<Type[Key], [...Depth, 1]>] // child is not structured the same as the parent : [Key, ...NestedPaths<Type[Key], [...Depth, 1]>] | [Key]; }[Extract<keyof Type, string>] : [];
returns tuple of strings (keys to be joined on '.') that represent every path into a schema https://www.mongodb.com/docs/manual/tutorial/query-embedded-documents/
Remarks
Through testing we determined that a depth of 8 is safe for the typescript compiler and provides reasonable compilation times. This number is otherwise not special and should be changed if issues are found with this level of checking. Beyond this depth any helpers that make use of NestedPaths should devolve to not asserting any type safety on the input.
Modifiers
@public
type NestedPathsOfType
type NestedPathsOfType<TSchema, Type> = KeysOfAType< { [Property in Join<NestedPaths<TSchema, []>, '.'>]: PropertyType< TSchema, Property >; }, Type>;
returns keys (strings) for every path into a schema with a value of type https://www.mongodb.com/docs/manual/tutorial/query-embedded-documents/
Modifiers
@public
type NonObjectIdLikeDocument
type NonObjectIdLikeDocument = { [key in keyof ObjectIdLike]?: never;} & Document;
A type that extends Document but forbids anything that "looks like" an object id.
Modifiers
@public
type NotAcceptedFields
type NotAcceptedFields<TSchema, FieldType> = { readonly [key in KeysOfOtherType<TSchema, FieldType>]?: never;};
It prevents using fields with unacceptable types
Modifiers
@public
type NumericType
type NumericType = IntegerType | Decimal128 | Double;
Modifiers
@public
type OIDCCallbackFunction
type OIDCCallbackFunction = (params: OIDCCallbackParams) => Promise<OIDCResponse>;
The signature of the human or machine callback functions.
Modifiers
@public
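A hedged sketch of supplying such a callback for MONGODB-OIDC authentication. fetchAccessToken is a hypothetical helper for your identity provider, and wiring the callback through the OIDC_CALLBACK mechanism property (the machine workflow) is an assumption about your deployment's auth setup.
```ts
import { MongoClient } from 'mongodb';
import type { OIDCCallbackFunction, OIDCCallbackParams, OIDCResponse } from 'mongodb';

// Hypothetical helper: exchanges the callback parameters for an access token at your IdP.
declare function fetchAccessToken(params: OIDCCallbackParams): Promise<string>;

const oidcCallback: OIDCCallbackFunction = async (params): Promise<OIDCResponse> => {
  const accessToken = await fetchAccessToken(params);
  return { accessToken }; // expiry/refresh fields can be added if your IdP provides them
};

const client = new MongoClient('mongodb://localhost:27017', {
  authMechanism: 'MONGODB-OIDC',
  authMechanismProperties: { OIDC_CALLBACK: oidcCallback },
});
```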
type OneOrMore
type OneOrMore<T> = T | ReadonlyArray<T>;
Modifiers
@public
type OnlyFieldsOfType
type OnlyFieldsOfType<TSchema, FieldType = any, AssignableType = FieldType> = IsAny< TSchema[keyof TSchema], AssignableType extends FieldType ? Record<string, FieldType> : Record<string, AssignableType>, AcceptedFields<TSchema, FieldType, AssignableType> & NotAcceptedFields<TSchema, FieldType> & Record<string, AssignableType>>;
Modifiers
@public
type OperationTime
type OperationTime = Timestamp;
Represents a specific point in time on a server. Can be retrieved by using db.command().
See Also
https://www.mongodb.com/docs/manual/reference/method/db.runCommand/#response
Modifiers
@public
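A minimal sketch of capturing an operationTime from a raw command reply and advancing another session to it for causal ordering; the URI and database name are placeholders.
```ts
import { MongoClient } from 'mongodb';
import type { OperationTime } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('test');

const session = client.startSession();
const reply = await db.command({ ping: 1 }, { session });
// Raw command replies carry an operationTime field (see the runCommand response docs above).
const opTime: OperationTime = reply.operationTime;

// A second session can be advanced so its reads are causally after that point in time.
const other = client.startSession();
other.advanceOperationTime(opTime);

await Promise.all([session.endSession(), other.endSession()]);
await client.close();
```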
type OptionalId
type OptionalId<TSchema> = EnhancedOmit<TSchema, '_id'> & { _id?: InferIdType<TSchema>;};
Adds an optional _id field to an object-shaped type.
Modifiers
@public
type OptionalUnlessRequiredId
type OptionalUnlessRequiredId<TSchema> = TSchema extends { _id: any;} ? TSchema : OptionalId<TSchema>;
Adds an optional _id field to an object-shaped type, unless the _id field is required on that type. If _id is required, it remains required.
Modifiers
@public
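A sketch contrasting the two helpers with hypothetical schemas; insertOne accepts OptionalUnlessRequiredId<TSchema>, so _id may only be omitted when the schema does not declare it.
```ts
import { MongoClient } from 'mongodb';
import type { OptionalUnlessRequiredId } from 'mongodb';

// Hypothetical schemas for illustration.
interface AutoIdDoc { name: string }                  // no _id declared: ObjectId is inferred
interface ExplicitIdDoc { _id: number; name: string } // _id declared: it stays required

const client = new MongoClient('mongodb://localhost:27017');
const db = client.db('test');

const a: OptionalUnlessRequiredId<AutoIdDoc> = { name: 'ada' }; // _id may be omitted
await db.collection<AutoIdDoc>('auto').insertOne(a);

const b: OptionalUnlessRequiredId<ExplicitIdDoc> = { _id: 1, name: 'bob' }; // _id required
await db.collection<ExplicitIdDoc>('explicit').insertOne(b);
```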
type ProfilingLevel
type ProfilingLevel = (typeof ProfilingLevel)[keyof typeof ProfilingLevel];
Modifiers
@public
type ProfilingLevelOptions
type ProfilingLevelOptions = CommandOperationOptions;
Modifiers
@public
type PropertyType
type PropertyType<Type, Property extends string> = string extends Property ? unknown : Property extends keyof Type ? Type[Property] : Property extends `${number}` ? Type extends ReadonlyArray<infer ArrayType> ? ArrayType : unknown : Property extends `${infer Key}.${infer Rest}` ? Key extends `${number}` ? Type extends ReadonlyArray<infer ArrayType> ? PropertyType<ArrayType, Rest> : unknown : Key extends keyof Type ? Type[Key] extends Map<string, infer MapType> ? MapType : PropertyType<Type[Key], Rest> : unknown : unknown;
Modifiers
@public
type PullAllOperator
type PullAllOperator<TSchema> = ({ readonly [key in KeysOfAType<TSchema, ReadonlyArray<any>>]?: TSchema[key];} & NotAcceptedFields<TSchema, ReadonlyArray<any>>) & { readonly [key: string]: ReadonlyArray<any>;};
Modifiers
@public
type PullOperator
type PullOperator<TSchema> = ({ readonly [key in KeysOfAType<TSchema, ReadonlyArray<any>>]?: | Partial<Flatten<TSchema[key]>> | FilterOperations<Flatten<TSchema[key]>>;} & NotAcceptedFields<TSchema, ReadonlyArray<any>>) & { readonly [key: string]: FilterOperators<any> | any;};
Modifiers
@public
type PushOperator
type PushOperator<TSchema> = ({ readonly [key in KeysOfAType<TSchema, ReadonlyArray<any>>]?: | Flatten<TSchema[key]> | ArrayOperator<Array<Flatten<TSchema[key]>>>;} & NotAcceptedFields<TSchema, ReadonlyArray<any>>) & { readonly [key: string]: ArrayOperator<any> | any;};
Modifiers
@public
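A sketch of how these array-update shapes compose into an UpdateFilter, using a hypothetical Playlist schema.
```ts
import type { PullOperator, PushOperator, UpdateFilter } from 'mongodb';

// Hypothetical schema for illustration.
interface Playlist {
  _id: string;
  tags: string[];
  tracks: { title: string; plays: number }[];
}

// $push accepts either a plain element or array modifiers such as $each / $slice.
const push: PushOperator<Playlist> = {
  tags: 'rock',
  tracks: { $each: [{ title: 'One', plays: 0 }], $slice: 100 },
};

// $pull accepts an element value or a small query matched against array elements.
const pull: PullOperator<Playlist> = {
  tags: 'obsolete',
  tracks: { plays: { $lt: 5 } },
};

const update: UpdateFilter<Playlist> = { $push: push, $pull: pull };
```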
type ReadConcernLevel
type ReadConcernLevel = (typeof ReadConcernLevel)[keyof typeof ReadConcernLevel];
Modifiers
@public
type ReadConcernLike
type ReadConcernLike = | ReadConcern | { level: ReadConcernLevel; } | ReadConcernLevel;
Modifiers
@public
type ReadPreferenceLike
type ReadPreferenceLike = ReadPreference | ReadPreferenceMode;
Modifiers
@public
type ReadPreferenceMode
type ReadPreferenceMode = (typeof ReadPreferenceMode)[keyof typeof ReadPreferenceMode];
Modifiers
@public
type RegExpOrString
type RegExpOrString<T> = T extends string ? BSONRegExp | RegExp | T : T;
Modifiers
@public
type RemoveUserOptions
type RemoveUserOptions = CommandOperationOptions;
Modifiers
@public
type ResumeToken
type ResumeToken = unknown;
Represents the logical starting point for a new ChangeStream or for resuming an existing ChangeStream on the server.
See Also
https://www.mongodb.com/docs/manual/changeStreams/#std-label-change-stream-resume
Modifiers
@public
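A minimal sketch of recording a resume token from a change event and reusing it via resumeAfter; the URI and namespace are placeholders.
```ts
import { MongoClient } from 'mongodb';
import type { ResumeToken } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const collection = client.db('test').collection('events');

let lastToken: ResumeToken = undefined;
const stream = collection.watch();
for await (const change of stream) {
  lastToken = change._id; // each change event carries its resume token as _id
  break;
}
await stream.close();

// Later, open a new stream that resumes after the recorded token.
const resumed = collection.watch([], { resumeAfter: lastToken });
```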
type ReturnDocument
type ReturnDocument = (typeof ReturnDocument)[keyof typeof ReturnDocument];
Modifiers
@public
type RunCommandOptions
type RunCommandOptions = { /** Specify ClientSession for this command */ session?: ClientSession; /** The read preference */ readPreference?: ReadPreferenceLike; /** * @experimental * Specifies the time an operation will run until it throws a timeout error */ timeoutMS?: number; /* Excluded from this release type: omitMaxTimeMS */} & BSONSerializeOptions;
Modifiers
@public
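A short sketch of passing these options to db.command(); the read preference and timeout values are arbitrary.
```ts
import { MongoClient } from 'mongodb';
import type { RunCommandOptions } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const admin = client.db('admin');

const options: RunCommandOptions = {
  readPreference: 'secondaryPreferred',
  timeoutMS: 2000, // experimental, per the definition above
};

const reply = await admin.command({ hello: 1 }, options);
console.log(reply.ok);
```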
type RunCursorCommandOptions
type RunCursorCommandOptions = { readPreference?: ReadPreferenceLike; session?: ClientSession; /** * @experimental * Specifies the time an operation will run until it throws a timeout error. Note that if * `maxTimeMS` is provided in the command in addition to setting `timeoutMS` in the options, then * the original value of `maxTimeMS` will be overwritten. */ timeoutMS?: number; /** * @public * @experimental * Specifies how `timeoutMS` is applied to the cursor. Can be either `'cursorLifeTime'` or `'iteration'` * When set to `'iteration'`, the deadline specified by `timeoutMS` applies to each call of * `cursor.next()`. * When set to `'cursorLifetime'`, the deadline applies to the life of the entire cursor. * * Depending on the type of cursor being used, this option has different default values. * For non-tailable cursors, this value defaults to `'cursorLifetime'` * For tailable cursors, this value defaults to `'iteration'` since tailable cursors, by * definition can have an arbitrarily long lifetime. * * @example * ```ts * const cursor = collection.find({}, {timeoutMS: 100, timeoutMode: 'iteration'}); * for await (const doc of cursor) { * // process doc * // This will throw a timeout error if any of the iterator's `next()` calls takes more than 100ms, but * // will continue to iterate successfully otherwise, regardless of the number of batches. * } * ``` * * @example * ```ts * const cursor = collection.find({}, { timeoutMS: 1000, timeoutMode: 'cursorLifetime' }); * const docs = await cursor.toArray(); // This entire line will throw a timeout error if all batches are not fetched and returned within 1000ms. * ``` */ timeoutMode?: CursorTimeoutMode; tailable?: boolean; awaitData?: boolean;} & BSONSerializeOptions;
Modifiers
@public
type SchemaMember
type SchemaMember<T, V> = | { [P in keyof T]?: V; } | { [key: string]: V; };
Modifiers
@public
type ServerApiVersion
type ServerApiVersion = (typeof ServerApiVersion)[keyof typeof ServerApiVersion];
Modifiers
@public
type ServerEvents
type ServerEvents = { serverHeartbeatStarted(event: ServerHeartbeatStartedEvent): void; serverHeartbeatSucceeded(event: ServerHeartbeatSucceededEvent): void; serverHeartbeatFailed(event: ServerHeartbeatFailedEvent): void; /* Excluded from this release type: connect */ descriptionReceived(description: ServerDescription): void; closed(): void; ended(): void;} & ConnectionPoolEvents & EventEmitterWithState;
Modifiers
@public
type ServerMonitoringMode
type ServerMonitoringMode = (typeof ServerMonitoringMode)[keyof typeof ServerMonitoringMode];
Modifiers
@public
type ServerSessionId
type ServerSessionId = { id: Binary;};
Modifiers
@public
type ServerType
type ServerType = (typeof ServerType)[keyof typeof ServerType];
Modifiers
@public
type SetFields
type SetFields<TSchema> = ({ readonly [key in KeysOfAType<TSchema, ReadonlyArray<any> | undefined>]?: | OptionalId<Flatten<TSchema[key]>> | AddToSetOperators<Array<OptionalId<Flatten<TSchema[key]>>>>;} & IsAny< TSchema[keyof TSchema], object, NotAcceptedFields<TSchema, ReadonlyArray<any> | undefined>>) & { readonly [key: string]: AddToSetOperators<any> | any;};
Modifiers
@public
type SetProfilingLevelOptions
type SetProfilingLevelOptions = CommandOperationOptions;
Modifiers
@public
type Sort
type Sort = | string | Exclude< SortDirection, { $meta: string; } > | string[] | { [key: string]: SortDirection; } | Map<string, SortDirection> | [string, SortDirection][] | [string, SortDirection];
Modifiers
@public
type SortDirection
type SortDirection = | 1 | -1 | 'asc' | 'desc' | 'ascending' | 'descending' | { $meta: string; };
Modifiers
@public
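A few equivalent ways of spelling the same sort, shown against both types; any of these can be passed as the sort option to find() or to cursor.sort().
```ts
import type { Sort, SortDirection } from 'mongodb';

// "age descending, then name ascending", written several equivalent ways.
const asPairs: Sort = [['age', -1], ['name', 1]];
const asDocument: Sort = { age: 'desc', name: 'asc' };
const asMap: Sort = new Map<string, SortDirection>([
  ['age', 'descending'],
  ['name', 'ascending'],
]);

// A bare string (or string[]) sorts the named field(s) ascending.
const byName: Sort = 'name';
```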
type Stream
type Stream = Socket | TLSSocket;
Modifiers
@public
type StrictFilter
type StrictFilter<TSchema> = | Partial<TSchema> | ({ [Property in Join<NestedPaths<WithId<TSchema>, []>, '.'>]?: Condition< PropertyType<WithId<TSchema>, Property> >; } & RootFilterOperators<WithId<TSchema>>);
Modifiers
@public
@experimental
type StrictMatchKeysAndValues
type StrictMatchKeysAndValues<TSchema> = Readonly< { [Property in Join<NestedPaths<TSchema, []>, '.'>]?: PropertyType< TSchema, Property >; } & { [Property in `${NestedPathsOfType<TSchema, any[]>}.$${ | `[${string}]` | ''}`]?: ArrayElement< PropertyType< TSchema, Property extends `${infer Key}.$${string}` ? Key : never > >; } & { [Property in `${NestedPathsOfType<TSchema, Record<string, any>[]>}.$${ | `[${string}]` | ''}.${string}`]?: any; } & Document>;
Modifiers
@public
@experimental
type StrictUpdateFilter
type StrictUpdateFilter<TSchema> = { $currentDate?: OnlyFieldsOfType< TSchema, Date | Timestamp, | true | { $type: 'date' | 'timestamp'; } >; $inc?: OnlyFieldsOfType<TSchema, NumericType | undefined>; $min?: StrictMatchKeysAndValues<TSchema>; $max?: StrictMatchKeysAndValues<TSchema>; $mul?: OnlyFieldsOfType<TSchema, NumericType | undefined>; $rename?: Record<string, string>; $set?: StrictMatchKeysAndValues<TSchema>; $setOnInsert?: StrictMatchKeysAndValues<TSchema>; $unset?: OnlyFieldsOfType<TSchema, any, '' | true | 1>; $addToSet?: SetFields<TSchema>; $pop?: OnlyFieldsOfType<TSchema, ReadonlyArray<any>, 1 | -1>; $pull?: PullOperator<TSchema>; $push?: PushOperator<TSchema>; $pullAll?: PullAllOperator<TSchema>; $bit?: OnlyFieldsOfType< TSchema, NumericType | undefined, | { and: IntegerType; } | { or: IntegerType; } | { xor: IntegerType; } >;} & Document;
Modifiers
@public
@experimental
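A sketch of the experimental strict helpers against a hypothetical Profile schema; unlike the looser Filter/UpdateFilter, dotted paths here are checked against the schema.
```ts
import type { StrictFilter, StrictUpdateFilter } from 'mongodb';

// Hypothetical schema for illustration.
interface Profile {
  _id: string;
  name: string;
  settings: { theme: string; visits: number };
}

const filter: StrictFilter<Profile> = { 'settings.theme': 'dark' };

const update: StrictUpdateFilter<Profile> = {
  $set: { 'settings.theme': 'light' },
  $inc: { 'settings.visits': 1 },
};
```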
type SupportedNodeConnectionOptions
type SupportedNodeConnectionOptions = SupportedTLSConnectionOptions & SupportedTLSSocketOptions & SupportedSocketOptions;
Modifiers
@public
type SupportedSocketOptions
type SupportedSocketOptions = Pick< TcpNetConnectOpts & { autoSelectFamily?: boolean; autoSelectFamilyAttemptTimeout?: number; }, (typeof LEGAL_TCP_SOCKET_OPTIONS)[number]>;
Modifiers
@public
type SupportedTLSConnectionOptions
type SupportedTLSConnectionOptions = Pick< ConnectionOptions_2 & { allowPartialTrustChain?: boolean; }, (typeof LEGAL_TLS_SOCKET_OPTIONS)[number]>;
Modifiers
@public
type SupportedTLSSocketOptions
type SupportedTLSSocketOptions = Pick< TLSSocketOptions, Extract<keyof TLSSocketOptions, (typeof LEGAL_TLS_SOCKET_OPTIONS)[number]>>;
Modifiers
@public
type TagSet
type TagSet = { [key: string]: string;};
Modifiers
@public
type TopologyEvents
type TopologyEvents = { /* Excluded from this release type: connect */ serverOpening(event: ServerOpeningEvent): void; serverClosed(event: ServerClosedEvent): void; serverDescriptionChanged(event: ServerDescriptionChangedEvent): void; topologyClosed(event: TopologyClosedEvent): void; topologyOpening(event: TopologyOpeningEvent): void; topologyDescriptionChanged(event: TopologyDescriptionChangedEvent): void; error(error: Error): void; /* Excluded from this release type: open */ close(): void; timeout(): void;} & Omit<ServerEvents, 'connect'> & ConnectionPoolEvents & ConnectionEvents & EventEmitterWithState;
Modifiers
@public
type TopologyType
type TopologyType = (typeof TopologyType)[keyof typeof TopologyType];
Modifiers
@public
type UpdateFilter
type UpdateFilter<TSchema> = { $currentDate?: OnlyFieldsOfType< TSchema, Date | Timestamp, | true | { $type: 'date' | 'timestamp'; } >; $inc?: OnlyFieldsOfType<TSchema, NumericType | undefined>; $min?: MatchKeysAndValues<TSchema>; $max?: MatchKeysAndValues<TSchema>; $mul?: OnlyFieldsOfType<TSchema, NumericType | undefined>; $rename?: Record<string, string>; $set?: MatchKeysAndValues<TSchema>; $setOnInsert?: MatchKeysAndValues<TSchema>; $unset?: OnlyFieldsOfType<TSchema, any, '' | true | 1>; $addToSet?: SetFields<TSchema>; $pop?: OnlyFieldsOfType<TSchema, ReadonlyArray<any>, 1 | -1>; $pull?: PullOperator<TSchema>; $push?: PushOperator<TSchema>; $pullAll?: PullAllOperator<TSchema>; $bit?: OnlyFieldsOfType< TSchema, NumericType | undefined, | { and: IntegerType; } | { or: IntegerType; } | { xor: IntegerType; } >;} & Document;
Modifiers
@public
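A sketch tying several of the operator shapes above together in a typed updateOne call; the schema and namespace are hypothetical.
```ts
import { MongoClient } from 'mongodb';
import type { UpdateFilter } from 'mongodb';

// Hypothetical schema for illustration.
interface User {
  _id: string;
  name: string;
  logins: number;
  roles: string[];
  lastSeen?: Date;
}

const client = new MongoClient('mongodb://localhost:27017');
const users = client.db('test').collection<User>('users');

const update: UpdateFilter<User> = {
  $set: { name: 'Ada Lovelace' },   // MatchKeysAndValues
  $inc: { logins: 1 },              // OnlyFieldsOfType (numeric)
  $addToSet: { roles: 'admin' },    // SetFields
  $currentDate: { lastSeen: true }, // Date/Timestamp fields only
};

await users.updateOne({ _id: 'u1' }, update);
```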
type W
type W = number | 'majority';
Modifiers
@public
type WithId
type WithId<TSchema> = EnhancedOmit<TSchema, '_id'> & { _id: InferIdType<TSchema>;};
Adds an _id field to an object-shaped type.
Modifiers
@public
type WithoutId
type WithoutId<TSchema> = Omit<TSchema, '_id'>;
Removes the _id field from an object-shaped type.
Modifiers
@public
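A sketch of where the two helpers show up in practice: reads resolve to WithId<TSchema>, while replaceOne takes WithoutId<TSchema>. The schema and namespace are hypothetical.
```ts
import { MongoClient } from 'mongodb';
import type { WithId, WithoutId } from 'mongodb';

// Hypothetical schema for illustration.
interface Book { title: string; pages: number }

const client = new MongoClient('mongodb://localhost:27017');
const books = client.db('test').collection<Book>('books');

// Reads always include _id, so findOne resolves to WithId<Book> | null.
const found: WithId<Book> | null = await books.findOne({ title: 'Dune' });

// Replacement documents may not carry an _id, so replaceOne takes WithoutId<Book>.
const replacement: WithoutId<Book> = { title: 'Dune', pages: 412 };
await books.replaceOne({ title: 'Dune' }, replacement);
```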
type WithSessionCallback
type WithSessionCallback<T = unknown> = (session: ClientSession) => Promise<T>;
Modifiers
@public
type WithTransactionCallback
type WithTransactionCallback<T = any> = (session: ClientSession) => Promise<T>;
Modifiers
@public
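A sketch showing both callback types in use: MongoClient.withSession takes a WithSessionCallback, and ClientSession.withTransaction takes a WithTransactionCallback. The URI and namespaces are placeholders.
```ts
import { MongoClient } from 'mongodb';
import type { WithTransactionCallback } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const accounts = client.db('bank').collection<{ _id: string; balance: number }>('accounts');

// Every operation inside the transaction must be passed the session explicitly.
const transfer: WithTransactionCallback<void> = async session => {
  await accounts.updateOne({ _id: 'a' }, { $inc: { balance: -100 } }, { session });
  await accounts.updateOne({ _id: 'b' }, { $inc: { balance: 100 } }, { session });
};

// withSession supplies the ClientSession; withTransaction retries the callback as needed.
await client.withSession(session => session.withTransaction(transfer));
await client.close();
```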
Package Files (1)
Dependencies (3)
Dev Dependencies (50)
- @aws-sdk/credential-providers
- @iarna/toml
- @istanbuljs/nyc-config-typescript
- @microsoft/api-extractor
- @microsoft/tsdoc-config
- @mongodb-js/zstd
- @types/chai
- @types/chai-subset
- @types/express
- @types/kerberos
- @types/mocha
- @types/node
- @types/saslprep
- @types/semver
- @types/sinon
- @types/sinon-chai
- @types/whatwg-url
- @typescript-eslint/eslint-plugin
- @typescript-eslint/parser
- chai
- chai-subset
- chalk
- eslint
- eslint-config-prettier
- eslint-plugin-mocha
- eslint-plugin-prettier
- eslint-plugin-simple-import-sort
- eslint-plugin-tsdoc
- eslint-plugin-unused-imports
- express
- gcp-metadata
- js-yaml
- mocha
- mocha-sinon
- mongodb-client-encryption
- mongodb-legacy
- nyc
- prettier
- semver
- sinon
- sinon-chai
- snappy
- socks
- source-map-support
- ts-node
- tsd
- typescript
- typescript-cached-transpile
- v8-heapsnapshot
- yargs
Peer Dependencies (7)
Badge
To add a badge like this one to your package's README, use the codes available below.
You may also use Shields.io to create a custom badge linking to https://www.jsdocs.io/package/mongodb.
- Markdown[![jsDocs.io](https://img.shields.io/badge/jsDocs.io-reference-blue)](https://www.jsdocs.io/package/mongodb)
- HTML<a href="https://www.jsdocs.io/package/mongodb"><img src="https://img.shields.io/badge/jsDocs.io-reference-blue" alt="jsDocs.io"></a>