Self-contained access tokens in the Connect2id server

This post explains the format of self-contained access tokens issued by the Connect2id server for OpenID Connect identity provision and SSO. It complements an earlier post about pros and cons of self-contained tokens in terms of processing and performance and how they compare with identifier tokens that act as lookup keys.

The Connect2id server can be configured to issue self-contained access tokens that represent a signed JSON Web Token (JWT) with the following fields:

  • sub [string] The subject (user) of the authorisation. This is a standard JWT claim.
  • iss [string] The issuer of the authorisation, corresponding to the OpenID Connect provider’s identifier. This is a standard JWT claim.
  • iat [integer] The token issue time. This is a standard JWT claim.
  • exp [integer] The token expiration time. This is a standard JWT claim.
  • aud [string array] The token audience list, typically the intended protected resources represented by their URIs. This is a standard JWT claim.
  • jti [string] Secure unique identifier for the token, to enable token de-duplication. This is a standard JWT claim.
  • cid [string] Non-standard claim, represents the identifier of the client that received the token.
  • scp [string array] Non-standard claim, represents the authorised scope values, e.g. [“openid”, “email”, “app:read”, “app:write”].
  • clm [string array] Non-standard claim, represents the consented UserInfo claims to be released at the UserInfo endpoint, e.g. [“name”, “email”].
  • cll [string array] Non-standard claim, represents the preferred locales of the consented UserInfo claims, e.g. [“es-ES”, “en-GB”].
  • sid [string] Non-standard claim, represents the (browser) session identifier for the subject (user) with the OpenID Connect provider. Can be used to check if the user is still logged in, or to retrieve additional details from his/her IdP session.
  • dat [object] Non-standard claim, can be used to store additional data in an arbitrary JSON object, e.g. the user’s geolocation.
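Putting the above fields together, a complete claims set might look like the following (all values are purely illustrative, not taken from an actual deployment):

```json
{
  "sub" : "alice",
  "iss" : "https://c2id.example.com",
  "iat" : 1360050795,
  "exp" : 1360053600,
  "aud" : [ "https://api.example.com" ],
  "jti" : "d7aa46d8-0b09-4a4b-9b7c-1f2e3a4b5c6d",
  "cid" : "client-123",
  "scp" : [ "openid", "email", "app:read", "app:write" ],
  "clm" : [ "name", "email" ],
  "cll" : [ "es-ES", "en-GB" ],
  "sid" : "sess-9f2c4e",
  "dat" : { "geo" : "52.5200,13.4050" }
}
```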

The Connect2id server configuration enables the administrator to choose which of the above fields get included in the self-contained access tokens. For example, the tokens can be configured to include only the issuer, the subject, the timestamps and the authorised scope:

{
  "sub" : "alice",
  "scp" : [ "openid", "email", "app:write" ],
  "iss" : "",
  "iat" : 1360050795,
  "exp" : 1360053600
}
The above claims set is then signed (JWS) using an RSASSA algorithm to produce the final JSON Web Token (JWT), similar to this (with extra line breaks):


The protected resources that receive the access token can verify it by using the server’s public RSA key, typically published at the OpenID Connect provider’s JWK set endpoint.

The Connect2id server has support for the following standard RSA signing algorithms for securing the self-contained access tokens:

  • RS256
  • RS384
  • RS512
  • PS256
  • PS384
  • PS512

Clients can make use of the open source Nimbus JOSE+JWT library to verify the JWTs. Open source libraries for languages other than Java are also available on the web.
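As an illustration, a minimal verification routine with the Nimbus library might look like this (the class and method here are our own sketch; the token string and the provider’s public RSA key are assumed to be already at hand):

```java
import java.util.Date;
import java.security.interfaces.RSAPublicKey;

import com.nimbusds.jose.crypto.RSASSAVerifier;
import com.nimbusds.jwt.SignedJWT;

public class AccessTokenCheck {

    // Returns true if the JWT signature verifies and the token hasn't expired
    public static boolean isValid(String token, RSAPublicKey publicKey) throws Exception {

        // Parse the compact serialisation into a signed JWT
        SignedJWT jwt = SignedJWT.parse(token);

        // Check the JWS signature with the server's public RSA key
        if (! jwt.verify(new RSASSAVerifier(publicKey))) {
            return false;
        }

        // Check the "exp" claim against the current time
        Date exp = jwt.getJWTClaimsSet().getExpirationTime();
        return exp != null && exp.after(new Date());
    }
}
```

In practice the public key would be fetched (and cached) from the provider’s JWK set endpoint rather than hard-wired.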


JSON Web Key (JWK) selectors

Release 2.21 of the NimbusDS Java library for encoding and decoding JSON Web Tokens (JWT) includes a handy selector for matching one or more JSON Web Keys (JWK) from a set according to chosen criteria.

OpenID Connect servers and clients that use public / private key cryptography publish their JWKs in a JSON file that the other party needs to process in order to extract the relevant key(s). For example, a client that needs to verify an RSA-signed ID token will have to get the server’s JWK set and find the matching public key used for the signature. The new utility class can help you with just that.

It is called com.nimbusds.jose.jwk.JWKSelector and supports key selection by:

  • Any, unspecified, one or more key types (kty).
  • Any, unspecified, one or more key uses (use).
  • Any, unspecified, one or more key algorithms (alg).
  • Any, unspecified, one or more key identifiers (kid).
  • Private only key.
  • Public only key.

Example usage:

// Create a new JWK selector and configure it
JWKSelector selector = new JWKSelector();

// Select public keys only
selector.setPublicOnly(true);

// RSA keys only
selector.setKeyType(KeyType.RSA);

// No key use specified or signature
selector.setKeyUses(Use.SIGNATURE, null);

// Apply selector to JWK set
List<JWK> matches = selector.select(jwkSet);

The complete configuration options of the JWK selector can be found in the JavaDocs.

The latest version of the Nimbus JOSE+JWT library is available on Maven Central.


The Nimbus JOSE+JWT library adds PS256, PS384 and PS512 signature support

Release 2.20 of the Nimbus JOSE+JWT library adds support for the JWS PS256, PS384 and PS512 signature algorithms, which are a form of RSA signatures with salt, as described in the JWA spec and in the authoritative RFC 3447 (RSASSA-PSS).

RSASSA-PSS reportedly offers better security than the stock RSA PKCS #1 v1.5 algorithm, though only marginally. If you are considering switching to it, the following discussion can provide additional information.

The new PS256, PS384 and PS512 signature algorithms are covered by the existing RSA signer and verifier classes:

  • com.nimbusds.jose.crypto.RSASSASigner
  • com.nimbusds.jose.crypto.RSASSAVerifier

You can get the new version from the download section of the project repo or preferably from Maven Central:


Example use:

KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
kpg.initialize(2048);

KeyPair kp = kpg.genKeyPair();
RSAPublicKey publicKey = (RSAPublicKey)kp.getPublic();
RSAPrivateKey privateKey = (RSAPrivateKey)kp.getPrivate();

// Need BouncyCastle for PSS
Security.addProvider(new BouncyCastleProvider());

RSASSASigner signer = new RSASSASigner(privateKey);
RSASSAVerifier verifier = new RSASSAVerifier(publicKey);

JWSHeader header = new JWSHeader(JWSAlgorithm.PS256);
JWSObject jwsObject = new JWSObject(header, new Payload("Hello world"));

jwsObject.sign(signer);

boolean verified = jwsObject.verify(verifier);

Note that RSASSA-PSS is not supported by the standard JCA provider (in Java 6 and 7), you’ll need one that provides it, such as BouncyCastle.

Nimbus JOSE+JWT 2.18

Just pushed a new release of the Nimbus JOSE+JWT library to Maven Central which updates the library to the latest JOSE -14 and JWT -11 drafts.

We added support for the new JWE algorithm identifiers and header parameters. A few minor bugs were also fixed under the hood. The Header class got two static helper methods to enable parsing from JSON object strings and Base64URL values.

The new release should reach the central repo soon, as version 2.18.

Enjoy! :)

Nimbus JOSE+JWT 2.17

Just released a new version of the Nimbus JOSE+JWT library which updates the code and docs to match the latest JOSE -12 and JWT -10 drafts:

  • [JWA] draft-ietf-jose-json-web-algorithms-12
  • [JWS] draft-ietf-jose-json-web-signature-12
  • [JWE] draft-ietf-jose-json-web-encryption-12
  • [JWK] draft-ietf-jose-json-web-key-12
  • [JWT] draft-ietf-oauth-json-web-token-10

A small assignment bug in JWTClaimsSet.setCustomClaims(Map<String,Object>) was also fixed.

The new library release should hit Maven Central soon.

Comments, suggestions and enhancements are welcome as always in our issue tracker.

Our OpenID Connect server

The OpenID Connect spec suite settled considerably over the past few months, and now that the working group has promoted it to 2nd implementers’ draft status we’re preparing to release our server implementation – called the Connect2id server – to early adopters.

What are its capabilities and what can enterprises expect?

  • The Connect2id server provides simple and robust Single Sign-On (SSO) and user provisioning to client applications based on the OpenID Connect / OAuth 2.0 code and implicit flows.
  • It issues ID tokens, to assert the user identity in a verifiable manner by means of a JWS signature.
  • It supports optional discovery of the OpenID Connect Server as well as dynamic client registration. The client registration can be operated in open or protected mode, depending on the organisation’s requirements.

Authorisation and access control

Organisations that wish to employ the OpenID Connect server as a generic OAuth 2.0 server can freely do so. The Connect2id server enables binding of arbitrary scope values and authorisation information to an access token. The access token can then be used with any third party web API or protected resource.

LDAP integration

The Connect2id server can be deployed on the organisation’s premises and linked to any LDAP v3 compatible directory, such as Microsoft Active Directory, to verify the credentials of users who sign in with OpenID Connect and then to provision approved details to client apps at the UserInfo endpoint.

Strong authentication

Additional user authentication factors (2FA) and risk management controls can be added via a flexible web-based plug-in interface provided by the Connect2id server. We know that customer requirements regarding security can vary significantly in terms of policy and detail, so we took care to design a plugin API that can meet them all in the most effective manner possible.

Clustering support for high-availability and confident scaling

The Connect2id server is engineered for clustered operation from the ground up. Customers can run multiple nodes which can also span data centres to cater for client applications on a global scale. This gives peace of mind when hardware and network failures do occur, and also provides for effective load-balancing and distributed operation with very large user bases.

Get in touch with us to find out more about OpenID Connect and how it can improve identity management in your business. We have built up considerable expertise on OpenID Connect, from our participation in the various standard making bodies involved and from building concrete software implementations, and we’re ready to answer even the most complicated questions that you may have.

Following this overview I’ll continue with the technical details for the first release of the Connect2id server. As mentioned, it is based on the OpenID Connect drafts that the WG deemed fit to enter the 2nd implementers’ draft stage.

The OpenID Connect specs are:

  • OpenID Connect Messages 1.0 – draft 20
  • OpenID Connect Standard 1.0 – draft 21
  • OpenID Connect Session Management 1.0 – draft 15
  • OAuth 2.0 Multiple Response Type Encoding Practices – draft 08
  • OpenID Connect Discovery 1.0 – draft 17
  • OpenID Connect Dynamic Client Registration 1.0 – draft 19

The complementary JavaScript Object Signing and Encryption (JOSE) and JSON Web Tokens (JWTs) are based on the following specs:

  • draft-ietf-jose-json-web-algorithms-11
  • draft-ietf-jose-json-web-signature-11
  • draft-ietf-jose-json-web-encryption-11
  • draft-ietf-jose-json-web-key-11
  • draft-ietf-oauth-json-web-token-08

At its core the Connect2id server acts as an OAuth 2.0 authorisation server which is described in the following specs:

  • OAuth 2.0 (RFC 6749)
  • OAuth 2.0 Bearer Token (RFC 6750)
  • draft-ietf-oauth-jwt-bearer-05
  • draft-ietf-oauth-dyn-reg-12

The Connect2id server supports two methods for authenticating client applications and securing the ID tokens issued to them:

  • Each client can be provisioned with a client secret. Such clients can then authenticate at the token endpoint with the client_secret_basic, client_secret_post and client_secret_jwt methods. Issued ID tokens are then secured with a JWS signature based on the HS256, HS384 or HS512 algorithms. Clients that are able to process RSA-based JWS signatures can opt to use the RS256, RS384 or RS512 algorithms instead.
  • Each client can supply a public RSA key to the server when it registers. Such clients can then authenticate at the token endpoint with the private_key_jwt method. Issued ID tokens will then be secured with a JWS signature based on the RS256, RS384 or RS512 algorithms (using the server’s own RSA key).

An OAuth 2.0 / OpenID Connect client registration endpoint is provided to allow for provisioning, updating and deleting client applications. Customers have the choice to configure this endpoint for public registration or to restrict registration to trusted parties only. The actual registration can then be done through the endpoint web API or through a manual UI form.

Organisations that want to use access tokens to access other protected resources (not just the UserInfo endpoint) can do so by configuring their own custom scope values and consent policies (implicit / explicit). The issued access tokens can then be verified by a RESTful call to the Connect2id server or by decoding the authorisation information that is contained within the token.

OpenID Connect LDAP schema update

The LDAP schema for storing the details of registered OpenID Connect clients was updated to match the latest version 19 of the registration draft. It comes with an open source Apache 2.0 license and you can use it to store all OpenID Connect related registration details in a LDAP directory, such as OpenDJ or Microsoft Active Directory:

  • The client identifier, access token and optional secret provisioned by the OpenID Connect server.
  • The client metadata, with optional language tags for human facing content, such as client name, logo, the selected JOSE algorithms for securing the various messages and tokens.

The schema was successfully deployed and tested on an OpenDJ 2.4.6 server.

OpenID Connect LDAP schema

The OpenID Connect SDK reaches an important milestone

This month Nimbus Directory Services reached an important milestone by releasing version 2.1 of the OAuth 2.0 SDK with OpenID Connect extensions. At present this is the most comprehensive Java library for designing client and server apps aiming to adopt the new open protocol for web + mobile single sign-on (SSO), which is about to go into its second implementer’s draft, and hopefully a final release by the end of 2013 or beginning of 2014.

OpenID Connect is actually a whole suite of specs, based on the core OAuth 2.0 RFC. In its current release the SDK covers the following cornerstone documents:

  • The OAuth 2.0 Authorization Framework (RFC 6749)
  • The OAuth 2.0 Authorization Framework: Bearer Token Usage (RFC 6750)
  • OpenID Connect Messages, draft 20.
  • OpenID Connect Standard, draft 21.
  • OpenID Connect Discovery, draft 17.
  • OpenID Connect Registration, draft 19.
  • OAuth 2.0 Multiple Response Type Encoding Practices, draft 08

The success of the Nimbus JOSE+JWT library brought us many lessons and much inspiration, and we hope that the OpenID Connect SDK will follow in its footsteps and become at least as useful and reliable – not only for our own products, like the Nimbus OpenID Connect Server (OP), but for the broader developer community as well.


Implementing OAuth 2.0 access tokens

Developers of OAuth 2.0 servers eventually face the question of how to implement the required access tokens. The spec doesn’t mandate any particular implementation, and there is a good reason for that – the client treats them as opaque strings that are passed with each request to the protected resource / web API. It is up to the server to decide how they are created and what information gets encoded into them. The OAuth 2.0 RFC does, however, mention two possible strategies:

  • The access token is an identifier that is hard to guess, e.g. a randomly generated string of sufficient length, that the server handling the protected resource can use to lookup the associated authorisation information, e.g. by means of a network call to the OAuth 2.0 server that issued the authorisation.
  • The access token is self-contained: the authorisation information is encoded into the token in a manner that can be verified, e.g. along with a signature. The token may also be optionally encrypted to ensure the encoded authorisation information remains hidden from the client.

What are the pros and cons of each strategy?

Identifiers as access tokens, authorisation resolved by lookup

  • Pros:
    • Short access token strings. A 16-byte (128-bit) value with sufficient randomness to prevent practical guessing should suffice for most cases.
    • The authorisation information that is associated with the access token can be of arbitrary size and complexity.
    • The access token can be revoked with almost immediate effect.
  • Cons:
    • A network request to the OAuth 2.0 server is required to retrieve the authorisation information for each access token. The need for subsequent lookups, up to the access token expiration time, may be mitigated by caching the access token / authorisation pairs. Caching, however, comes at the expense of increasing the worst-case time for token revocation.

Self-contained access tokens, encoded authorisation protected by signature and optional encryption

  • Pros:
    • No need for a network call to retrieve the authorisation information as it’s self-contained, so access token processing may be significantly quicker and more efficient.
  • Cons:
    • Significantly longer access token strings. The token has to encode the authorisation information as well as the accompanying signature. Encrypted tokens become longer still.
    • To enable tight revocation control, the access tokens should have a short expiration time, which may result in more refresh token requests at the OAuth 2.0 server.
    • The server handling the protected resource must have the necessary tools and infrastructure to validate signatures and perform decryption (if the access tokens are encrypted) and to manage the required keys (shared or public / private, possibly certificates) for that.

How do the two approaches compare in practice?

Network lookups practical only when the OAuth 2.0 server is on the same host or network segment

We first ran a number of tests against our Nimbus OAuth 2.0 Server which keeps track of the issued access and refresh tokens and their matching authorisations. Each authorisation is checked by passing the access token to the RESTful web API of the server, which if valid returns the matching authorisation as a JSON object:

{
  "iss"   : "",
  "iat"   : 1370598200,
  "exp"   : 1370600000,
  "sub"   : "",
  "scope" : "openid profile email webapp:post webapp:browse",
  "aud"   : ["", ""]
}

When the application server handling the protected resource and the OAuth 2.0 Server are situated on the same LAN segment, the RESTful request to check the access token takes about 5 to 10 milliseconds.

In cases when the RESTful request has to go out and across the internet (the OAuth 2.0 Server was installed in the AWS cloud) the time to retrieve the authorisation information increased to about 80 – 100 milliseconds.

This simple test shows clearly that using a lookup to check the token authorisation is only practical when the token consumer (the application) and the OAuth 2.0 server reside on the same host or LAN, or perhaps within the same data centre. Otherwise the response time can become so large as to render the entire application unusable.

Signed self-contained access tokens enable sub-millisecond verification; longer keys can however significantly affect processing time

We implemented self-contained access tokens by applying a JSON Web Signature (JWS) with an RSA algorithm to the JSON object that represents the authorisation. For that we utilised the open source Nimbus JOSE+JWT library.

The resulting token (from the above example) is approximately 500 characters in size (with line breaks for clarity):


For the RSA signatures we generated 1024 bit keys, which is the minimum RSA recommendation for corporate applications. We also did a round of tests with 2048 bit keys, which is the minimum recommended size specified in the JOSE specification.

We created a benchmark to measure the average time to validate an RSA signature after the access token has been parsed to a JWS object. The code was run on an AWS instance and recorded the following results per JWS algorithm:

With 1024 bit RSA keys:

  • RS256: 118 microseconds
  • RS384: 173 microseconds
  • RS512: 195 microseconds

With 2048 bit RSA keys:

  • RS256: 382 microseconds
  • RS384: 386 microseconds
  • RS512: 396 microseconds


RSA signature verification benchmark results

It becomes apparent that doubling the key size slows down signature verification roughly two- to threefold. The overhead of the SHA-384 and SHA-512 hash functions is almost irrelevant in the overall computation.

Comparing token verification performance

The self-contained, RSA-signed access tokens emerge as the clear winner from this benchmark, by a factor of at least ten. There is potential for further performance optimisation, by caching the access tokens that have already been verified. The payload size and the encoding method, however, have to be carefully managed, due to the general URL length restriction of about 2000 characters (which affects access tokens in the implicit OAuth 2.0 flow, where they are passed as a URL component).
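A sketch of what such a cache of already-verified tokens could look like (a hypothetical helper class of our own, not part of any server release):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal cache of access tokens that have already passed signature
// verification, keyed by the token string. Entries expire together with
// the token's own "exp" time, so revocation latency is bounded by it.
public class VerifiedTokenCache {

    private final Map<String, Long> expiryByToken = new ConcurrentHashMap<>();

    // Record a token that has passed signature verification
    public void put(String token, long expEpochSeconds) {
        expiryByToken.put(token, expEpochSeconds);
    }

    // True if the token was verified before and hasn't expired yet
    public boolean isStillValid(String token) {
        Long exp = expiryByToken.get(token);
        if (exp == null) {
            return false;
        }
        if (exp * 1000L <= System.currentTimeMillis()) {
            expiryByToken.remove(token); // drop stale entry
            return false;
        }
        return true;
    }
}
```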

Having said that, access tokens that are resolved by a call to the OAuth 2.0 authorisation server still have their place, in cases when the protected resource resides on the same host or LAN segment (e.g. the UserInfo endpoint in OpenID Connect).

Hybrid access tokens

The initial development version of the Nimbus OAuth 2.0 / OpenID Connect server issued tokens that consumers could verify by means of a RESTful web call. To support applications further afield we decided to add support for self-contained tokens, using RSA signatures and optional encryption to verify and protect the encoded authorisation information. Applications that have received a self-contained (signed) access token can still verify it by means of a call to the OAuth 2.0 server, passing its content as an opaque string. We call these “hybrid access tokens”.

JWS benchmark Git repo

The JWS benchmark cited in this article is available as an open source project.

Feel free to extend it with additional signature algorithms if you wish to explore other means, such as Elliptic Curve signatures, to create verifiable tokens. Comments and feedback are welcome as always.
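For orientation, a minimal timing loop in the spirit of the benchmark (our own sketch, not the repo’s actual code) could look like this:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPrivateKey;
import java.security.interfaces.RSAPublicKey;

import com.nimbusds.jose.JWSAlgorithm;
import com.nimbusds.jose.JWSHeader;
import com.nimbusds.jose.JWSObject;
import com.nimbusds.jose.Payload;
import com.nimbusds.jose.crypto.RSASSASigner;
import com.nimbusds.jose.crypto.RSASSAVerifier;

public class VerifyBenchmark {

    public static void main(String[] args) throws Exception {

        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        // Sign a sample payload once
        JWSObject jws = new JWSObject(new JWSHeader(JWSAlgorithm.RS256),
                                      new Payload("{\"sub\":\"alice\"}"));
        jws.sign(new RSASSASigner((RSAPrivateKey) kp.getPrivate()));

        RSASSAVerifier verifier = new RSASSAVerifier((RSAPublicKey) kp.getPublic());

        // Time repeated verifications of the already parsed JWS object
        final int iterations = 1000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            if (! jws.verify(verifier)) {
                throw new IllegalStateException("Verification failed");
            }
        }
        long avgMicros = (System.nanoTime() - start) / iterations / 1000L;
        System.out.println("RS256 avg verify time: " + avgMicros + " microseconds");
    }
}
```

Absolute numbers will of course depend on the JCA provider and the hardware.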



Nimbus JOSE+JWT 2.16

Today the Nimbus JOSE + JWT library was updated to the latest spec drafts, released earlier this week by the WG editor.

  • [JWA] draft-ietf-jose-json-web-algorithms-11
  • [JWS] draft-ietf-jose-json-web-signature-11
  • [JWE] draft-ietf-jose-json-web-encryption-11
  • [JWK] draft-ietf-jose-json-web-key-11
  • [JWT] draft-ietf-oauth-json-web-token-08

Three major areas are affected:

  • Three optional parameters were added to the JWK object: x5u, x5t and x5c, intended to add X.509 certificate information to a key.
  • The MIME types of several JOSE objects were corrected to comply with the standard format.
  • The JWE encrypted key is no longer used in AAD composition. This means that JWE encryption and decryption is no longer compatible with previous versions of the library.

The full list of changes can be found out in the spec history and the library change log.

Special helpers (using the builder pattern) were introduced to simplify the construction of JWKs, which now carry over a dozen parameters, while keeping the JWK classes immutable.

Example builder use:

RSAKey key = new RSAKey.Builder(n, e).
             keyID("1").
             build();
The new version of the library should reach Maven Central today. The library wiki and online JavaDocs were updated too. We’re looking for contributors to help us implement the remaining optional JWS and JWE algorithms, also to extend the available online documentation with more examples and perhaps a few tutorials.

Assemble your own OpenID Connect identity provider with NimbusDS components

Practical digital identity has to be simple and reliable, yet also extensible and flexible to span various applications and contexts. Organisations differ, and so do the ways in which they authenticate users, attribute authorisations to them and then consume the resulting identity data. With that understanding in mind we figured out that a one-size-fits-all OpenID Connect IdP solution will do poorly in practice, when concrete customer needs have to be faced.

Go minimal or fully customised – the choice is yours

The OpenID Connect solution of NimbusDS was designed to make it easy for customers to become an IdP for their employees or subscribers, then customise it where they see fit:

  • Fully customisable login, consent and authorisation management pages. You can stick with the default login pages, or you can design your own to suit your company identity or customise the user experience in the browser / mobile device. Moreover, the UI pages can be hosted on any server, at any URL, and written in any language, such as Java/JSP, PHP, RoR, etc.
  • Pluggable authentication. NimbusDS supports username / password authentication out of the box, based on our existing AuthService software. You can replace it by your own authentication mechanisms, or add additional factors such as biometrics or token devices to achieve 2FA.
  • User sessions. The user sessions with the OpenID Connect IdP can be configured to match your policies for session duration, idle time and maximum number of associated browsers / devices.
  • Support additional scopes for other OAuth 2.0 applications. The access token issued by the OpenID Connect Server can be furnished with additional scopes, beyond the standard ones, to grant users access to other protected resources, and not just the OIDC UserInfo endpoint. The scopes attached to the token can be explicitly or implicitly consented to, based on your user / application / other policy.
  • Include custom claims in the ID Token: The ID token issued by OpenID Connect can be set to include additional arbitrary claims, such as details of the authentication event (IP address, geolocation) and user attributes.

Identity provision based on an array of simple web services

How is this level of customisation achieved? By a server back-end that is not monolithic, but based on an array of nimble RESTful / JSON-RPC 2.0 web services, where each service caters for a specific, tightly defined task. The various tasks put together make up the overall IdP process:

  • A web API for decoding and encoding the OAuth 2.0 / OpenID Connect protocol requests and responses exchanged with relying parties (RP). This API is used for example by the UI page for the OIDC authorisation endpoint, to render the consent form and specify which claims / scopes the user should explicitly agree on and which will be implicitly set by the IdP, based on a policy.
  • A web service for authenticating users, such as Nimbus AuthService that handles MS-AD / LDAP based authentication. Its web API is addressed by the UI pages that direct the login flow.
  • A web service for tracking the user sessions, such as NimbusSSO. It handles session creation, update, expiration and removal events. Its web API can be scripted to monitor which users are currently online and their associated browsers / devices as well.
  • An OAuth 2.0 authorisation store, which keeps track of all issued authorisation codes, access and refresh tokens. It provides a RESTful API to allow for token revocation and monitoring token usage by applications.
  • An OAuth 2.0 client registry, which contains the details and credentials / RSA / EC keys of all applications, manually or dynamically registered.
  • A user store, based on a directory, accessed either directly via LDAPv3 or via Json2Ldap.

Example: Extending OAuth 2.0 authorisations with additional scope values

The following example illustrates how an identity provider may add additional scope values to a user’s authorisation (represented by the OAuth 2.0 access token, typically of type Bearer). Apart from providing single sign-on based on OpenID Connect, the IdP server may also be used as a generic OAuth 2.0 server, to grant fine-grained access to the application or access to third-party services and applications.

  • A user from the accounting department who logs in with OpenID Connect to the payroll application may be granted access to specific payment APIs.
  • A user from the IT support department who logs in with OpenID Connect to an admin dashboard app may be granted access to specific server management APIs.
  • A user from the legal department who logs in with OpenID Connect to a documentation system may be granted access to specific confidential documents.

In the context of OpenID Connect the access token is used to give the application access to the user’s profile details at the UserInfo endpoint. The scope of the access token can however be extended to grant access to additional protected resources, by including values that are recognised by these resources.

OAuth 2.0 scope for pure OpenID use only:

openid profile email

OAuth 2.0 scope with additional values recognised by other applications:

openid profile email payroll:disburse payroll:settle

The access scope that is granted to a particular user is determined during the OAuth 2.0 / OpenID Connect authorisation step. The scope to grant can be determined by explicit consent and / or by looking up the user’s and the client application’s attributes (e.g. user membership of an LDAP group, application trust level).

The additional scope values can be determined by calling an arbitrary script or executable code, which when completed should return a JSON array with the granted scope values. For example:

["openid", "profile", "email", "payroll:disburse", "payroll:settle"]

The scope is then used to compose the final OAuth 2.0 / OpenID Connect authorisation, which is passed to the OIDC server as a JSON object via a RESTful HTTP POST:

POST /authorize/67ee5264-26ff-45cd-a876-5de22fbb99b1 HTTP/1.1
Content-Type: application/json

{
 "sub"      : "",
 "ACR"      : ["3"],
 "authTime" : 2147483647,
 "scope"    : ["openid", "profile", "email",
               "payroll:disburse", "payroll:settle"],
 "userInfo" : ["sub", "name", "given_name", "family_name",
               "profile", "email", "email_verified", "locale"]
}

Upon receiving the authorisation, the OIDC server generates the required authorisation code (in the code flow) and then a matching access / refresh token pair for the authorisation.

The application can validate the access token and get its matching authorisation by making a RESTful query to the OIDC server. We are also considering adding support for access tokens which include the authorisation in JWS-signed / JWE-encrypted form, to save the need for a lookup over the network.

The following presentation slides also give a nice overview of how the OpenID Connect IdP solution is built and how its various web service components interact with each other:
Nimbus OIDC server presentation

Nimbus JOSE+JWT 2.15.1: Quicker loading of RSA encrypters

Nimbus JOSE+JWT 2.15.1 is a maintenance release of the Java library for signing and encrypting JSON Web Tokens (JWTs) and other payloads.

What’s in it?

  • Instantiation of RSAEncrypter and DirectEncrypter objects will now happen a lot faster. There is no further need to reuse these encrypter objects in order to maintain performance. Should you wish, you can now create a new encrypter object for each JWE message that needs to be produced, with virtually no performance penalty. This was achieved by making the SecureRandom PRNG for outputting IVs a static class member. The initial seeding of the PRNG is typically a time-consuming process to guarantee sufficient entropy (we measured up to 1+ second for that in tests), so the logical solution was to move the seeding procedure out of the encrypter constructors. Thanks to Dr. Michael Scott from CertiVox and Juraj Somorovsky from Uni Bochum for checking that IV security was preserved while this was done.
  • You can now pass shared secrets encoded as UTF-8 strings to the MACSigner and MACVerifier.
  • The Base64URL class was refactored to extend the general Base64 class, which made the overall code leaner and simpler to maintain.
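The static-PRNG design choice described in the first bullet can be sketched as follows. This is an illustrative class, not the library's actual code: the expensive seeding happens once per class load rather than once per encrypter instance.

```java
import java.security.SecureRandom;
import java.util.Arrays;

public class IvSource {

    // One shared, lazily seeded PRNG, as in the pattern described above.
    // Seeding happens once per class load, not once per encrypter instance.
    private static final SecureRandom RANDOM = new SecureRandom();

    // Returns a fresh 96-bit (12-byte) IV, the size used by AES/GCM
    public static byte[] nextGcmIv() {
        byte[] iv = new byte[12];
        RANDOM.nextBytes(iv);
        return iv;
    }

    public static void main(String[] args) {
        byte[] iv1 = nextGcmIv();
        byte[] iv2 = nextGcmIv();
        // Each call yields a fresh, independent IV
        System.out.println(iv1.length + " " + Arrays.equals(iv1, iv2));
    }
}
```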

The library JAR is distributed via Maven Central. You can also get it from the download section of the Git repo for the JOSE+JWT library.

Nimbus JOSE+JWT updated to draft suite 10

The Nimbus library for processing JWS/JWE/JWK/JWT objects in Java was updated to comply with the latest draft suite v10 released by the JOSE WG:

An important change is the new method for authenticated AES/CBC encryption based on draft-mcgrew-aead-aes-cbc-hmac-sha2-01 – Authenticated Encryption with AES-CBC and HMAC-SHA. This replaced the previously used method based on a concatenating KDF.

Other changes include the introduction of a “crit” header parameter to designate custom JWS/JWE header parameters that must not be ignored by recipients, several changes in terminology, and a change in the AAD computation for AES/GCM to allow multiple recipients. The complete change log can be found in the draft document histories and in the CHANGELOG file in the library package.

The new library version should appear in Maven Central within a few hours.

Thanks to everyone who contributed, also to our colleagues on the JOSE WG who continue work on refining the specs.

JWS and JWE to secure tokens, messages and channels

Hard work pays off. The positive and encouraging feedback we started receiving from early adopters of Nimbus JOSE + JWT for Java was a great inspiration and, at the same time, motivation to continue refining the library.

Developers cited the new JSON-based formats for message signing and encryption, JWS and JWE, as a significant improvement in simplicity and ease of programming, particularly compared with the existing XML digital signature standard. A lot of effort went into carrying this simplicity over into the library, by making its application-facing calls and class structures as simple and intuitive as possible. The same approach was applied to the API for plugging in crypto algorithms, which should also make tracking draft changes to the JOSE algorithms easier.

Our main intended use of JWS/JWE/JWT is for generating and processing tokens in OpenID Connect. Last week we found a nice use case for JWE to protect identity data that is exchanged between Nimbus AuthService agents and the SaaS apps of customers. Other users have reported success in using JWS/JWE to exchange patient records between hospitals, sign transactions and encrypt message channels between web services.

Huge thanks to the JOSE WG for crafting this specification as well as to all developers who contributed to the library and continue to do so.


Direct JWE encryption

Release 2.14 of the Nimbus JOSE + JWT library adds support for direct JWE encryption using shared symmetric keys. Our particular use case for that is to secure synchronisation of identity data between an on-premise user authentication agent and an OpenID Connect provider service that we’re currently developing.

Direct JWE encryption comes in handy when you have control over both communication endpoints and wish to save the overhead of issuing public / private key pairs, such as RSA, and then wrapping / unwrapping them for the actual encryption / decryption.

Direct encryption is represented by setting the algorithm (alg) header parameter to “dir” and then specifying the actual encryption method to apply on the plain text, e.g. 128-bit AES/GCM:

{ "alg":"dir", "enc":"A128GCM" }

Programmatically you can create the header like this:

import com.nimbusds.jose.*;
JWEHeader header = new JWEHeader(JWEAlgorithm.DIR, EncryptionMethod.A128GCM);

You can then complete the JWE object construction:

JWEObject jweObject = new JWEObject(header, new Payload("Hello world!"));

To encrypt the JWE object you need a key with the matching length for A128GCM encryption, that is 128 bits or 16 bytes. The key must be shared between sender and recipient and kept secret, of course.
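If you don't already have key material to share, a random 128-bit key can be produced with the JDK's standard KeyGenerator; this is a sketch (the variable name is illustrative), as an alternative to hard-coding the byte array as in the snippet below.

```java
import javax.crypto.KeyGenerator;

public class KeyGen {

    public static void main(String[] args) throws Exception {
        // Generate a random 128-bit AES key to share between both endpoints
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(128);
        byte[] key = gen.generateKey().getEncoded();
        System.out.println(key.length); // 16 bytes = 128 bits
    }
}
```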

import com.nimbusds.jose.crypto.*;

byte[] key = { ... }; // 16 bytes = 128 bits
DirectEncrypter encrypter = new DirectEncrypter(key);

// You can now use the encrypter on one or more JWE objects
// that you wish to secure
jweObject.encrypt(encrypter);
String jweString = jweObject.serialize();

The resulting JWE object string will then look similar to this:


Notice that the encrypted key part (the second part) of the JWE string is missing. RSA and ECC encryption will have it, but direct encryption does not.

To decrypt the JWE string on the other end you need the same symmetric key.

jweObject = JWEObject.parse(jweString);
JWEDecrypter decrypter = new DirectDecrypter(key);
jweObject.decrypt(decrypter);

// Prints "Hello world!"
System.out.println(jweObject.getPayload());

The Nimbus JOSE+JWT library supports all standard encryption methods for direct JWE:

  • A128CBC+HS256 – requires 256 bit symmetric key
  • A256CBC+HS512 – requires 512 bit symmetric key
  • A128GCM – requires 128 bit symmetric key
  • A256GCM – requires 256 bit symmetric key

Nimbus JOSE+JWT 2.13.1 maintenance release

We just released a maintenance release of the Java library for creating and processing plain, JWS and JWE objects and JSON Web Tokens (JWTs).

The new release fixes a JWT time unit representation bug that used milliseconds instead of seconds for the exp, nbf and iat claims.
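The bug stems from a common pitfall: the JWT exp, nbf and iat claims are NumericDate values in seconds since the Unix epoch, while java.util.Date.getTime() returns milliseconds. A minimal sketch of the required conversion (the class and method names here are illustrative, not the library's API):

```java
import java.util.Date;

public class NumericDate {

    // JWT "exp", "nbf" and "iat" are seconds since the Unix epoch;
    // java.util.Date works in milliseconds, hence the division
    public static long toClaimValue(Date date) {
        return date.getTime() / 1000L;
    }

    public static void main(String[] args) {
        Date d = new Date(1364292537871L); // a millisecond timestamp
        System.out.println(toClaimValue(d));
    }
}
```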

It also ensures that Header.toBase64() returns the original parsed string for JOSE objects that are being processed. This bug was causing false HMAC integrity check rejections in situations where the sending and the receiving system produce JSON headers with different serialisations.
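Why preserving the original Base64 header matters: the HMAC is computed over the exact Base64URL-encoded header string, so two JSON serialisations of the same header produce different signing inputs. A JDK-only illustration:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class HeaderBytes {

    static String b64url(String s) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(s.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Semantically identical headers, different serialisations
        String compact = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
        String spaced  = "{\"alg\": \"HS256\", \"typ\": \"JWT\"}";
        // The HMAC input includes the Base64URL-encoded header, so these
        // two encodings would produce different signatures
        System.out.println(b64url(compact).equals(b64url(spaced)));
    }
}
```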

Thanks to Jochem Berndsen and to Wisgary Torres from the Microsoft XBox team for reporting these issues.

The new release should become available on Maven Central within a few hours.

RSA encryption added to the Nimbus JOSE + JWT library

The new 2.13 release of the Java library for Javascript Object Signing and Encryption (JOSE) and JSON Web Tokens (JWT) adds support for RSA encryption. You can now create encrypted payloads and tokens with RSAES-PKCS1-V1_5 and RSAES-OAEP, using the authenticated AES/CBC+HMAC and AES/GCM encryption methods.

The following example demonstrates the typical lifecycle of an encrypted JWT, secured with RSA-OAEP and 128-bit AES/GCM.

// Compose the JWT claims set
JWTClaimsSet jwtClaims = new JWTClaimsSet();
jwtClaims.setIssuer("https://..."); // issuer URL elided
jwtClaims.setSubject("alice");
List<String> aud = new ArrayList<String>();
// audience URLs elided
jwtClaims.setAudience(aud);
// Set expiration in 10 minutes
jwtClaims.setExpirationTime(new Date(new Date().getTime() + 1000*60*10));
jwtClaims.setNotBeforeTime(new Date());
jwtClaims.setIssueTime(new Date());
jwtClaims.setJWTID(UUID.randomUUID().toString());

// Produces 
// { 
//   "iss" : "https:\/\/",
//   "sub" : "alice",
//   "aud" : [ "https:\/\/" , "https:\/\/" ],
//   "exp" : 1364293137871,
//   "nbf" : 1364292537871,
//   "iat" : 1364292537871,
//   "jti" : "165a7bab-de06-4695-a2dd-9d8d6b40e443"
// }

// Request JWT encrypted with RSA-OAEP and 128-bit AES/GCM
JWEHeader header = new JWEHeader(JWEAlgorithm.RSA_OAEP, EncryptionMethod.A128GCM);

// Create the encrypted JWT object
EncryptedJWT jwt = new EncryptedJWT(header, jwtClaims);

// Create an encrypter with the specified public RSA key
RSAEncrypter encrypter = new RSAEncrypter(publicKey);

// Do the actual encryption
jwt.encrypt(encrypter);
// Serialise to JWT compact form
String jwtString = jwt.serialize();

// Produces 
// eyJhbGciOiJSU0EtT0FFUCIsImVuYyI6IkExMjhHQ00ifQ.K52jFwAQJH-
// DxMhtaq7sg5tMuot_mT5dm1DR_01wj6ZUQQhJFO02vPI44W5nDjC5C_v4p
// W1UiJa3cwb5y2Rd9kSvb0ZxAqGX9c4Z4zouRU57729ML3V05UArUhck9Zv
// ssfkDW1VclingL8LfagRUs2z95UkwhiZyaKpmrgqpKX8azQFGNLBvEjXnx
// -xoDFZIYwHOno290HOpig3aUsDxhsioweiXbeLXxLeRsivaLwUWRUZfHRC
// _HGAo8KSF4gQZmeJtRgai5mz6qgbVkg7jPQyZFtM5_ul0UKHE2y0AtWm8I
// zDE_rbAV14OCRZJ6n38X5urVFFE5sdphdGsNlA.gjI_RIFWZXJwaO9R.oa
// E5a-z0N1MW9FBkhKeKeFa5e7hxVXOuANZsNmBYYT8G_xlXkMD0nz4fIaGt
// uWd3t9Xp-kufvvfD-xOnAs2SBX_Y1kYGPto4mibBjIrXQEjDsKyKwndxzr
// utN9csmFwqWhx1sLHMpJkgsnfLTi9yWBPKH5Krx23IhoDGoSfqOquuhxn0
// y0WkuqH1R3z-fluUs6sxx9qx6NFVS1NRQ-LVn9sWT5yx8m9AQ_ng8MBWz2
// BfBTV0tjliV74ogNDikNXTAkD9rsWFV0IX4IpA.sOLijuVySaKI-FYUaBy
// wpg

// Parse back
jwt = EncryptedJWT.parse(jwtString);

// Create a decrypter with the specified private RSA key
RSADecrypter decrypter = new RSADecrypter(privateKey);

// Decrypt
jwt.decrypt(decrypter);

// Retrieve the JWT claims
ReadOnlyJWTClaimsSet claims = jwt.getJWTClaimsSet();
Check out src/test/java/com/nimbusds/jwt/ for the complete code.

Don’t forget that the library JavaDocs are your friend. They include full description of all classes and methods, references to the applicable JOSE and other spec sections as well as short code snippets.

Thanks to Axel Nennker, David Ortiz and Juraj Somorovsky from the JOSE community, who contributed concrete code and suggestions.

Note that the current A128CBC+HS256 and A256CBC+HS512 specifications may be revised by the JOSE WG to switch to draft-mcgrew-aead-aes-cbc-hmac-sha2-01 for applying authenticated encryption to AES/CBC.

Json2Ldap adds directory server fail-over and load-balancing

The new year began with the third major release of Json2Ldap, the established middleware for web-friendly JSON-RPC access to directory servers such as MS Active Directory, OpenLDAP and OpenDJ.

This is a summary of what’s in the new 3.0 version.

Directory server fail-over and load-balancing

Resilience and scalability are common requirements in enterprise and SaaS applications. You can now configure Json2Ldap to fail-over between two or more directory servers, or alternatively, to perform round-robin directory server selection.

Fail-over provides backup in case your primary LDAP server fails. To handle such situations you can specify secondary, tertiary and further servers for Json2Ldap to connect to if the primary becomes unavailable.

To configure fail-over:

json2ldap.defaultLDAPServer.selectionAlgorithm = FAILOVER
json2ldap.defaultLDAPServer.url = ldap:// ldap://
json2ldap.defaultLDAPServer.connectTimeout = 500

The above configuration sets a secondary LDAP server. Json2Ldap will direct new ldap.connect requests to it if it cannot connect to the primary server within 500 milliseconds.

Round-robin is the alternative server selection method. It provides for load-balancing between multiple servers and is also a form of fail-over in case one or more of the servers in the set become unavailable.

To configure round-robin selection with two servers, using again a connect time-out of 500ms:

json2ldap.defaultLDAPServer.selectionAlgorithm = ROUND-ROBIN
json2ldap.defaultLDAPServer.url = ldap:// ldap://
json2ldap.defaultLDAPServer.connectTimeout = 500

You can find out more about configuring fail-over and round-robin operation in the Json2Ldap docs on specifying the default LDAP server.

Support for Virtual-List-View searches

The Virtual-List-View (VLV) control (draft-ietf-ldapext-ldapv3-vlv-09) allows web clients to retrieve an arbitrary subset of an LDAP search result. You can think of it as a sophisticated makeover of the Simple Paged Results (RFC 2696) control. The VLV control allows you to specify an offset into the result set and a number of entries to get before and after the offset position. This can for instance be of great help in devising efficient browsing web UIs when the result sets are expected to be huge.

To apply VLV to a search request, add a parameter like this:

"vlv" : { "offset":50, "after":24 }

This requests 25 entries, starting at offset fifty. Note that the VLV control also requires a sort control to determine the entry order, e.g.:

"sort" : { "key":"givenName" }
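The size of the window a VLV request selects follows directly from its parameters: the target entry at the offset, plus the entries requested before and after it. A tiny illustrative helper (not part of the Json2Ldap API), per the semantics of draft-ietf-ldapext-ldapv3-vlv-09:

```java
public class VlvWindow {

    // Number of entries a VLV request selects: the target entry at the
    // offset, plus "before" entries preceding it and "after" entries
    // following it
    public static int windowSize(int before, int after) {
        return before + 1 + after;
    }

    public static void main(String[] args) {
        // { "vlv" : { "offset":50, "after":24 } } selects 25 entries
        System.out.println(windowSize(0, 24));
    }
}
```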

I have put together a list of the directory servers that support Virtual-List-View. A brief look at the list will show that it's widely supported. You may, however, have to create specific indexes in your directory to enable it. Check your server documentation for details.

Use of the VLV control in Json2Ldap is detailed in the API docs.

Normalising attribute names

The attribute names that Json2Ldap returns with ldap.getEntry, and ldap.getRootDSE appear as defined in the directory schema. However, if you don't have prior knowledge of their letter-case, retrieving them from the result JSON object can be a problem.

var givenName = result["givenName"];

The solution to this has been to normalise all names, e.g. by converting them to lower-case, before processing the result.

Json2Ldap 3.0 adds a new optional “normalize” parameter to solve this problem. With it, all attribute names in the result are normalised, i.e. converted to lower-case, so you don't have to do the extra conversion yourself; it is done for you, efficiently, inside Json2Ldap.
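Without the flag, a client would have to do the lower-casing itself. A JDK-only sketch of that conversion, with a hypothetical attribute map standing in for the parsed JSON result:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Normalise {

    // Lower-cases all attribute names in a result map, which is what the
    // "normalize" parameter spares the client from doing
    public static Map<String, Object> lowerCaseKeys(Map<String, Object> attrs) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : attrs.entrySet()) {
            out.put(e.getKey().toLowerCase(), e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> attrs = new LinkedHashMap<>();
        attrs.put("givenName", "Alice");
        attrs.put("mail", "alice@example.com");
        // Lookups no longer depend on the schema's letter-case
        System.out.println(lowerCaseKeys(attrs).containsKey("givenname"));
    }
}
```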

Retrieving subordinate subtree entries

The SUBORDINATE_SUBTREE scope parameter has been renamed to SUBORDINATES. This is the only breaking change in the Json2Ldap web API and was done to make the parameter compatible with the LDAP URL schema.

API keys

You can now also specify API keys to ensure only selected web clients can access the web API of Json2Ldap. These are passed as an optional “apiKey” parameter to each web call. You can define API keys for global access as well as keys that are only for certain API calls. Check out the API key configuration to find out more.

The classic X.509 client certificate access controls are still supported.

Simplified configuration files

Json2Ldap 3.0 moved the configuration parameters from the web.xml file to a set of simpler text properties files that continue to follow the established semantics. Many of the configuration properties were also given better and more intuitive names. As a result the whole task of configuring Json2Ldap should have become easier and less error-prone (by switching from XML to simple key / value properties). Adding a web UI configuration editor is on the roadmap for 2013 / 2014.

Ready to try out Json2Ldap 3.0?

You can get a copy of Json2Ldap 3.0 for evaluation from the download section. Existing subscribers should have already been notified and received their download links for the full version.

You’re welcome to get in touch with us if you have questions about the new version or feedback to share.



The full 3.0 change log is on the Json2Ldap spec page.


Nimbus JOSE + JWT 2.10 is on Maven Central now

The latest stable release of the Java library for handling JOSE + JWT messages is now available on Maven Central.

Thanks to Justin Richer from MITRE for taking care of all necessary work to package and submit the library to the repo.

Earlier this month the library was updated to the latest -08 draft suite of JOSE, as well as JWT -06. We're still looking for a Java developer to contribute support for the standard JWE algorithms. Signature support for the standard HMAC, RSASSA and ECDSA algorithms has been fully available since the first release.

The library Git repo is at

The sources come with ample documentation and include useful examples: