OpenSSL PKCS7 verification and certificate "Extended Key Usage" extension

Problem

You verify a signature of a PKCS#7 structure with OpenSSL and get the error

  unsupported certificate purpose

This post explains the reason for this error and ways to proceed.

Background

By "verify a signature", one probably means that:

  1. The signature itself (e.g. an RSA block) taken over the corresponding data (or its digest) validates against the signing certificate.
  2. Two sets of certificates are available, which we'd call "trusted certificates" and "chaining certificates". A chain from the signing certificate up to at least one of the trusted certificates can be built with the chaining certificates.
  3. All certificates in this chain have "acceptable" X.509v3 extensions.

The first requirement is clear.

The second one is clear when the sets are defined. OpenSSL API requires them to be passed as parameters for the verification.

The last requirement relies on X.509v3 extensions, which are a terrible mess.

It's hard to provide a non-messy solution for a messy specification. The CERTIFICATE EXTENSIONS section in the OpenSSL manual for the x509 subcommand has this passage:

The actual checks done are rather complex and include various hacks and
workarounds to handle broken certificates and software.

It looks like PKCS7 verification fell victim to these "hacks and workarounds".

OpenSSL certificate verification and X.509v3 extensions

Before getting to the topic (verifying PKCS#7 structures), let us look at how OpenSSL verifies certificates. Both the command-line openssl verify and the C API function X509_verify_cert() have a notion of a purpose, explained in the section CERTIFICATE EXTENSIONS of man x509. This notion seems to be particular to OpenSSL.

  • If the purpose is not specified, then OpenSSL does not check the certificate extensions at all.
  • Otherwise, for each purpose, OpenSSL allows certain combinations of the extensions.

The correspondence between OpenSSL's purpose and X.509v3 extensions is far from one-to-one. For example, the purpose S/MIME Signing (short name smimesign) requires that:

  1. "Common S/MIME Client Tests" pass (description of how they translate to X.509v3 extension takes a long paragraph in man x509).
  2. Either KeyUsage extension is not present, or it is present and digigalSignature bit is set.

For another example, there seems to be no OpenSSL command-line option for verify to require the presence of Extended Key Usage bits like codeSigning. For that, one must use the C API and check each relevant extension bit separately.
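For illustration, such a check could look like the following sketch in C. It tests whether a certificate's Extended Key Usage contains the codeSigning OID; the helper name has_code_signing_eku is ours, not an OpenSSL API, and error handling is minimal:

    #include <openssl/x509v3.h>

    /* Sketch: does the certificate's Extended Key Usage include codeSigning?
     * Returns 1 if yes, 0 if the extension is absent or does not include it. */
    static int has_code_signing_eku(X509 *cert)
    {
        int i, found = 0;
        EXTENDED_KEY_USAGE *eku = X509_get_ext_d2i(cert, NID_ext_key_usage, NULL, NULL);

        if (eku == NULL)
            return 0;                   /* extension not present */
        for (i = 0; i < sk_ASN1_OBJECT_num(eku); i++)
            if (OBJ_obj2nid(sk_ASN1_OBJECT_value(eku, i)) == NID_code_sign)
                found = 1;
        EXTENDED_KEY_USAGE_free(eku);
        return found;
    }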

So far, this sounds about as logical as it can be for somehow handling The Terrible Mess of X.509v3 extensions. The OpenSSL CLI seems to have made an attempt to compose some "frequently used combinations" of the extensions and to name them with its own term, "purpose".

OpenSSL PKCS#7 verification and X.509v3 extensions

For a reason yet unknown to the author, OpenSSL uses a different strategy when verifying PKCS#7.

Command-line

There are two command-line utilities which can do that: openssl smime -verify and openssl cms -verify (S/MIME and CMS are both PKCS#7). Both accept the -purpose option, which according to the manual pages has the same meaning as for certificate verification. But it does not; these are the differences:

  1. If no -purpose option is passed, both commands behave as though they received -purpose smimesign.

  2. It is possible to disable this smimesign purpose checking by passing -purpose any.

C API

On the C API side, one is supposed to use PKCS7_verify() for PKCS#7 verification. This function also behaves as though it verifies with the smimesign purpose (see the setting of X509_PURPOSE_SMIME_SIGN in pk7_doit.c:919).

This again means that verification fails unless your signing certificate satisfies two conditions (a sketch of how to test this up front follows the list):

  1. If the Extended Key Usage extension is present, then it must include the emailProtection OID.
  2. If the Key Usage extension is present, then it must include the digitalSignature bit.
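As a sketch (the helper name is ours), the standard X509_check_purpose() call can test in advance whether a given certificate would pass the check that PKCS7_verify() applies implicitly:

    #include <openssl/x509v3.h>

    /* Sketch: does `cert` pass OpenSSL's "smimesign" purpose check?
     * The third argument 0 means "check as an end entity, not as a CA". */
    static int satisfies_smime_sign(X509 *cert)
    {
        return X509_check_purpose(cert, X509_PURPOSE_SMIME_SIGN, 0) > 0;
    }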

As with the command line, it is possible to disable checking the extensions, although with more typing.

In the C API, the verification "purpose" is a property of the X509_STORE passed to PKCS7_verify(), which plays the role of the trusted certificate set.

Side note: manipulating the parameters directly on the store became possible only in OpenSSL 1.1.0 with X509_STORE_get0_param(X509_STORE *store). In earlier versions, an X509_STORE_CTX must be created from the store and the parameters manipulated with X509_STORE_CTX_get0_param(). By the way, support for OpenSSL 1.0.1 ended on the very day of this writing.
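For OpenSSL 1.1.0 and newer, the side note translates into a few lines like the following sketch (here trusted_store stands for the store that is later passed to PKCS7_verify()):

    #include <openssl/x509_vfy.h>
    #include <openssl/x509v3.h>

    /* Sketch, OpenSSL >= 1.1.0: relax the purpose check directly on the store
     * that will be passed to PKCS7_verify(); no X509_STORE_CTX is needed. */
    static void disable_purpose_check(X509_STORE *trusted_store)
    {
        X509_VERIFY_PARAM *param = X509_STORE_get0_param(trusted_store);
        X509_VERIFY_PARAM_set_purpose(param, X509_PURPOSE_ANY);
    }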

Demo

Prepare the files

Create a chain of certificates: self-signed "root", then an "intermediate" signed by the root, then a "signing" signed by the intermediate.

Write appropriate OpenSSL config files, openssl-CA.cnf and openssl-signing.cnf, referenced in the commands below.

Create requests for all the three:

  $ openssl req -config openssl-CA.cnf -new -x509 -nodes -outform pem -out root.pem -keyout root-key.pem
  $ openssl req -config openssl-CA.cnf -new -nodes -out intermediate.csr -keyout intermediate-key.pem
  $ openssl req -config openssl-signing.cnf -new -nodes -outform pem -out signing-crlsign.csr -keyout signing-crlsign-key.pem

Sign the intermediate and the signing certificates:

  $ mkdir -p demoCA/newcerts
  $ touch demoCA/index.txt
  $ echo '01' > demoCA/serial
  $ openssl ca -config openssl-CA.cnf -in intermediate.csr -out intermediate.pem -keyfile root-key.pem -cert root.pem
  $ openssl ca -config openssl-signing.cnf -in signing-crlsign.csr -out signing-crlsign.pem -keyfile intermediate-key.pem -cert intermediate.pem

Create some PKCS7 structure, signed with the signing certificate. The chaining certificates must be provided during the verification, or embedded into the signature. Let's embed the intermediate certificate. (If there were more than one certificate in the chain, they would simply be placed together in one .pem file):

  $ echo 'Hello, world!' > data.txt
  $ openssl smime -sign -in data.txt -inkey signing-crlsign-key.pem -signer signing-crlsign.pem -certfile intermediate.pem -nodetach > signed-crlsign.pkcs7

We have everything ready for verifying.

Verification with command-line OpenSSL tools

Attempt to verify it:

  $ openssl smime -verify -CAfile root.pem -in signed-crlsign.pkcs7 -out /dev/null -signer signing-crlsign.pem 
  Verification failure
  139944505955992:error:21075075:PKCS7 routines:PKCS7_verify:certificate verify error:pk7_smime.c:336:Verify error:unsupported certificate purpose

Attempt to verify, skipping extension checks:

  $ openssl smime -verify -CAfile root.pem -in signed-crlsign.pkcs7 -out /dev/null -signer signing-crlsign.pem -purpose any
  Verification successful

Attempt to verify it, specifying the OpenSSL "purpose" which the signing certificate satisfies:

  $ openssl smime -verify -CAfile root.pem -in signed-crlsign.pkcs7 -out /dev/null -signer signing-crlsign.pem -purpose crlsign
  Verification successful

Verification with the C OpenSSL API

The code below is "demo", any real application would have at least to check return codes of all system calls and free any allocated resources. But it shows how the verification of PKCS#7 structure (unexpectedly) fails, and succeeds after setting the "purpose" which the signing certificate satisfies:

    #include <stdlib.h>
    #include <stdio.h>
    #include <fcntl.h>              /* open() */

    #include <openssl/bio.h>
    #include <openssl/err.h>
    #include <openssl/ssl.h>
    #include <openssl/pkcs7.h>
    #include <openssl/safestack.h>
    #include <openssl/x509.h>
    #include <openssl/x509v3.h>     /* X509_PURPOSE_ANY */
    #include <openssl/x509_vfy.h>
    #include <openssl/pem.h>        /* PEM_read_X509() */

    int main(int argc, char* argv[]) {
      X509_STORE *trusted_store;
      X509_STORE_CTX *ctx;
      STACK_OF(X509) *cert_chain;
      X509 *root, *intermediate, *signing;
      BIO *in;
      int purpose, ret;
      X509_VERIFY_PARAM *verify_params;
      PKCS7 *p7;
      FILE *fp;
      int fd;

      SSL_library_init();
      SSL_load_error_strings();

      fd = open("signed-ext-no-smimesign.pkcs7", O_RDONLY);
      in = BIO_new_fd(fd, BIO_NOCLOSE);
      p7 = SMIME_read_PKCS7(in, NULL);

      cert_chain = sk_X509_new_null();

      fp = fopen("root.pem", "r");
      root = PEM_read_X509(fp, NULL, NULL, NULL);
      sk_X509_push(cert_chain, root);

      fp = fopen("intermediate.pem", "r");
      intermediate = PEM_read_X509(fp, NULL, NULL, NULL);
      sk_X509_push(cert_chain, intermediate);

      trusted_store = X509_STORE_new();
      X509_STORE_add_cert(trusted_store, root);

      fp = fopen("signing-ext-no-smimesign.pem", "r");
      signing = PEM_read_X509(fp, NULL, NULL, NULL);

      ret = PKCS7_verify(p7, cert_chain, trusted_store, NULL, NULL, 0);
      printf("Verification without specifying params: %s\n", ret ? "OK" : "failure");

      /* Now set a suitable OpenSSL's "purpose", or disable its checking.
       * Note: since OpenSSL 1.1.0, we'd not need `ctx`, but could just use:
       * verify_params = X509_STORE_get0_param(trusted_store); */

      ctx = X509_STORE_CTX_new();
      X509_STORE_CTX_init(ctx, trusted_store, signing, cert_chain);
      verify_params = X509_STORE_CTX_get0_param(ctx);
      purpose = X509_PURPOSE_get_by_sname("crlsign");             /* index into the purpose table */
      purpose = X509_PURPOSE_get_id(X509_PURPOSE_get0(purpose));  /* index -> purpose id; or simply use X509_PURPOSE_ANY */
      X509_VERIFY_PARAM_set_purpose(verify_params, purpose);
      X509_STORE_set1_param(trusted_store, verify_params);

      ret = PKCS7_verify(p7, cert_chain, trusted_store, NULL, NULL, 0);
      printf("Verification with 'crlsign' purpose: %s\n", ret ? "OK" : "failure");
      return 0;
    }

If our policy requires the cRLSign Key Usage, then we can use this example code. What if the policy needs some extension combination for which there is no suitable OpenSSL "purpose", for example the codeSigning Extended Key Usage? In that case it is not possible to do it with just one call to PKCS7_verify(); the extensions need to be checked separately (for example, with a helper like the EKU check sketched earlier).

Conclusion

If you use OpenSSL for verifying PKCS#7 signatures, you should check whether either of the following holds:

  1. Your signing certificate has the Extended Key Usage extension, but it does not include the emailProtection OID.
  2. Your signing certificate has the Key Usage extension, but it does not include the digitalSignature bit.

If this is the case, then verification with OpenSSL fails even if your signature "should" verify correctly.

For checking signatures with the command-line openssl smime -verify, a partial workaround is to add the option -purpose any. In this case OpenSSL does not check the certificate extensions at all. This may or may not be acceptable under your verification policy.

The -purpose option allows checking only for certain (although probably common) X.509v3 extension combinations: OpenSSL defines a number of what it calls "purposes". If you need to check a combination which does not correspond to any of these "purposes", it must be done as a separate operation.

For checking signatures with the C API PKCS7_verify(), the algorithm can be the following:

  1. Check the X509v3 extensions of the signing certificate as required by your policy (example).
  2. Either set your verification parameters to X509_PURPOSE_ANY, or set a custom verification callback which ignores the "unsupported certificate purpose" error, i.e. X509_V_ERR_INVALID_PURPOSE; a sketch of such a callback follows.
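A minimal sketch of the second option, assuming trusted_store is the X509_STORE that will be passed to PKCS7_verify() (as in the demo above):

    #include <openssl/x509_vfy.h>

    /* Sketch: a verification callback which ignores only the
     * "unsupported certificate purpose" error and keeps all other results. */
    static int ignore_purpose_cb(int ok, X509_STORE_CTX *ctx)
    {
        if (!ok && X509_STORE_CTX_get_error(ctx) == X509_V_ERR_INVALID_PURPOSE)
            return 1;               /* accept despite the purpose mismatch */
        return ok;
    }

    static void install_purpose_tolerant_callback(X509_STORE *trusted_store)
    {
        X509_STORE_set_verify_cb(trusted_store, ignore_purpose_cb);
    }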

ARP messages: Request, Reply, Probe, Announcement

ARP, as originally specified in RFC 826, had two message types: REQUEST and REPLY. But besides those, one can hear of two "other" ARP messages: PROBE and ANNOUNCEMENT, the latter also called "gratuitous ARP". They are sent by a host without any preceding request; they were formalized in RFC 5227.

One may find confusing that:

  • The two "new" messages do not extend the protocol. In particular, they do not introduce new ARP message types. How can we speak about "new messages"?
  • Source and destination MAC addresses are present in the MAC headers, but there are also "Source Hardware Address" and "Target Hardware Address" fields in the ARP body. Why this duplication?

These two questions are related. The breakdown below shows, for each of the four messages, the MAC header fields and the (relevant) ARP body fields. The differences help to understand how each field is used.

  request
    Sent when the host wants to send an IP packet, but does not know the MAC address of the destination.
    MAC header: src MAC = own, dst MAC = broadcast
    ARP body:   message type = REQUEST; Source Hardware Address = own; Source Protocol Address = own;
                Target Hardware Address = 0; Target Protocol Address = destination's

  reply
    Sent when the host receives an ARP request to an IP address this host owns.
    MAC header: src MAC = own, dst MAC = destination's, or broadcast [1]
    ARP body:   message type = REPLY; Source Hardware Address = own; Source Protocol Address = own;
                Target Hardware Address = requestor's; Target Protocol Address = requestor's

  probe (RFC 5227, 2.1)
    Sent when the host configures a new IP address for an interface.
    MAC header: src MAC = own, dst MAC = broadcast
    ARP body:   message type = REQUEST [2]; Source Hardware Address = own; Source Protocol Address = 0 [3];
                Target Hardware Address = 0; Target Protocol Address = probed address

  announcement (RFC 5227, 2.3)
    Sent when the host, after a probe, concludes that it will use the probed address.
    MAC header: src MAC = own, dst MAC = broadcast
    ARP body:   message type = REQUEST [2]; Source Hardware Address = own; Source Protocol Address = new own;
                Target Hardware Address = 0; Target Protocol Address = new own

Notes

  1. RFC 5227, section 2.6 explains why "Broadcast replies are not universally recommended, but may be appropriate in some cases".
  2. RFC 5227, section 3 notes that the type here could be REPLY as well, then continues to give reasons why REQUEST is recommended.
  3. Zero is specified here in order not to pollute the ARP caches of other hosts in case the probed address is already taken by someone else.
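To make the duplication discussed above concrete, here is a sketch of the fixed-size Ethernet/IPv4 ARP message body as defined in RFC 826; the Ethernet header, with its own source and destination MAC addresses, precedes this structure on the wire:

    #include <stdint.h>

    /* Sketch of an Ethernet/IPv4 ARP message body (RFC 826).  The sender/target
     * hardware addresses here are separate from the source/destination MAC
     * addresses of the Ethernet header carrying the message. */
    struct arp_ipv4 {
        uint16_t htype;     /* hardware type: 1 = Ethernet */
        uint16_t ptype;     /* protocol type: 0x0800 = IPv4 */
        uint8_t  hlen;      /* hardware address length: 6 */
        uint8_t  plen;      /* protocol address length: 4 */
        uint16_t oper;      /* 1 = REQUEST, 2 = REPLY: still the only two types */
        uint8_t  sha[6];    /* Source (sender) Hardware Address */
        uint8_t  spa[4];    /* Source (sender) Protocol Address */
        uint8_t  tha[6];    /* Target Hardware Address */
        uint8_t  tpa[4];    /* Target Protocol Address */
    };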

Possible implications of netmask mismatch

Summary: an IPv4 host with a netmask not matching that of the subnet to which the interface is connected likely builds incorrect routing tables, misses some broadcasts, may incorrectly identify broadcasts as unicasts, and may unintentionally broadcast to its own subnet.

What does it mean - a "wrong netmask"?

The netmask (in IPv4 terminology) and the network prefix (in IPv6 terminology) can be associated with an IP subnet, and correspondingly with a network interface. This post handles IPv4 only, so the term "netmask" will be used. Together with the interface's own IP address, the netmask determines whether another IP address belongs to the same IP subnet as the NIC.

Good, so how is this knowledge used?

Processing of multicast packets is not affected by the netmask, so multicast is not mentioned further. For unicast and broadcast, the netmask is consulted in three different situations, listed in the following sections.

Case 1. Netmask can be used as input for constructing the routing table.

The routing system normally creates routes to the subnet of each network interface automatically. That is, for each network interface I with address A_I and netmask M, the host calculates the subnet of this interface as S_I = A_I & M. Outgoing packets to any address A_P such that A_P & M = S_I are emitted from the interface I.

While this behavior is typical, nothing mandates hosts to create such a routing table entry. For example, if a host has two interfaces on the same subnet, then obviously some more information is needed to decide which of the interfaces shall emit the packets destined to their common subnet. Another example is a firewall with a more restrictive forwarding policy than just "put every packet for subnet S_I to interface I".
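As a small sketch of the computation just described (addresses and masks written as 32-bit values in host byte order):

    #include <stdint.h>

    /* Sketch of Case 1: the subnet of an interface, and the test whether a
     * destination address falls into it (the basis of the automatically
     * created subnet route). */
    static uint32_t interface_subnet(uint32_t if_addr, uint32_t if_mask)
    {
        return if_addr & if_mask;                                        /* S_I = A_I & M */
    }

    static int destined_to_interface_subnet(uint32_t dst, uint32_t if_addr, uint32_t if_mask)
    {
        return (dst & if_mask) == interface_subnet(if_addr, if_mask);    /* A_P & M == S_I */
    }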

Case 2. Netmask is used to determine whether an arrived packet is a (directed) broadcast to a subnet of some local interface.

After the routing is covered, we can limit our further investigation to only:

  • Unicast packets, destined to "this host" (i.e. one of its interfaces).
  • Directed broadcast packets to "this network". There can be more than one "this" network if the host has more than one network interface (the host may or may not be a router).

Indeed,

  • Directed broadcast to a network not in the "our networks" set is handled like any other packet, subject to possible routing.
  • Local broadcast packets are obviously not affected by the netmask setting.

For hosts which are not routers, RFC 922 defines handling of broadcast packets in a simple way:

In the absence of broadcasting, a host determines if it is the
recipient of a datagram by matching the destination address against
all of its IP addresses.  With broadcasting, a host must compare the
destination address not only against the host's addresses, but also
against the possible broadcast addresses for that host.
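The receiver-side test implied by this quote can be sketched as follows (a simplification: the limited broadcast 255.255.255.255 and the plain unicast match are left out; addresses are 32-bit values in host byte order):

    #include <stdint.h>

    /* Sketch of Case 2: a packet is taken as a directed broadcast for an
     * interface if its destination equals the interface's subnet with all
     * host bits set to 1. */
    static int is_directed_broadcast(uint32_t dst, uint32_t if_addr, uint32_t if_mask)
    {
        return dst == ((if_addr & if_mask) | ~if_mask);
    }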

Now imagine that an interface of some host has a netmask which does not match that of the subnet the interface is connected to. This is what happens.

Netmask of the interface is shorter

  • Interface misconfigured with a shorter netmask fails to process broadcasts: such a host understands them as unicasts.

  • Example: in /24 network 1.1.1.0, a packet to a broadcast address 1.1.1.255 will not be recognized as broadcast by a misconfigured interface 1.1.1.1/16.

That is, unless the network has all bits in the netmask difference equal to 1.

  • Example: in /24 network 1.1.255.0, a packet to a broadcast address 1.1.255.255 will be, by a coincidence, correctly accepted as broadcast by a misconfigured interface 1.1.1.1/16.

  • A broadcast packet which is incorrectly understood as unicast by a misconfigured interface can also happen to bear the destination address of that interface itself.

  • Example: in /16 network 1.1.0.0, a broadcast packet to 1.1.255.255 will be received as unicast by a misconfigured interface 1.1.255.255/8.

  • Additionally, the host may attempt to send a unicast packet which would appear as a valid broadcast on the network.

  • Example: in /16 network 1.1.0.0, a host misconfigured as 1.1.1.1/8 sends a unicast to destination address 1.1.255.255. It appears as a broadcast on this network. In fact, there can be no host with address 1.1.255.255 on this network (as it is a broadcast address), so nobody answers the ARP query and the host will not be able to send such a packet.

Netmask of the interface is longer

  • Interface misconfigured with a longer netmask fails to process broadcasts as well: it considers them as not belonging to its own subnet.

  • Example: in /8 network 1.0.0.0, a packet to a broadcast address 1.255.255.255 will not be received by a misconfigured interface 1.1.1.1/16.

Again, unless the address of the misconfigured interface happens to have all bits in the netmask difference equal to 1.

  • Example: in that same network, that same broadcast packet will be accepted just fine by a misconfigured interface 1.255.1.1/16.

For hosts which are routers, RFC 922 adds a clause concerning broadcast packets destined to an interface other than the one on which the packet is received:

...if the datagram is addressed to a hardware network
to which the gateway is connected, it should be sent as a
(data link layer) broadcast on that network.  Again, the
gateway should consider itself a destination of the datagram.

In this case, the netmask of the router's interface where the packet has been received is not relevant: the packet should be processed anyway. Instead, the configuration of the packet's destination interface is the basis for the decision. Correspondingly, a mismatch between the netmask of the destination interface and the sender's expectation of the netmask leads to the same consequences as listed above for non-forwarding hosts.

Have we covered all cases? Three independent factors affect the outcome:

  • Is the receiver's netmask shorter or longer than that of the subnet it is connected to?
  • Are the bits from the difference in netmask lengths all equal to one?
  • Is the packet unicast or (directed) broadcast?

All 8 possibilities have been considered above.

Case 3. Netmask is used for setting destination address of outgoing broadcast packets.

When a host wishes to send a broadcast packet from a certain interface, it sets the destination address to that of the interface, with all bits which are zero in the netmask set to 1. Correspondingly:

Netmask of the network interface is shorter

A host with a shorter netmask sets too many bits to 1. Other hosts on the local subnet recognize these packets as belonging to another subnet and consequently do not process them.

  • Example: in the network 1.1.1.0/24, a host misconfigured as 1.1.1.1/16 sends what it thinks is a "broadcast" with destination 1.1.255.255. (It will be sent as a link-layer broadcast.) No other host on this network accepts it.

Unless the network has all bits in the netmask difference equal to one.

  • Example: in /24 network 1.1.255.0/24, a misconfigured host 1.1.255.1/16 sends a "broadcast" packet to 1.1.255.255, which happens to be a valid broadcast on this network.

Netmask of the network interface is longer

A host with a longer netmask does not set enough bits to 1. The packets sent as broadcast will be recognized as unicast by other hosts on this subnet.

  • Example: in /8 network 1.0.0.0/8, a host misconfigured as 1.1.1.1/16 sends what it thinks to be a broadcast packet to 1.1.255.255. It appears as valid unicast on this subnet. If there is a host with address 1.1.255.255, this host will accept this packet. (Besides probably unexpected IP content, the host may also notice that the layer 2 address of this packet was a layer 2 broadcast.)

Naturally, these cases are a "reversed" repetition of the cases for the receiving hosts; the computation is sketched below.
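A small worked example of the sender-side computation of Case 3, with the addresses from the examples above (a sketch; addresses are written as 32-bit constants in host byte order):

    #include <stdio.h>
    #include <stdint.h>

    /* Sketch of Case 3: the directed broadcast address an interface uses is its
     * own address with all bits that are zero in the netmask set to 1. */
    static uint32_t broadcast_address(uint32_t if_addr, uint32_t if_mask)
    {
        return if_addr | ~if_mask;
    }

    int main(void)
    {
        uint32_t addr = 0x01010101;                              /* 1.1.1.1 */
        printf("%08x\n", broadcast_address(addr, 0xffffff00));   /* /24 -> 010101ff = 1.1.1.255 */
        printf("%08x\n", broadcast_address(addr, 0xffff0000));   /* /16 -> 0101ffff = 1.1.255.255 */
        return 0;
    }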

Conclusion

The netmask is normally (but not necessarily) used as input for routing table construction. If it is used, then a wrong interface netmask makes the following routing failures possible:

  • Too long a netmask: the host has no route for some packets which actually belong to the subnet of this interface. An attempt to send a packet to a host outside the too-long misconfigured netmask, but inside the correct netmask of the network, results in the ICMP error "Destination net unreachable". If there is a default outgoing interface, the host will not generate the error, but will send the packets to the default interface instead of the interface of this subnet.
  • Too short a netmask: the host may attempt to send, out of this interface, packets which will not be received by any host of the connected subnet. The attempt probably fails because no host answers the ARP request, which results in the ICMP error "Destination host unreachable".

In IPv4, directed broadcast packets are sent and received using the netmask information. Directed broadcast is a marginal case; such packets are rarely used and are dropped by most routers, as per RFC 2644. But if directed broadcasts are used, then a mismatched netmask results in any of:

  • failure to receive broadcast packets
  • failure to forward broadcast packets by routers
  • forwarding broadcast packets, destined to own network
  • accepting unicast packets, destined to some host, as broadcasts
  • accepting broadcast packets as unicast.

Support for elliptic curves by jarsigner

Summary: Support for cryptography features by jarsigner depends on available Java crypto providers.

Suppose you are defining a PKI profile. You naturally want to use the stronger algorithms with better performance, which (as of year 2014) means elliptic curves. Besides bit strength and performance, you want to be sure that the curve is supported by your software. If the latter includes jarsigner, you will be surprised to find that the Oracle documentation does not seem to mention at all which elliptic curves jarsigner supports.

Signing a JAR means adding digests of the JAR entries to the manifest file (META-INF/MANIFEST.MF), adding a digest of the latter to the manifest signature file (META-INF/*.SF), and then creating the JAR signature block file (META-INF/*.EC, in case Elliptic Curve keys are used). The last step involves two operations:

  1. calculating a digest over the manifest signature file;
  2. signing (i.e. encrypting with the private key) that digest.

Jarsigner has an option -sigalg, which is supposed to specify the two algorithms used in these two steps. (There is also a -digestalg option, but it is not used for the signature block file; it defines the digest algorithm used in the two initial steps.) Well, this option is irrelevant to our question: the curve is in fact determined by the provided private key. So jarsigner will either do the job or choke on a key which comes from an unsupported curve.

A curve may "not work" because it is unknown to jarsigner itself, or to an underlying crypto provider. (The latter case was a reason to a bug 1006776, a setup where only three curves actually worked.) Attempt to sign the JAR with jarsigner using a non-supported private key would result in a not very helpful error message:

certificate exception: java.io.IOException: subject key, Could not create EC public key

To be on the safe side, it's best to test. For curves supported by OpenSSL, the test can be done by creating a keypair on each curve and attempting the signing:

  • Create the list of curves with openssl ecparam -list_curves,
  • manually remove the extra words OpenSSL puts there in the beginning,
  • and feed the resulting list of curve names to the script's stdin:
  #!/bin/bash
  # Test which OpenSSL-supported elliptic curves (read from stdin, one per line) are also supported by jarsigner.
  result="supported-curves.txt"
  source_data="data.txt"
  jar="data.jar"
  key="key.pem"
  cert="cert.pem"
  pfx="keystore.pfx"
  key_alias="foo"         # Identificator of the key in the keystore
  storepass="123456"      # jarsigner requires some

  touch $source_data
  while read curve; do
    # Generate an ECDSA private key for the selected curve:
    openssl ecparam -name $curve -genkey -out $key
    # Generate the certificate for the key; give some dummy subject:
    openssl req -new -x509 -nodes -key $key -out $cert -subj /CN=foo
    # Wrap key+cert in a PKCS12, so that jarsigner can use it:
    openssl pkcs12 -export -in $cert -inkey $key -passout pass:$storepass -out $pfx -name $key_alias
    # Create a fresh jar and attempt to sign it
    jar cf $jar $source_data
    jarsigner -keystore $pfx -storetype PKCS12 -storepass $storepass $jar $key_alias
    [ $? -eq 0 ] && echo $curve >> $result
  done
  rm $source_data $key $cert $pfx $jar

And enjoy the list in supported-curves.txt.

Summary:

  • Support of elliptic curves by jarsigner depends on jarsigner itself and on the JRE used.
  • There is no command-line option to list all supported curves.
  • For a particular system, support for curves known by OpenSSL can be easily tested.

JAR signature block file format

Summary: this post explains the content of the JAR signature block file - that is, the file META-INF/*.RSA, META-INF/*.DSA, META-INF/*.EC or SIG-* inside the JAR.

Oracle does not document it

A signed JAR file contains the following additions over a non-signed JAR:

  1. Checksums over the JAR content, stored in text files META-INF/MANIFEST.MF and META-INF/*.SF
  2. The actual cryptographic signature (created with the private key of the signer) over the checksums in a binary signature block file.

Surprisingly, the format of the latter does not seem to be documented by Oracle. The JAR file specification offers only the helpful remark that "These are binary files not intended to be interpreted by humans".

Here, the content of this "signature block file" is explained. We show how it can be created and verified with a non-Java tool: OpenSSL.

Create a sample signature block file

For our investigation, generate such a file by signing some data with jarsigner:

  • Make an RSA private key (stored unencrypted) and a corresponding self-signed certificate, and pack them in a format jarsigner understands:
openssl genrsa -out key.pem
openssl req -x509 -new -key key.pem -out cert.pem -subj '/CN=foo'
openssl pkcs12 -export -in cert.pem -inkey key.pem -out keystore.pfx -passout pass:123456 -name SEC_PAD
  • Create the data, jar it, sign the JAR, and unpack the "META-INF" directory:
echo 'Hello, world!' > data
jar cf data.jar data
jarsigner -keystore keystore.pfx -storetype PKCS12 -storepass 123456 data.jar SEC_PAD
unzip data.jar META-INF/*

The "signature block file" is META-INF/SEC_PAD.RSA.

What does this block contain

The file appears to be a DER-encoded ASN.1 PKCS#7 data structure. A DER-encoded ASN.1 file can be examined with the asn1parse subcommand of OpenSSL:

openssl asn1parse -in META-INF/SEC_PAD.RSA -inform der -i > jarsigner.txt

For more verbosity, you may use some ASN.1 decoder such as one at lapo.it.

You'll see that the two top-level components are:

  • The certificate.
  • 256-byte RSA signature.

You can extract the signature bytes from the binary data and verify them (that is, decrypt them with the public key) with openssl rsautl. That involves some "low-level" operations and brings you one more step down in understanding the file's content. A simpler "high-level" verification command, not involving manual byte manipulation, is:

openssl cms -verify -noverify -content META-INF/SEC_PAD.SF -in META-INF/SEC_PAD.RSA -inform der

This command says: "Check that the CMS structure in META-INF/SEC_PAD.RSA is really a signature of META-INF/SEC_PAD.SF; do not attempt to validate the certificate." Congratulations, we have verified the JAR signature without Java tools.
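If you prefer to do the same check from C rather than from the command line, a minimal sketch with the OpenSSL PKCS#7 API could look like the following (file names taken from the example above; error handling omitted; PKCS7_NOVERIFY skips certificate chain validation, like -noverify does):

    #include <stdio.h>
    #include <openssl/bio.h>
    #include <openssl/pkcs7.h>

    int main(void)
    {
        /* The signature block file is a DER-encoded PKCS#7 structure... */
        BIO *sig = BIO_new_file("META-INF/SEC_PAD.RSA", "rb");
        PKCS7 *p7 = d2i_PKCS7_bio(sig, NULL);

        /* ...and the signed (detached) content is the manifest signature file. */
        BIO *sf = BIO_new_file("META-INF/SEC_PAD.SF", "rb");

        /* With PKCS7_NOVERIFY the trusted store is not consulted, so NULL is passed. */
        int ok = PKCS7_verify(p7, NULL, NULL, sf, NULL, PKCS7_NOVERIFY);
        printf("Signature %s\n", ok == 1 ? "verified" : "NOT verified");
        return ok == 1 ? 0 : 1;
    }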

Creating the signature block file with OpenSSL

For this example, we created the signature block file with jarsigner. There are at least two OpenSSL commands which can produce similar structures: openssl cms and openssl smime, with the options given below:

openssl cms -sign -binary -noattr -in META-INF/SEC_PAD.SF -outform der -out openssl-cms.der -signer cert.pem -inkey key.pem -md sha256
openssl smime -sign -noattr -in META-INF/SEC_PAD.SF -outform der -out openssl-smime.der -signer cert.pem -inkey key.pem -md sha256

Let's decode the created files and compare them to what has been produced with jarsigner:

openssl asn1parse -inform der -in openssl-cms.der -i > openssl-cms.txt
openssl asn1parse -inform der -in openssl-smime.der -i > openssl-smime.txt

Testing the "DIY signature"

The underlying ASN.1 structures are, in both the cms and smime cases, very close but not identical to those made by jarsigner. As the format of the signature block file is not specified, we can only run tests to have some ground for saying that "it works". Just replace the original signature block file with our signature created by OpenSSL:

cp openssl-cms.der META-INF/SEC_PAD.RSA
zip -u data.jar META-INF/SEC_PAD.RSA
jarsigner -verify -keystore keystore.pfx -storetype PKCS12 -storepass 123456 data.jar SEC_PAD

Lucky strike: a signature produced by openssl cms is recognized by jarsigner (that is, at least "it worked for me").

Note that the data which is signed is SEC_PAD.SF, and it was itself created by jarsigner. If you are not using the latter, you will need to produce that file in some other way.

What's the use for this knowledge?

Besides better understanding your data, one can think of at least two reasons to sign JARs with non-native tools. Both are somewhat untypical, but not completely irrelevant:

  1. The signature must be produced on a system where native Java tools are not available. Such a system must have access to the private key, and security administrators may like the idea of not having bloated software like a JRE in a tightly controlled environment.

  2. The signature must be produced or verified on a system where the available tools do not support the required signature algorithm. Example reasons include compliance with regulations or compatibility with legacy systems. There are systems where testing which elliptic curves are supported by jarsigner reveals just three curves (which is not much).

Summary (again)

  • The JAR signature block file is a DER-encoded PKCS#7 structure.
  • Its exact content can be viewed with any ASN.1 decoder, e.g. with openssl asn1parse.
  • OpenSSL can verify signatures in signature block files and create almost identical structures, which have been reported to be accepted by Java tools.
