Research Summary
Following are the main problems I have worked on so far. Feel free to email me for a copy of the latest version of any of these papers.
1. Certificate Revocation
My research in this field started with a survey and comparative study [A14] of the various certificate revocation techniques proposed so far. I focused especially on CRLs and online revocation methods, and offered guidelines on selecting the right revocation solution for a target PKI environment.
In my next work, I proposed an alternative [A9] to Rivest's short-lived certificates. My scheme offered advantages over short-lived certificates in terms of (a) CA key security, (b) ease of delegation and outsourcing of revocation work, (c) network load, (d) CA server storage requirements, and (e) CA computational requirements. These advantages were gained at the expense of a slight increase in certificate verification cost.
Following this work, I proposed an extension [A5] to Micali's CRS. CRS was impractical in transactions where real-time or near-real-time revocation information is desired. In his recent revision of the system, Micali suggests hash chains one million links long in order to provide revocation information at most 15 seconds old. Although certainly feasible, such a setting requires half a million hash function evaluations on average for each certificate verification, which places an unacceptable burden on the verifier. I sketched an alternative solution using one-time signatures (OTS) along with hash chains, extending CRS to enable the CA to answer a pre-decided number of queries asking for real-time revocation information.
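To make the verification cost concrete, here is a minimal sketch (my own illustrative code, not Micali's implementation) of checking a released hash chain link against the tip embedded in a certificate. For an n-link chain, the verifier performs on average n/2 hash evaluations, which is the burden discussed above:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain_tip(seed: bytes, n: int) -> bytes:
    """h^n(seed): the value embedded in the certificate at issuance."""
    v = seed
    for _ in range(n):
        v = h(v)
    return v

def verify_link(link: bytes, steps: int, tip: bytes) -> bool:
    """Check a released link by hashing it `steps` times up to the tip.
    Cost is `steps` evaluations -- about n/2 on average over the chain's life."""
    v = link
    for _ in range(steps):
        v = h(v)
    return v == tip
```

With a million-link chain, `steps` averages 500,000, which is exactly the verifier-side cost the extension above aims to avoid.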
My most important contribution in this area came in my latest work [A2], in which I proposed a new certificate revocation scheme. The main design goal was to minimize the overall system cost by striking the right balance between the two cost components involved in a revocation solution: CA-to-directory communication cost (CDCC) and query answering cost (QC). This balance is something that today's two most elegant revocation solutions lack: CRS has unbeatable QC but CDCC as its bottleneck, while CRT has unbeatable CDCC but QC as its bottleneck. The basic idea is to divide the system into a number of partitions and handle each partition independently, the number of partitions being the key parameter of the scheme. At the end of each day (or time slot), a partition either expires or is renewed by the CA, depending on whether any certificate in that partition was revoked that day. Partition renewal is done by exposing a link of the hash chain associated with that partition. My performance numbers demonstrate that the system has an edge over today's major revocation systems.
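The daily renewal step can be sketched as follows. This is an illustrative toy with parameter names of my own choosing, not the paper's exact algorithm: each partition carries a hash chain whose tip is published, and renewal on day d exposes the link that hashes forward d times to the tip:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

class Partition:
    """One partition of certificates with a hash chain for daily renewal.
    Toy sketch: names and structure are mine, not the paper's."""

    def __init__(self, seed: bytes, days: int):
        self.chain = [seed]
        for _ in range(days):
            self.chain.append(h(self.chain[-1]))
        self.tip = self.chain[-1]   # published alongside the partition's certs
        self.day = 0
        self.expired = False

    def end_of_day(self, any_revocation: bool):
        """CA step: expose the next link if nothing in the partition was
        revoked that day; otherwise let the whole partition expire."""
        if any_revocation:
            self.expired = True
            return None
        self.day += 1
        return self.chain[-1 - self.day]

def verify_renewal(link: bytes, day: int, tip: bytes) -> bool:
    """A verifier hashes the day-`day` link forward `day` times to the tip."""
    v = link
    for _ in range(day):
        v = h(v)
    return v == tip
```

Note how this splits the costs: the CA-to-directory traffic per slot is one short link per unrevoked partition (CDCC), while a query answer is a single link plus a few hashes (QC).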
In this work, apart from the new scheme, I also proposed an improvement to the Crypto '98 revocation technique of Aiello et al. I relaxed their impractical assumption that all certificates in the PKI are created on the same day and that no more certificates are subsequently added.
2. Hash Chain Construction
My first work in this direction was a technique for re-initializing hash chains [S4]. Several systems based on hash chains suffer from a common limitation: hash chains have a finite number of links, which, when exhausted, require the system to be re-initialized. Further, hash chains cannot be made very long because verification and link computation costs grow with chain length (though link computation costs have decreased thanks to traversal techniques). In this paper, I proposed a new hash chain construction called Re-initializable Hash Chains (RHC). An RHC has the property that when its links are exhausted, it can be securely re-initialized in a non-repudiable manner to yield another RHC. RHCs are as efficient as regular hash chains and use one-time signature (OTS) key components as hash chain links during the re-initialization phase. Recently, an improvement to this construction was proposed by Zhao and Li (available at http://eprint.iacr.org/2005/011).
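For readers unfamiliar with the OTS building block used in the re-initialization phase, here is the textbook Lamport one-time signature in miniature (this is the standard construction, shown only as background; it is not the RHC scheme itself):

```python
import hashlib
import secrets

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def ots_keygen(bits: int = 256):
    """Secret key: a random preimage pair per message bit.
    Public key: the hashes of those preimages."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(h(a), h(b)) for a, b in sk]
    return sk, pk

def _bits(msg: bytes):
    d = hashlib.sha256(msg).digest()
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def ots_sign(sk, msg: bytes):
    """Signing reveals one preimage per bit -- usable only once."""
    return [sk[i][b] for i, b in enumerate(_bits(msg))]

def ots_verify(pk, msg: bytes, sig) -> bool:
    return all(h(sig[i]) == pk[i][b] for i, b in enumerate(_bits(msg)))
```

Because signing reveals half of the secret preimages, each key pair is single-use, which is exactly why weaving OTS components into the chain provides a non-repudiable, one-shot hand-off to a fresh chain.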
Present hash chain traversal techniques require that the intermediate hash chain links be stored secretly in trusted storage. This may be an unrealistic assumption in scenarios like Lamport's one-time password system, where no such storage is available to the chain generator. In my latest work [A3], I proposed an interesting and slightly twisted construction of hash chains called w-chains, together with a method to convert any regular hash chain traversal technique into a w-chain traversal technique. When a w-chain is used in conjunction with these converted traversal techniques, the intermediate chain links may be stored on public, non-trusted storage; this storage may even be the hash chain verifier itself. I was able to achieve provable security by replacing the hash function with a MAC function. The construction is also very useful in applications like micropayments and server-assisted signatures.
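The paper's exact w-chain construction is not reproduced here; the following sketch only illustrates the general idea of parking intermediate links on untrusted storage, with integrity protected by a MAC under a key held by the generator (the key name and storage layout are my own illustrative choices):

```python
import hashlib
import hmac

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

GEN_KEY = b"chain-generator-secret"   # hypothetical key known only to the generator

def build_links(seed: bytes, n: int) -> list:
    links = [seed]
    for _ in range(n):
        links.append(h(links[-1]))
    return links

def park(links: list) -> list:
    """Tag each intermediate link so that public, non-trusted storage
    (even the verifier itself) cannot alter it undetected."""
    return [(v, hmac.new(GEN_KEY, v, hashlib.sha256).digest()) for v in links]

def retrieve(parked: list, i: int) -> bytes:
    """Generator fetches a parked link back and checks its tag."""
    v, tag = parked[i]
    if not hmac.compare_digest(tag, hmac.new(GEN_KEY, v, hashlib.sha256).digest()):
        raise ValueError("stored link was tampered with")
    return v
```

The point is that secrecy of the intermediate values is traded for integrity checking, so the generator needs only its short key, not trusted bulk storage.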
3. Securing the Address Resolution Protocol (ARP)
ARP cache poisoning is a long-standing problem lacking a satisfactory solution. The key lies in recognizing the need for serious cryptography to solve this problem. My research in this field started with a study of the solutions proposed so far. My first work was breaking and repairing the recently proposed Secure Address Resolution Protocol (S-ARP). In S-ARP, a certificate is issued to each machine on the network by a trusted machine called the AKD. Each machine then simply signs each ARP reply with a digital signature, thus authenticating the MAC address being supplied. Although simple, S-ARP was found to have serious errors in its key management, which led to a complete compromise of the protocol.
In my next work, I proposed a new solution aiming to achieve security and efficiency at the same time. The approach taken by S-ARP was inefficient due to the overhead of computing a signature for every ARP reply. Building on the fact that an <IP, MAC> mapping rarely changes, I proposed a new cryptographic technique combining digital signatures and hash chains. Each machine signs its <IP, MAC> mapping along with a hash chain tip and the current time. The links of the hash chain are released periodically by the host (every 20 minutes, since 20 minutes is the cache entry validity time). An ARP reply then consists of the signature along with the signed data and the latest released hash chain link. The signature ensures the authenticity of the mapping, while the hash chain link ensures its recency. This means that, as long as the mapping does not change, a single signature suffices for n*20 minutes, where n is the number of links in the hash chain; n may be made very large using traversal techniques.
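A minimal sketch of this sign-once, refresh-with-links pattern (my own toy code; to keep it stdlib-only, an HMAC stands in for the real digital signature, as noted in the comments):

```python
import hashlib
import hmac

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

SIGN_KEY = b"host-signing-key"   # stand-in: a real host uses a signature key pair

def sign(data: bytes) -> bytes:
    # HMAC as a stand-in for a digital signature, for a self-contained sketch.
    return hmac.new(SIGN_KEY, data, hashlib.sha256).digest()

def make_binding(ip: str, mac: str, n_links: int):
    """Sign the <IP, MAC> mapping together with the chain tip, once."""
    seed = h(b"chain-seed")
    tip = seed
    for _ in range(n_links):
        tip = h(tip)
    data = f"{ip}|{mac}|{n_links}".encode() + tip
    return data, sign(data), seed

def link_for_interval(seed: bytes, n_links: int, k: int) -> bytes:
    """Link released in the k-th 20-minute interval: h^(n_links - k)(seed)."""
    v = seed
    for _ in range(n_links - k):
        v = h(v)
    return v

def verify_reply(data: bytes, sig: bytes, link: bytes, k: int, n_links: int) -> bool:
    if not hmac.compare_digest(sig, sign(data)):
        return False                 # mapping not authentic
    tip = data[-32:]
    v = link
    for _ in range(k):               # hash the released link forward to the tip
        v = h(v)
    return v == tip                  # link proves recency of interval k
```

One signature covers the mapping for the life of the chain; each 20-minute refresh costs the host nothing but revealing the next link.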
Following this work, I tried to remove public key cryptography altogether [A4]. The basic motivation is that ARP is just a low-level system component doing 'no user work' except ensuring correct functioning; hence, its resource usage should be minimized. Hash functions or symmetric key cryptography are far more efficient, especially since many microprocessors now come with DES/AES implemented in hardware. The basic idea is to create a Merkle tree having the <IP, MAC> mappings of the different machines on the network as its leaves. Each machine stores the tree authentication path for the leaf representing its own mapping, as well as the tree root. An ARP reply consists of the mapping along with its authentication path. The task of updating the Merkle tree (to change an existing mapping or add a new machine to the network) and communicating changes is managed by a trusted node on the network. Two different types of broadcast authentication protocols are employed simultaneously to ensure that a machine receives the changes even if it was down at the time the changes were made.
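The core mechanics (tree over mappings, per-machine authentication path, verification against the stored root) can be sketched as follows; the mappings and the power-of-two leaf count are illustrative assumptions of mine:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    """Return all levels of a Merkle tree (leaf count a power of two)."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def auth_path(levels, index):
    """Siblings on the path from leaf `index` to the root."""
    path = []
    for lvl in levels[:-1]:
        path.append(lvl[index ^ 1])
        index //= 2
    return path

def verify(leaf, index, path, root):
    """Recompute the root from a leaf and its authentication path."""
    v = leaf
    for sib in path:
        v = h(v + sib) if index % 2 == 0 else h(sib + v)
        index //= 2
    return v == root

# Hypothetical <IP, MAC> mappings for an 8-machine network.
mappings = [f"10.0.0.{i}|aa:bb:cc:00:00:{i:02x}".encode() for i in range(8)]
leaves = [h(m) for m in mappings]
levels = build_tree(leaves)
root = levels[-1][0]   # each machine stores this root
```

An ARP reply is then just the mapping plus log2(n) sibling hashes, and verification costs log2(n) hash evaluations with no public key operations at all.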
4. Non-Trapdoor Hash Function aided or based Signature Schemes
My first work in this direction was the construction of a provably secure bulk signature generation scheme [S1]. My basic motivation was the many heavily used commercial applications that require a party (possibly a server) to sign messages at a very high rate (possibly to answer user queries). A first example is the recent Yahoo! DomainKeys technology, in which the email server digitally signs all outgoing emails on behalf of the users with its own private key. DomainKeys is already deployed by Google's Gmail, and Yahoo! is expected to follow soon. Other examples include payment systems, e-banking/e-commerce, the signing of routing messages, and OCSP. The proposed signature scheme is a simple Merkle tree based scheme in which a set of messages is signed with just a single signature generation and a number of hash function computations, significantly reducing the computational requirements of the system. With this technique, a system that could earlier handle only, say, 20 signature generations per second can handle approximately 50,000 signature generations per second. The downside is a slight increase in signature length and response time. Although simple, the scheme improves the signing rate dramatically and has wide commercial applicability.
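The batching idea can be sketched as follows (illustrative only; the paper's exact scheme may differ, and the 'signature' is an HMAC stand-in to keep the sketch self-contained): hash the batch of messages into a Merkle tree, sign the root once, and answer each message with the root signature plus that message's authentication path:

```python
import hashlib
import hmac

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

SIGN_KEY = b"server-key"   # stand-in for the server's real signing key

def sign(data: bytes) -> bytes:
    return hmac.new(SIGN_KEY, data, hashlib.sha256).digest()

def bulk_sign(messages):
    """One signature generation covers the whole batch (power-of-two size)."""
    level = [h(m) for m in messages]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    root_sig = sign(level[0])
    out = []
    for idx in range(len(messages)):
        path, i = [], idx
        for lvl in levels[:-1]:
            path.append(lvl[i ^ 1])
            i //= 2
        out.append((messages[idx], path, root_sig))
    return out

def verify_bulk(message, path, root_sig, index):
    """Rebuild the root from the message's path, then check the one signature."""
    v = h(message)
    for sib in path:
        v = h(v + sib) if index % 2 == 0 else h(sib + v)
        index //= 2
    return hmac.compare_digest(root_sig, sign(v))
```

For a batch of k messages, the server pays one signature plus about 2k hashes, which is where the jump from tens to tens of thousands of signatures per second comes from; each response grows only by the log2(k)-hash path.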
Presently, I am working towards the construction of a full-fledged (i.e., with no limit on the number of signatures that can be generated) provably secure signature scheme using (non-trapdoor) hash functions only [U1]. Such a scheme was designed by Merkle in 1987 using infinitely growing one-time signature trees. However, that scheme was of theoretical interest only due to its high computation, storage, and signature-size costs. My approach is based on an infinitely growing set of interconnected Merkle trees: as more and more signatures are generated, new Merkle trees are simultaneously constructed and added to the set. I provide a computation-storage tradeoff with which it is possible to compute a signature with as few as 300 hash function evaluations, all of which may be done offline. Apart from being efficient, such signatures do not require any number theoretic assumptions and are secure even against quantum computers.
5. Authentication Protocols based on Hash Functions
One of the problems I considered is constructing an authentication protocol using hash functions only that is secure against both active and passive attacks as well as server (or password file) compromise. My first work in this direction [A12] was an improvement to Lamport's one-time password scheme. My protocol was more efficient and practical than Lamport's, although it was still vulnerable to active attacks. The protocol is especially suitable for mobile devices.
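For reference, the baseline being improved works as follows (the textbook Lamport one-time password scheme, in my own toy code): the server stores h^n(password), and the i-th login reveals h^(n-i)(password), which the server hashes once and compares:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def hash_iter(x: bytes, k: int) -> bytes:
    """Apply h k times."""
    for _ in range(k):
        x = h(x)
    return x

class OTPServer:
    """Lamport OTP verifier: stores only the current chain head."""

    def __init__(self, password: bytes, n: int):
        self.stored = hash_iter(password, n)   # server keeps h^n(password)
        self.n = n

    def login(self, otp: bytes) -> bool:
        if h(otp) == self.stored:
            self.stored = otp                  # next login goes one step deeper
            self.n -= 1
            return True
        return False
```

A passive eavesdropper learns only values whose preimages it cannot compute, but as the summary notes, the scheme is finite (n logins) and offers no protection against active attacks.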
Improving upon my previous work, I designed a new protocol [S2]. By deriving security from previous protocol runs, I was able to resist active attacks as well. The protocol works on the principle of induction: if the previous steps were secure, then so is the current step. The protocol still had a few weaknesses; for example, it was vulnerable to offline dictionary attacks (which are unavoidable in a non-PKC setting).
Recently, I proposed an improvement to Mitchell's authentication protocol in use at MobileVCE. Mitchell's protocol is vulnerable to a serious DoS attack that can result in a permanent loss of synchronization between the user and the host. I used one-time signature chains coupled with an offline transcript method to support re-synchronization, all without any loss of efficiency.
6. Miscellaneous
Some other problems I have worked on include: designing an elegant protocol to counter online dictionary attacks, constructing a provably secure server-assisted signature scheme using OTS chains, and using origin server authentication to fight spam.
Currently, I am exploring the area of DNS cache poisoning. The concept of Merkle tree authentication fits this problem well: a mapping of a domain name to an IP address can be authenticated given the Merkle tree root and the authentication path nodes of the mapping. Tree creation will be decentralized, with each domain (or sub-domain) managing its own sub-tree and communicating just the sub-tree root to the next higher level. In this way, a master Merkle tree is created whose root is made available to browsers, making the process of mapping verification very lightweight.
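The two-level idea can be sketched as follows (an illustrative toy with made-up domains and mappings, not a finished design): each domain builds a sub-tree over its <name, IP> records and reports only its sub-tree root upward; the master tree is built over those roots, and a record verifies against the single master root via the concatenation of the two authentication paths:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def tree(leaves):
    """Merkle levels over the leaves (leaf count a power of two)."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def path(levels, i):
    """Authentication path: (sibling, my-side) pairs; side 0 means left child."""
    out = []
    for lvl in levels[:-1]:
        out.append((lvl[i ^ 1], i % 2))
        i //= 2
    return out

def fold(leaf, p):
    """Recompute a root from a leaf and a path."""
    v = leaf
    for sib, side in p:
        v = h(v + sib) if side == 0 else h(sib + v)
    return v

# Hypothetical domains, each managing its own sub-tree of <name, IP> records.
domains = {
    "example.com": [b"www|93.184.216.34", b"mail|93.184.216.35",
                    b"ftp|93.184.216.36", b"ns|93.184.216.37"],
    "example.org": [b"www|93.184.217.1", b"mail|93.184.217.2",
                    b"ftp|93.184.217.3", b"ns|93.184.217.4"],
}
subtrees = {d: tree([h(m) for m in ms]) for d, ms in domains.items()}
master = tree([t[-1][0] for t in subtrees.values()])
root = master[-1][0]   # the one value made available to browsers

# Verify example.com's 'mail' record: sub-tree path, then master-tree path.
sub_p = path(subtrees["example.com"], 1)
top_p = path(master, 0)   # example.com's root is leaf 0 of the master tree
ok = fold(fold(h(b"mail|93.184.216.35"), sub_p), top_p) == root
```

Each level manages only its own sub-tree, yet a browser holding the single master root can verify any record with a handful of hash evaluations.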
References
A1 to A15: See the list of accepted papers
S1 to S4: See the list of submitted papers
U1 to U2: See the list of papers under preparation
Complete list of papers available here.