Abstract: MPI has long been considered the de facto standard for parallel programming. One of its primary strengths is its continuously evolving nature, which allows it to absorb and incorporate best practices in parallel computing in a standard, portable form. The MPI Forum has recently announced the MPI-3 standard and is working on the MPI-4 standard, extending traditional message passing with more dynamic, one-sided, and fault-tolerant communication capabilities. Nevertheless, given the disruptive architectural trends toward exascale computing, there is room for more. In this talk, I’ll first describe some of the capabilities that have been added in the recent MPI-3 standard and those being considered for the upcoming MPI-4 standard. Next, I’ll describe research efforts to extend MPI to massively multithreaded and heterogeneous environments for highly dynamic and irregular applications.