Academic Profiles
Google Scholar [link]
ORCID [link]
dblp [link]
Academic Journey: Teaching & Research Positions, Research Positions, Education, Academic Services

Publications as Lead Author: Random Sampling, Data Summarisation, Sparse Prefix Sums, Multiobjective Shortest Path

Publications as Co-Author: String Processing, Join Sampling, Image Similarity Search

Michael Shekelyan
Dr. Michael Shekelyan, Computer Science Researcher


I was born in Moscow, but I grew up in Hamburg and later moved to Munich, where I studied and worked in Prof. Matthias Schubert and Prof. Hans-Peter Kriegel's database group at the University of Munich. I did my PhD in Italy under the supervision of Prof. Johann Gamper (Libera Università di Bolzano) and then went to the UK for postdoctoral research under Prof. Graham Cormode (University of Warwick) and Dr. Grigorios Loukides (King's College London), followed by an appointment as Lecturer in Computer Science at Queen Mary University of London.


My research focuses primarily on algorithms, data structures and summaries for managing very large or sensitive data. The overall goal is to build a full data pipeline that feeds end users with easily interpretable facts that provide novel insights and aid decision-making processes. Reducing data complexity through sampling or summarisation plays a crucial role in supporting exploratory interactions with the data, which involve a lot of probing, while still providing an intuitive approximation model of the data. Sensitive data calls for privacy-preserving techniques such as differential privacy and federated learning to facilitate data sharing between organisations whilst minimising risks to the privacy of patients, users, customers and employees whose personal information is collected.

Differential Privacy

How to select the top items based on sensitive scores in a privacy-preserving manner.
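One standard way to do this is the exponential mechanism, which a one-shot Gumbel-noise trick implements neatly; the sketch below is my own illustration of that general technique, not necessarily the method from the paper, and the function name and default sensitivity are assumptions.

```python
import math
import random

def private_top_k(scores, k, epsilon, sensitivity=1.0):
    """Select k indices with high scores under differential privacy.

    Adds independent Gumbel noise to each (scaled) score and keeps the k
    largest noisy values.  For k = 1 this samples exactly from the
    exponential mechanism: index i wins with probability proportional to
    exp(epsilon * scores[i] / (2 * sensitivity)).
    """
    noisy = []
    for i, s in enumerate(scores):
        # Standard Gumbel noise, generated as -log(Exp(1));
        # the max() guards against a zero draw from expovariate.
        g = -math.log(max(random.expovariate(1.0), 1e-300))
        noisy.append((epsilon * s / (2.0 * sensitivity) + g, i))
    noisy.sort(reverse=True)
    return [i for _, i in noisy[:k]]
```

For k > 1 this corresponds to repeatedly "peeling" winners with the exponential mechanism, so the privacy budget has to be accounted for across all k selections.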


How to directly jump along the selected positions of a simple random sample while storing only a handful of values (Python code for a sampling iterator).
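A sketch of how such an iterator can work; I use Vitter's Algorithm A here as an illustrative stand-in, which computes the gap to the next selected position directly, so only a couple of counters are kept between yields.

```python
import random

def sample_positions(n, N):
    """Yield, in increasing order, the 0-based positions of a simple
    random sample of n out of N items.  The generator jumps straight
    from one selected position to the next, keeping only a few counters
    in between (Vitter's Algorithm A)."""
    pos = -1
    while n >= 2:
        v = random.random()
        skip = 0
        top = N - n          # unselected items still ahead
        quot = top / N       # probability that the next item is skipped
        while quot > v:
            skip += 1
            top -= 1
            N -= 1
            quot = quot * top / N
        pos += skip + 1      # jump over `skip` items, select the next one
        yield pos
        N -= 1
        n -= 1
    if n == 1:
        skip = int(N * random.random())
        yield pos + skip + 1
```

Because only the skip length is drawn, the iterator needs O(1) memory regardless of how large the sample or the population is.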
How to collect a (weighted) random sample over a huge table that is only available as a set of smaller linked tables that need to be joined together, requiring just one pass over the most troublesome table. Shany came up with the really cool idea of posing join sampling via probabilistic graphical models.
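The graphical-model formulation is more general, but the core idea for two tables can be sketched as follows (the function name and table layout are my own illustration): weight each tuple of one table by how many join partners it has, sample a tuple proportionally to that weight, then pick one of its partners uniformly. Each join result then comes up with equal probability, without ever materialising the join.

```python
import random
from collections import defaultdict

def sample_join(R, S, k):
    """Draw k uniform random rows from the join of R and S on their
    first attribute, without materialising the join."""
    partners = defaultdict(list)
    for s in S:
        partners[s[0]].append(s)
    # Weight each r in R by its number of join partners in S.
    weighted = [(r, len(partners[r[0]])) for r in R if partners[r[0]]]
    if not weighted:
        return []
    rows, weights = zip(*weighted)
    out = []
    for _ in range(k):
        r = random.choices(rows, weights=weights)[0]  # pick r w.p. ~ weight
        s = random.choice(partners[r[0]])             # then a partner uniformly
        out.append(r + s[1:])  # joined row: r's attributes + s's payload
    return out
```

A join row (r, s) is drawn with probability (w_r / W) * (1 / w_r) = 1 / W, i.e. uniformly over the join, where W is the join size.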

Multidimensional Data Summaries

How to build tiny data models that empirically tend to be good at approximating the number of points in a rectangular range (DigitHist summary of spatial data, zoomed in on the UK and Germany).

How to build compact data models that are theoretically guaranteed to be good at approximating the number of points in a rectangular range (not just asymptotically!).

How to approximate arbitrary rectangles with a few pre-selected rectangles.
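As a toy illustration of the histogram idea behind such summaries (a plain equi-width grid, far simpler than DigitHist itself): bucket counts plus a uniformity assumption inside each bucket give a cheap estimate of how many points fall in a query rectangle.

```python
class GridHistogram:
    """Equi-width 2-D histogram over the unit square.  estimate() returns
    the expected number of points in an axis-aligned query rectangle,
    assuming points are spread uniformly inside each bucket."""

    def __init__(self, points, bins):
        self.bins = bins
        self.counts = [[0] * bins for _ in range(bins)]
        for x, y in points:
            i = min(int(x * bins), bins - 1)
            j = min(int(y * bins), bins - 1)
            self.counts[i][j] += 1

    def estimate(self, x1, y1, x2, y2):
        w = 1.0 / self.bins
        total = 0.0
        for i in range(self.bins):
            for j in range(self.bins):
                if not self.counts[i][j]:
                    continue
                # fractional overlap of the query with bucket (i, j)
                ox = max(0.0, min(x2, (i + 1) * w) - max(x1, i * w))
                oy = max(0.0, min(y2, (j + 1) * w) - max(y1, j * w))
                total += self.counts[i][j] * (ox / w) * (oy / w)
        return total
```

The summaries in the papers get much better accuracy per byte by adapting bucket boundaries to the data; the uniform grid only shows where the estimation error comes from.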

Query Processing

How to compute sums over sub-tables of a very large table of numbers, most of which are equal to zero.

How to find all paths between two network nodes that could be the best for some user preference
(optimality for some linear scalarization).
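For the sparse-table sums above, a minimal 1-D sketch (the actual data structure is more involved): store only the nonzero positions together with a running total, and answer any range sum with two binary searches.

```python
import bisect

class SparsePrefixSums:
    """Range sums over a huge, mostly-zero array, storing only the
    nonzero entries.  Space is O(#nonzeros), queries O(log #nonzeros)."""

    def __init__(self, nonzeros):
        # nonzeros: iterable of (position, value) pairs with value != 0
        items = sorted(nonzeros)
        self.pos = [p for p, _ in items]
        self.prefix = [0]
        for _, v in items:
            self.prefix.append(self.prefix[-1] + v)

    def range_sum(self, lo, hi):
        """Sum of all entries with lo <= position < hi."""
        i = bisect.bisect_left(self.pos, lo)
        j = bisect.bisect_left(self.pos, hi)
        return self.prefix[j] - self.prefix[i]
```

The same idea lifts to sub-tables of a 2-D table by storing prefix sums along each nonzero row or column, which is where the engineering in the paper comes in.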
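For the multiobjective paths above, the scalarization building block can be sketched directly (my own minimal version, not the paper's algorithm): fixing a preference weight λ collapses the two cost criteria into one, and an ordinary Dijkstra search then finds the path that is optimal for that particular linear scalarization; sweeping λ over [0, 1] recovers the paths on the convex hull of the Pareto front.

```python
import heapq

def scalarized_shortest_path(graph, source, target, lam):
    """Dijkstra on edge cost lam*c1 + (1-lam)*c2, where graph maps a node
    to a list of (neighbour, c1, c2) triples.  Returns (cost, path) for
    the path optimal under this particular linear scalarization."""
    queue = [(0.0, source, [source])]
    done = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nxt, c1, c2 in graph.get(node, []):
            if nxt not in done:
                heapq.heappush(queue, (cost + lam * c1 + (1 - lam) * c2,
                                       nxt, path + [nxt]))
    return float("inf"), []
```

Note that linear scalarization cannot reach Pareto-optimal paths lying inside the convex hull, which is exactly why the full multiobjective problem is harder than repeated Dijkstra runs.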

Websites

How do we turn computer "science" into computer science? [link]
How do we fix peer review? [link]
How do we get fewer papers with more quality? [link]

London Nightvoucher Project

Currently this is just an idea born out of my own experiences living in London. I am still learning about the intricacies involved and the potential stumbling blocks ahead, but let me know if you are in any way interested in making it easier to give dedicated donations towards accommodation for people sleeping rough. More details can be found on the project website [].

Note: The views and opinions expressed on this site are those of the authors and do not necessarily reflect the official policy or position of their employers.