Simon Wiedemann, Ph.D.

United States
725 followers · 500+ connections

About

I find lots of joy in trying to solve very hard problems. I take a first-principled…


Experience

  • HYPOTHETIC

    Los Angeles, California, United States

  • -

    Vancouver, British Columbia, Canada

  • -

    Berlin Area, Germany

Education

  • Technische Universität Berlin

    -

    Research focus was on compact and efficient representations of Deep Neural Networks.


Patents

  • Improved concept for a representation of neural network parameters

    Issued WO2021209469A1

    Apparatus for generating a NN representation, configured to quantize an NN parameter onto a quantized value by determining a quantization parameter and a quantization value for the NN parameter so that from the quantization parameter, there is derivable a multiplier and a bit shift number. Additionally, the determining of the quantization parameter and the quantization value for the NN parameter is performed so that the quantized value of the NN parameter corresponds to a product between the quantization value and a factor, which depends on the multiplier, bit-shifted by a number of bits which depends on the bit shift number.

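The reconstruction rule described in the abstract can be sketched as follows. The mapping from the quantization parameter to a multiplier and bit-shift number (`derive_multiplier_and_shift`, its `/ 4.0` scaling, and the 8-bit shift) is an illustrative assumption, not the derivation defined in the patent:

```python
def derive_multiplier_and_shift(quantization_parameter, precision_bits=8):
    # Hypothetical derivation: split the step size implied by the
    # quantization parameter into an integer multiplier and a bit shift,
    # so that reconstruction needs only an integer multiply and a shift.
    step = 2.0 ** (quantization_parameter / 4.0)   # assumed mapping
    shift = precision_bits
    multiplier = int(round(step * (1 << shift)))
    return multiplier, shift

def quantize(w, quantization_parameter):
    # Map a real-valued NN parameter onto an integer quantization value.
    multiplier, shift = derive_multiplier_and_shift(quantization_parameter)
    step = multiplier / (1 << shift)
    return int(round(w / step))

def reconstruct(qval, quantization_parameter):
    # Quantized value = quantization value times a factor that depends on
    # the multiplier, bit-shifted by the bit shift number.
    multiplier, shift = derive_multiplier_and_shift(quantization_parameter)
    return (qval * multiplier) / (1 << shift)

q = quantize(0.7312, quantization_parameter=-8)   # -> 3
approx = reconstruct(q, quantization_parameter=-8)  # -> 0.75
```

The point of the multiplier/shift split is that a decoder can reconstruct parameters with fixed-point integer arithmetic only, avoiding floating-point multiplies.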
  • Neural network representation formats

    Issued EU WO2021064013A2

    Data stream (45) having a representation of a neural network (10) encoded thereinto, the data stream (45) comprising serialization parameter (102) indicating a coding order (104) at which neural network parameters (32), which define neuron interconnections (22, 24) of the neural network (10), are encoded into the data stream (45).

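The idea of a serialization parameter that declares the coding order of the network parameters can be sketched with a hypothetical byte layout (the one-byte header, the row-/column-major choice, and the float32 payload are assumptions for illustration, not the format in the patent):

```python
import struct

def encode(weights, coding_order):
    # Hypothetical layout: header = [coding_order, rows, cols], followed
    # by the matrix entries as little-endian float32 in the declared order
    # (0 = row-major, 1 = column-major).
    rows, cols = len(weights), len(weights[0])
    stream = bytearray([coding_order, rows, cols])
    if coding_order == 0:
        flat = [weights[r][c] for r in range(rows) for c in range(cols)]
    else:
        flat = [weights[r][c] for c in range(cols) for r in range(rows)]
    for w in flat:
        stream += struct.pack('<f', w)
    return bytes(stream)

def decode(stream):
    # Read the serialization parameter first, then place each decoded
    # entry according to the coding order it indicates.
    coding_order, rows, cols = stream[0], stream[1], stream[2]
    flat = [struct.unpack_from('<f', stream, 3 + 4 * i)[0]
            for i in range(rows * cols)]
    W = [[0.0] * cols for _ in range(rows)]
    idx = 0
    if coding_order == 0:
        for r in range(rows):
            for c in range(cols):
                W[r][c] = flat[idx]; idx += 1
    else:
        for c in range(cols):
            for r in range(rows):
                W[r][c] = flat[idx]; idx += 1
    return W
```

Signalling the order explicitly lets an encoder pick whichever traversal compresses best while any decoder can still reconstruct the interconnection weights unambiguously.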
  • Pruning and/or quantizing machine learning predictors

    Issued EU WO2020260656A1

    Pruning and/or quantizing a machine learning predictor or, in other words, a machine learning model such as a neural network is rendered more efficient if the pruning and/or quantizing is performed using relevance scores which are determined for portions of the machine learning predictor on the basis of an activation of the portions of the machine learning predictor manifesting itself in one or more inferences performed by the machine learning (ML) predictor.

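Activation-based pruning of this kind can be sketched on a toy network. The relevance proxy used here (mean absolute activation over a batch of inferences) and the keep-half threshold are assumptions for illustration; the patent does not fix a particular score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: 4 inputs -> 8 hidden units -> 3 outputs.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(3, 8))
X = rng.normal(size=(100, 4))            # batch of inference inputs

# Relevance score per hidden unit, determined from the activations that
# manifest themselves during inference (assumed proxy: mean |activation|).
hidden = np.maximum(X @ W1.T, 0.0)       # ReLU activations
relevance = np.abs(hidden).mean(axis=0)

# Prune the least relevant half of the hidden units.
keep = relevance >= np.median(relevance)
W1_pruned, W2_pruned = W1[keep], W2[:, keep]
```

Because the score is computed from observed inferences rather than from the weights alone, units that are large but rarely active can still be identified as prunable.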
  • Methods and apparatuses for compressing parameters of neural networks

    Issued EU WO2020188004A1

    An encoder for encoding weight parameters of a neural network is described. This encoder is configured to obtain a plurality of weight parameters of the neural network, to encode the weight parameters of the neural network using a context-dependent arithmetic coding, to select a context for an encoding of a weight parameter, or for an encoding of a syntax element of a number representation of the weight parameter, in dependence on one or more previously encoded weight parameters and/or in dependence on one or more previously encoded syntax elements of a number representation of one or more weight parameters, and to encode the weight parameter, or a syntax element of the weight parameter, using the selected context. Corresponding decoder, quantizer, methods and computer programs are also described.

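The context-selection side of such a scheme can be sketched as follows; the arithmetic coding engine itself is omitted, and the three-context model keyed on the previously encoded weight's significance is an illustrative assumption, not the context rule defined in the patent:

```python
from collections import defaultdict

# One adaptive bit-statistics model per context: (zero count, one count),
# initialised to (1, 1) so no probability is ever exactly 0 or 1.
counts = defaultdict(lambda: [1, 1])

def select_context(prev_weight):
    # Context depends on the previously encoded weight parameter.
    if prev_weight is None:
        return 0                 # no neighbour encoded yet
    return 1 if prev_weight == 0 else 2

def encode_significance(weights):
    # For each weight, emit the significance flag (weight != 0) together
    # with the context and the model probability an arithmetic coder
    # would be driven with, then adapt that context's statistics.
    out = []
    prev = None
    for w in weights:
        ctx = select_context(prev)
        bit = int(w != 0)
        zeros, ones = counts[ctx]
        p_one = ones / (zeros + ones)
        out.append((ctx, bit, p_one))
        counts[ctx][bit] += 1    # adapt the selected context model
        prev = w
    return out

symbols = encode_significance([0, 0, 3, 0, -2, 1])
```

Splitting the statistics by context pays off because weight streams are locally correlated: runs of zeros make context 1's model confidently predict another zero, which an arithmetic coder turns into a fraction of a bit per symbol.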
  • Concepts for distributed learning of neural networks and/or transmission of parameterization updates therefor

    Issued DE WO2019219846A1

    The present application is concerned with several aspects of improving the efficiency of distributed learning.

  • Neural network representation

    Issued DE WO2019086104A1

    Efficient neural network representations, their derivations and their processing such as their usage in performing a prediction using the neural network represented by such representation are described.


Honors & Awards

  • Best paper award

    ICML 2019 Workshop (ODML-CDNNR)

  • Nominated for best master thesis award

    TU Berlin

  • TASSEP scholarship

    TU Berlin

    Study abroad scholarship

  • Nominated for the German Academic Scholarship Foundation

    TU Berlin

    The German Academic Scholarship Foundation is Germany's largest, oldest and most prestigious scholarship foundation

  • Black belt

WAKO Federation of Kickboxing

Languages

  • English

    Full professional proficiency

  • German

    Native or bilingual proficiency

  • Spanish

    Native or bilingual proficiency

  • Catalan

    Full professional proficiency
