Spleenlab GmbH reposted this
Impressive how #LSTMs have evolved ... Currently, we are diving deeper into #xLSTM and #LSTM models by NXAI and Sepp Hochreiter as a potential replacement for #Transformer-based approaches ... Why?
1. #xLSTM may achieve much better safety thanks to its longer and variable memory compared to transformers ... we talk about #TemporalConsistency ⏲ ⏳ ⌛
2. #xLSTM may be much better suited for #embeddedDeployment, potentially faster, and may increase safety (talking about the basics: #SafetySWEngineering #ASIL) 💾 ⌨
3. Research is also fun and our intrinsic motivation at Spleenlab GmbH 🎉
Our Research Engineer Victor Kallenbach tried out the #Vision #xLSTM as the backbone for our #DepthFoundationModel and compared it to a #SOTA #VisionTransformer. Both models were trained in exactly the same way. The results are quite promising: at first glance both models perform similarly (#RMSE drop and even qualitative results). Please see the attached results as a first shot ... For me this is very promising ... stay tuned, we will update you soon.
#References:
https://lnkd.in/dVcqvDrU (Vision xLSTM)
https://lnkd.in/dty4_q8g (Vision Transformer DeiT)
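For readers curious what the #RMSE comparison between the two backbones amounts to: RMSE is simply the root-mean-square error between predicted and ground-truth depth maps, computed identically for both models. A minimal NumPy sketch with made-up depth values (the function name and the toy arrays are illustrative, not Spleenlab's actual evaluation code):

```python
import numpy as np

def depth_rmse(pred, gt):
    """Root-mean-square error between a predicted and a ground-truth depth map."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

# Hypothetical depth maps (in metres) for one scene, just to show the metric.
gt = np.array([[1.0, 2.0], [3.0, 4.0]])
pred_backbone_a = np.array([[1.1, 2.0], [2.9, 4.2]])  # e.g. Vision xLSTM
pred_backbone_b = np.array([[0.9, 2.1], [3.0, 4.1]])  # e.g. ViT/DeiT

print(depth_rmse(pred_backbone_a, gt))
print(depth_rmse(pred_backbone_b, gt))
```

Comparing the two backbones then just means reporting this single number (lower is better) over the same test set for each model.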
Speaking of safety: I Car Laws http://www.inf.fu-berlin.de/inst/ag-ki/rojas_home/documents/tutorials/I-Car-Laws.pdf
Very nice work. Happy to see LSTMs used this way.
Great work. Have you guys tested xLSTM for real-time video stream classification?
Great steps! Looking forward to seeing more
Interesting! But I think Transformers give better options when it comes to enhancing transparency and explanation of predictions (#XAI). Specifically, the attention weights offer some degree of inherent transparency, which could be further probed through methods such as #Perturbation and #Attribution.
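To illustrate the commenter's point about inherent transparency: in a transformer, the attention matrix softmax(QK^T/√d) is a row-wise probability distribution over input tokens, so it can be read off directly as a rough saliency signal. A toy NumPy sketch with random Q/K (purely illustrative, not tied to any specific model):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax(Q K^T / sqrt(d))."""
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d))

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens (e.g. image patches), embedding dim 8
K = rng.normal(size=(4, 8))

A = attention_weights(Q, K)
print(A.shape)        # (4, 4): one distribution over tokens per query token
print(A.sum(axis=1))  # each row sums to 1 -- interpretable as "where token i looks"
```

An xLSTM backbone has no such per-token attention map, which is why post-hoc methods like perturbation or attribution become the main explainability route there.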