Publications

2022

October

UL2-tiny-nl6 for Finnish

Finnish-NLP

UL2-small-nl16 for Finnish

Finnish-NLP

UL2-base-nl36 for Finnish

Finnish-NLP

ArabicT5-Large

Alrowili & Vijay-Shanker

Smol Imagen

Jenks

Offline RL With Realistic Datasets: Heteroskedasticity and Support Constraints

Singh et al.

Learning Probabilistic Models from Generator Latent Spaces with Hat EBM

Hill et al.

Pruning's Effect on Generalization Through the Lens of Training and Regularization

Jin et al.

ESB: A Benchmark For Multi-Domain End-to-End Speech Recognition

Gandhi, von Platen & Rush

Legal-Tech Open Diaries: Lesson learned on how to develop and deploy light-weight models in the era of humongous Language Models

Maroudas et al.

Instruction-Following Agents with Jointly Pre-Trained Vision-Language Models

Liu et al.

MetaFormer Baselines for Vision

Yu et al.

Do Language Models Understand Measurements?

Park, Ryu & Choi

Bioberturk: Exploring Turkish Biomedical Language Model Development Strategies in Low Resource Setting

Türkmen et al.

A Comprehensive Analysis of Subword Tokenizers for Morphologically Rich Languages

Erkaya

Optimizing Hierarchical Image VAEs for Sample Quality

Luhman & Luhman

MTet: Multi-domain Translation for English and Vietnamese

Ngo et al.

Integrative dissection of gene regulatory elements at base resolution

Chen et al.

EleutherAI: Going Beyond “Open Science” to “Science in the Open”

Phang et al.

IndoLib: A Natural Language Processing Toolkit for Low-Resource South Asian Languages

Timalsina

Pre-Training for Robots: Offline RL Enables Learning New Tasks from a Handful of Trials

Kumar et al.

ConserWeightive Behavioral Cloning for Reliable Offline Reinforcement Learning

Nguyen, Zheng & Grover

An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification

Chalkidis et al.

Population-Based Reinforcement Learning for Combinatorial Optimization

Grinsztajn, Furelos-Blanco & Barrett

Temporally Consistent Video Transformer for Long-Term Video Prediction

Yan et al.

2021

September

Revisiting transposed convolutions for interpreting raw waveform sound event recognition CNNs by sonification

Yadav & Foster

Icelandic ConvBERT-Small

Daðason

Training on Test Data with Bayesian Adaptation for Covariate Shift

Zhou & Levine

O-JMeSH: creating a bilingual English-Japanese controlled vocabulary of MeSH UIDs through machine translation and mutual information

Soares et al.

JAX vs PyTorch: A simple transformer benchmark

Nolan

The Challenge of Appearance-Free Object Tracking with Feedforward Neural Networks

Malik et al.

AraT5: Text-to-Text Transformers for Arabic Language Understanding and Generation

Nagoudi, Elmadany & Abdul-Mageed

Clustering Monolingual Vocabularies to Improve Cross-Lingual Generalization

Bassani

Pretrained Neural Models for Turkish Text Classification

Okur & Sertbaş

Augmenting BERT-style Models with Predictive Coding to Improve Discourse-level Representations

Araujo et al.

BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation

Xu, Van Durme & Murray

ReasonBERT: Pre-trained to Reason with Distant Supervision

Deng et al.

gpt-c

Grankin

ChessCoach

Butner

Performance of chemical structure string representations for chemical image recognition using transformers

Rajan, Zielesny & Steinbeck

TUNiB-Electra

Kim et al.

An Approach to Extractive Bangla Question Answering Based On BERT-Bangla And BQuAD

Saha et al.

Using TPUs on the Scale of Dozens of GPUs for Just Tens of Thousands of Won a Month with TRC (TRC로 월 몇만원에 GPU 수십개급의.. TPU 사용 가능)

Lee

Characterizing Possible Failure Modes in Physics-Informed Neural Networks

Krishnapriyan et al.

An Empirical Exploration in Quality Filtering of Text Data

Gao

May

ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning

Yao et al.

Wav2Vec2

Gupta

Flexible Architectures for Image Synthesis

Jain

EfficientNet JAX - Flax Linen and Objax

Wightman

TensorFlow Datasets IO (tfdsio)

Nguyen

Scientific Claim Verification with VERT5ERINI

Pradeep et al.

Detecting Anatomical and Functional Connectivity Relations in Biomedical Literature via Language Representation Models

Ozyurt et al.

BioELECTRA: Pretrained Biomedical text Encoder using Discriminators

Kanakarajan, Kundumani & Sankarasubbu

BioM-Transformers: Building Large Biomedical Language Models with BERT, ALBERT and ELECTRA

Alrowili & Vijay-Shanker

Stress Test Evaluation of Biomedical Word Embeddings

Araujo et al.

Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level

Zhong et al.

How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering

Jiang et al.

CoolMomentum: a method for stochastic optimization by Langevin dynamics with simulated annealing

Borysenko & Byshkin

Implementing a TensorFlow 2-Based Seq2Seq Model with Training and Serving Code (Tensorflow2 기반 Seq2Seq 모델, 학습, 서빙 코드 구현)

Park

DeepDarts: Modeling Keypoints as Objects for Automatic Scorekeeping in Darts using a Single Camera

McNally et al.

KLUE: Korean Language Understanding Evaluation

Park et al.

How Deep is your Learning: the DL-HARD Annotated Deep Learning Dataset

Mackie, Dalton & Yates

hebrew-gpt_neo

Adler

Unbiased Monte Carlo Cluster Updates with Autoregressive Neural Networks

Wu et al.

April

Contextualized Query Embeddings for Conversational Search

Lin, Yang & Lin

Mesh Transformer JAX

Wang

DECIMER 1.0: Deep Learning for Chemical Image Recognition using Transformers

Rajan, Zielesny & Steinbeck

Clinical BERT Models Trained on Pseudo Re-identified MIMIC-III Notes

Lehman et al.

Operationalizing a National Digital Library: The Case for a Norwegian Transformer Model

Kummervold et al.

Categorising Vaccine Confidence with Transformer-Based Machine Learning Model: The Nuances of Vaccine Sentiment on Twitter

Kummervold et al.

City-Scale Simulation Of Covid-19 Pandemic & Intervention Policies Using Agent-Based Modelling

Suryawanshi et al.

CodeTrans: Towards Cracking the Language of Silicone's Code Through Self-Supervised Deep Learning and High Performance Computing

Elnaggar et al.

Arabic Compact Language Modelling for Resource Limited Devices

Alyafeai & Ahmad

Igor Ivanov: Harnessing Machine Learning Skills to Reduce Damages from Tropical Storms

Radiant Earth Foundation

Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons

Laschowski et al.

InAugment: Improving Classifiers via Internal Augmentation

Arar, Shamir & Bermano

IndT5: A Text-to-Text Transformer for 10 Indigenous Languages

Nagoudi et al.

Samanantar: The Largest Publicly Available Parallel Corpora Collection for 11 Indic Languages

Ramesh et al.

Self-Supervised Representation Learning with Relative Predictive Coding

Tsai et al.

Virtual Sensing and Sensors Selection for Efficient Temperature Monitoring in Indoor Environments

Brunello et al.

2019

February

Diagnose and Explain

d'Almeida

Don't see your TRC-supported work here?

Please let us know about it by filling out this short form.