The paper introducing the transformer architecture; the principal focus of the epistemic evaluation.
A paper describing the importance of translation tasks in the evaluation of NLP models.
Article providing statistics on ChatGPT's rate of user growth.
Paper showing how large multimodal models exhibit emergent properties in modalities beyond those they originated in.
An article describing the process of fine-tuning.
The set of epistemic values used for the overall review.
Technical report published by OpenAI on GPT-4, providing some (but nowhere near enough) information on the model.
Article describing an instance of ChatGPT hallucinating a quote.
Paper describing an NLP model that was the state of the art before the transformer. Notably, it gives training times.
Paper describing BERT, a pre-trained transformer (evaluated via fine-tuning). Provides metrics and training times.
Paper discussing the legality and ethics of web scraping, cited mainly to show that there are potential risks associated with it.
Article quoting Sam Altman on the cost of training GPT-4.
Paper describing the LSTM architecture, one of the leading NLP architectures before the advent of the transformer.
Paper discussing the vanishing gradient problem, one solution to which led to the LSTM and GRU, and therefore indirectly to the transformer.
Paper describing backpropagation, the update rule used to train neural networks, including non-feed-forward ones like RNNs (LSTMs/GRUs).
Article describing how freelancers on Fiverr and similar sites have experienced a drought of work since ChatGPT's release.
Article describing how ChatGPT was a key issue in the WGA strike.
Article describing some benefits of ChatGPT.
Article describing the restructuring of OpenAI from a not-for-profit into a for-profit and the consequences of that change.