valine 2 days ago

>> For each LLM, we extract static, token-level embeddings from its input embedding layer (the ‘E‘ matrix). This choice aligns our analysis with the context-free nature of stimuli typical in human categorization experiments, ensuring a comparable representational basis.

They're analyzing input embedding layers, not LLMs. I'm not sure how the authors justify making claims about the inner workings of LLMs when they haven't actually computed a forward pass. The E matrix is not an LLM; it's a lookup table.

Just to highlight the ridiculousness of this research, no attention was computed! Not a single dot product between keys and queries. All of their conclusions are drawn from the output of an embedding lookup table.
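
To make the distinction concrete, here's a minimal sketch (using the Hugging Face transformers API, with GPT-2 and the word "robin" purely as placeholders) of what an embedding-table lookup amounts to versus an actual forward pass:

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")    # placeholder model
    model = AutoModel.from_pretrained("gpt2")

    ids = tok("robin", return_tensors="pt").input_ids

    # What the paper extracts: rows of the E matrix.
    # No attention, no MLPs, no layer norms -- just indexing a table.
    static_emb = model.get_input_embeddings()(ids)

    # What a forward pass would give: contextualized states after
    # every transformer block has actually run.
    with torch.no_grad():
        contextual = model(ids, output_hidden_states=True).hidden_states[-1]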

The figure correlating their alignment score with model size is particularly egregious. Model size is meaningless when you never activate any model parameters. If BERT is outperforming Qwen and Gemma, something is wrong with your methodology.

blackbear_ 1 day ago

Note that the token embeddings are also trained, so their values do give some hints about how a model organizes information.

They used token embeddings directly rather than intermediate representations because the latter depend on the specific sentence the model is processing. The human judgment data, however, was collected without any context surrounding each word, so using the token embeddings seems to be the fairest comparison.

Otherwise, what sentence(s) would you have used to compute the intermediate representations? And how would you make sure that the results aren't biased by these sentences?

navar 1 day ago

You can process a single word through a transformer and get the corresponding intermediate representations.

Though it sounds odd, there's no problem with it, and it would indeed return the model's representation of that single word without any additional context.
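
For what it's worth, a sketch of that (assuming the transformers API, with GPT-2 and "robin" as placeholders) is just a single-word forward pass with hidden states enabled:

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")    # placeholder model
    model = AutoModel.from_pretrained("gpt2")

    ids = tok("robin", return_tensors="pt").input_ids  # one word, no context

    with torch.no_grad():
        out = model(ids, output_hidden_states=True)

    # out.hidden_states is a tuple: the embedding output plus one entry per
    # transformer block, i.e. the word's representation at every depth.
    per_layer = [h[0, -1] for h in out.hidden_states]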

valine 1 day ago

Input embeddings are not always trained with the rest of the model. That’s the whole idea behind VLMs: first-layer embeddings are so interchangeable you can literally feed in the output of other models through linear projection layers.
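
A rough sketch of that projection idea, in the spirit of LLaVA-style adapters (the dimensions and tensor names here are made up for illustration):

    import torch
    import torch.nn as nn

    # Hypothetical sizes: a vision encoder producing 1024-d patch features,
    # feeding a language model whose token embeddings are 4096-d.
    vision_dim, llm_dim = 1024, 4096
    projector = nn.Linear(vision_dim, llm_dim)

    patch_features = torch.randn(1, 256, vision_dim)  # e.g. 256 image patches
    soft_tokens = projector(patch_features)           # now in the LLM's embedding space

    # soft_tokens can be concatenated with ordinary token embeddings and fed
    # to the LLM via inputs_embeds, bypassing the E-matrix lookup entirely.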

And like the other commenter said, you can absolutely feed single tokens through the model. Your point doesn’t hold up regardless: how about priming the model with “You’re a helpful assistant”, just like everyone else does?
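
Something like this (GPT-2 as a placeholder model, the prompt just an example) would give a context-conditioned representation of each stimulus word while keeping the context identical across words:

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")    # placeholder model
    model = AutoModel.from_pretrained("gpt2")

    prompt = "You're a helpful assistant. The word is:"
    word = " robin"

    n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
    ids = tok(prompt + word, return_tensors="pt").input_ids

    with torch.no_grad():
        hidden = model(ids, output_hidden_states=True).hidden_states[-1]

    # Average the final-layer states over the word's own tokens; the fixed
    # prompt supplies the same minimal context for every word.
    word_repr = hidden[0, n_prompt:].mean(dim=0)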

boroboro4 1 day ago

It’s mind-blowing that LeCun is listed as one of the authors.

I would expect model size to correlate with alignment score because model size usually correlates with hidden dimension. But the opposite can also be true: bigger models might shift more of the basic token classification logic into the transformer layers, and hence embedding alignment can go down. Regardless, this feels like pretty useless research…

danielbln 1 day ago

Leaves a bit of a bad taste, considering LeCun's famously critical stance on auto-regressive transformer LLMs.

throwawaymaths 1 day ago

The LLM is also a lookup table! But your point is correct: they should have looked at subsequent layers, which aggregate information over distance.