ALUUSDT
ALU / Tether USD
crypto Composite

Real-time
Jul 9, 2025 7:32:46 AM EDT
Last: 0.006216 USDT   Change: -3.866% (-0.000250)
Volume: 91,554,051 ALU (569,607 USDT)
Bid: 0.006192   Ask: 0.006239   Spread: 0.000047
Composite: 0.006216   Huobi: 0.006216
ALU Reddit Mentions
We have sentiment values and mention counts going back to 2017. The complete data set is available via the API.
Take me to the API
ALU Specific Mentions
As of Jul 9, 2025 7:31:43 AM EDT (1 min. ago)
Includes all comments and posts. Mentions per user per ticker capped at one per hour.
82 days ago • u/LowBetaBeaver • r/algotrading • llms_for_trading • C
I've never seen an idea from an LLM that made sense and wasn't also incredibly overfit. And if you're using LLMs to do everything except idea generation, then are you really using the LLM to "trade using the LLM", or are you just using it as a productivity tool?
Please note that the below answers a question specifically about LLMs. I'm not suggesting there isn't another model that will be developed that can do this, just that language models are not particularly well suited for the quantitative part of quantitative analysis.

A few thoughts:
LLMs are large language models, which are associative models that use probability to make connections between ideas and expected results. Bear with me while I explain myself: If an LLM is asked 2 + 2, it does NOT go into the computer's ALU and do 2 + 2. It searches its memory and figures out that most of the time in the training data, when it sees 2 + 2 it is followed by "= 4".
This is an awful way to do math, so what models like ChatGPT *actually* do is attempt to understand intent. "User is asking for 2 + 2. 2 and 2 are both numbers, and + is an operator... they must be looking to evaluate the expression. Now let's call a secondary, non-LLM (or in this case the ALU) to perform these operations". Great, now if we see numbers that no one has asked before, we can actually answer the question.
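The intent-then-delegate pattern described above can be sketched as a toy: a routing layer that, whenever a message parses as an arithmetic expression, hands it to a deterministic calculator "tool" instead of recalling an answer from memory. This is an illustrative sketch of the pattern only, not how any particular model is actually implemented.

```python
import ast
import operator

# Arithmetic operators the deterministic "tool" is willing to evaluate.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator_tool(expr: str) -> float:
    """The non-LLM component: actually computes, instead of pattern-matching."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def answer(user_message: str) -> str:
    # Stand-in for the LLM's intent classification: if the message parses as
    # arithmetic, route it to the tool rather than "recalling" the answer.
    try:
        return str(calculator_tool(user_message))
    except (ValueError, SyntaxError, KeyError):
        return "(fall back to language-model generation)"

print(answer("2 + 2"))         # routed to the calculator, not to memory
print(answer("123456 * 789"))  # works for numbers never seen in training
```

The payoff is the last line: once evaluation is delegated, novel inputs are handled correctly, which pure next-token recall cannot guarantee.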
Let's take this a few steps further: say you tell LLM1 you want to run a regression of financials against price. Assuming ML has been enabled on the model, what does it do here? First, it evaluates what "financials" are. Maybe you give it a list, or you tell it to use the metrics on the yahoo finance board. Cool, so it throws it all into sklearn.glm (which is what you asked for) and it returns garbage because you have all noise and no signal.
Maybe you can ask it to instead use only the metrics that it thinks are most relevant to predicting stock price (note: this is *your* idea, not chatGPT's... at this point, would you still consider it the model doing the work?). Maybe it then subsets the data and maybe you have some alpha or maybe not. But what you will get is the most likely subset as defined by the training data. Now, it's almost by definition that this has already been done before (associative model), but that's neither here nor there.
When you ask it to do the regression, the LLM is not regressing - it's calling a separate model to run the regression for you. You ask it to write the code and after an hour of playing around you finally get your regression working.
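The all-noise, no-signal failure mode is easy to reproduce with synthetic data: regress a price series on a pile of random "metrics" and the in-sample fit looks respectable while the out-of-sample fit collapses. The sketch below uses fabricated random data; the feature count and sample size are arbitrary assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 50 "financial metrics" that are pure noise, and a return series that
# does not depend on any of them -- the all-noise, no-signal case.
n_obs, n_features = 200, 50
X = rng.standard_normal((n_obs, n_features))
y = rng.standard_normal(n_obs)  # "price returns", independent of X

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)
print(f"in-sample R^2:     {model.score(X_tr, y_tr):.2f}")  # looks like signal
print(f"out-of-sample R^2: {model.score(X_te, y_te):.2f}")  # the garbage shows up
```

With many features and few observations, OLS happily fits the noise in-sample; only the held-out score reveals that there was nothing to find, which is why the unsupervised "throw it all in" approach returns garbage.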

With this situation, what benefit has the LLM actually provided?
1. It helped narrow down your inputs by providing you with the answer to "what do most people most closely associate with stock price movements out of the list I defined" <- and you had to implicitly define this question anyway
2. coding help

As a financial data scientist for going on 10 years, ChatGPT doesn't help me with new ideas per se. I bounce ideas off of it; if I find something meaningful, it's great at providing a primer and can act like an expert Q&A, but it's not making the creative connections. The quality of the output is based on the quality of the input: if you don't ask good questions you won't get good answers, and those questions define the trading strategies.
The last major part to this is that, because it implements *your* ideas faster, it massively compresses the feedback cycle, which allows you to conduct your research faster.
sentiment 0.91
© 2020 - 2025 ChartExchange LLC