
DNN
Denison Mines Corp
stock NYSEAMERICAN

At Close (May 16, 2025 3:59:30 PM EDT)
1.44 USD  -4.333% (-0.07)  Volume: 57,929,242
Bid 0.00 / Ask 0.00 / Spread 0.00
Pre-market (May 16, 2025 9:18:30 AM EDT)
1.52 USD  +1.327% (+0.02)  Volume: 89,018
After-hours (May 16, 2025 4:52:30 PM EDT)
1.44 USD  +0.348% (+0.01)  Volume: 14,949
DNN Reddit Mentions

We have sentiment values and mention counts going back to 2017. The complete data set is available via the API.
DNN Specific Mentions
As of May 17, 2025 4:52:25 AM EDT (<1 min. ago)
Includes all comments and posts. Mentions per user per ticker capped at one per hour.
4 days ago • u/Wild-Dependent4500 • r/algotrading • longtime_professional_software_engineer_and
I benchmarked three architectures: deep neural networks (DNNs), support-vector regression (SVR), and transformers. The DNN consistently delivered the best results. You can explore the feature matrix here (refreshed every 5 minutes): https://ai2x.co/data_1d_update.csv
The build_matrix() code is as follows.
import numpy as np

# Assumes df_features and df_target (pandas DataFrames) plus the constants
# SEQ_LENGTH and m_test_size are defined elsewhere in the script.
def build_matrix():
    scaled_features = df_features.values
    scaled_target = df_target.values
    print("scaled_features.shape", scaled_features.shape)
    print("scaled_target.shape", scaled_target.shape)

    # Slide a window of SEQ_LENGTH rows over the features; the target is
    # the value immediately after each window.
    X, y = [], []
    for i in range(len(scaled_features) - SEQ_LENGTH):
        X.append(scaled_features[i:i + SEQ_LENGTH])
        y.append(scaled_target[i + SEQ_LENGTH])
    X, y = np.array(X), np.array(y)
    print("X.shape", X.shape)
    print("y.shape", y.shape)
    X_flat = X.reshape(X.shape[0], -1)
    print("X_flat.shape", X_flat.shape)

    # Train-test split (last m_test_size samples for testing)
    split = len(X_flat) - m_test_size
    X_train, X_test = X_flat[:split], X_flat[split:]
    y_train, y_test = y[:split], y[split:]

    # Flatten y to 1D arrays if needed for SVR
    y_train = y_train.flatten()
    y_test = y_test.flatten()
    return X_train, X_test, y_train, y_test
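For readers who want to see the sliding-window construction in isolation, here is a minimal self-contained sketch. It is not the original poster's script: the array sizes, SEQ_LENGTH = 5, and the hold-out size of 10 are illustrative stand-ins for the real df_features / df_target data.

```python
import numpy as np

SEQ_LENGTH = 5      # illustrative window length
m_test_size = 10    # illustrative hold-out size

# Synthetic stand-ins for df_features.values / df_target.values
scaled_features = np.random.rand(100, 3)   # 100 rows, 3 feature columns
scaled_target = np.random.rand(100, 1)

# Each sample is a window of SEQ_LENGTH rows; the target is the next row.
X, y = [], []
for i in range(len(scaled_features) - SEQ_LENGTH):
    X.append(scaled_features[i:i + SEQ_LENGTH])
    y.append(scaled_target[i + SEQ_LENGTH])
X, y = np.array(X), np.array(y)

# Flatten each window into one row: 5 steps x 3 features = 15 columns
X_flat = X.reshape(X.shape[0], -1)

# Hold out the last m_test_size samples for testing
split = len(X_flat) - m_test_size
X_train, X_test = X_flat[:split], X_flat[split:]
y_train, y_test = y[:split].flatten(), y[split:].flatten()

print(X_flat.shape, X_train.shape, X_test.shape)  # (95, 15) (85, 15) (10, 15)
```

Flattening each window to one row is what lets an estimator that expects 2-D input, such as SVR, consume the same data a sequence model would see.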
sentiment 0.44


© 2020 - 2025 ChartExchange LLC