matthewdicks98 committed on
Commit 9bd8206 · verified · 1 Parent(s): e06c1c9

Update README.md

Files changed (1):
  1. README.md +49 -39

README.md CHANGED
@@ -24,40 +24,43 @@ size_categories:

## Who is NOSIBLE?

- NOSIBLE is a web-scale vertical search engine. Our worldwide media surveillance products help companies build AI systems that see every worldwide event and act with complete situational awareness. In short, we help companies know everything, all the time. The financial institutions we work with rely on us to deliver media intelligence from every country in every language in real-time. Shortcomings in existing financial datasets and financial models are what inspired us to release this dataset and related models.

- - **[NOSIBLE Financial Sentiment v1.1 Base](https://huggingface.co/NOSIBLE/financial-sentiment-v1.1-base)**
- - [NOSIBLE Forward Looking v1.1 Base](https://huggingface.co/NOSIBLE/forward-looking-v1.1-base)
- - [NOSIBLE Prediction v1.1 Base](https://huggingface.co/NOSIBLE/prediction-v1.1-base)

## What is it?

- The NOSIBLE Financial Sentiment Dataset is an open collection of **100,000** cleaned, deduplicated, and labeled financial news text samples. The data was extracted directly from the NOSIBLE search engine in response to real queries asked by real financial institutions. Each sample has been labeled as **positive**, **negative**, or **neutral** using a sophisticated labeling pipeline (more information coming soon, and a brief explanation below). The models we trained on the NOSIBLE Financial Sentiment Dataset outperform models trained on the Financial PhraseBank out of sample.

## How to use it
Using the [HuggingFace datasets library](https://huggingface.co/docs/datasets/):

- You can install it with `pip install datasets`, and must login using e.g. `hf auth login` to access this dataset.

```python
from datasets import load_dataset

dataset = load_dataset("NOSIBLE/financial-sentiment")
-
print(dataset)

- # DatasetDict({
- #     train: Dataset({
- #         features: ['text', 'label', 'netloc', 'url'],
- #         num_rows: 100000
- #     })
- # })

- # What's next?
- # Train your model 🤖
- # Profit 💰
```

## Dataset Structure

### Data Instances
@@ -75,40 +78,47 @@ The following is an example sample from the dataset:

### Data Fields

- - `text` (string): A text chunk from a document.
- - `label` (string): The financial sentiment label of the text, sourced from LLMs and refined with active learning (an iterative relabeling process).
- - `netloc` (string): The network location (domain) of the document.
- `url` (string): The URL of the document.

## Dataset creation

### Data source
- The dataset was sampled from NOSIBLE datafeeds, which provides web-scale surveillance data to customers. Samples consist of top-ranked search results from the NOSIBLE search engine in response to safe, curated queries. All data is sourced exclusively from the public web.

### Relabeling algorithm
- The dataset's label field was annotated by LLMs and refined using an active learning algorithm called relabeling.

The algorithm outline is as follows:

- 1. Hand label a candidate set of ~200 samples to use as a test bed to refine the prompt used by the LLM labelers to classify the text.
2. Label a set of 100k samples with LLM labelers:
- - `x-ai/grok-4-fast`
- - `x-ai/grok-4-fast:thinking`
- - `google/gemini-2.5-flash`
- - `openai/gpt-5-nano`
- - `openai/gpt-4.1-mini`
- - `openai/gpt-oss-120b`
- - `meta-llama/llama-4-maverick`
- - `qwen/qwen3-32b`
- 3. Train a linear model on the labels using the majority vote of the LLM labelers.
- 4. Iterative relabeling (active learning steps) to improve the label quality:
- - Evaluate the linear model's predictions over the samples.
- - Find disagreements: samples where the LLM labelers agree on a label, but the model has predicted a different label.
- - Consult a much larger LLM, the oracle, to evaluate the model's prediction and relabel the sample if it agrees with the LLM labels.
- - Drop the worst performing LLM labelers from the ensemble.
- - Repeat the process with the remaining LLM labelers until the number of samples relabeled reaches 0.
- - Store the refined relabeled dataset.
- 5. This is the final dataset used for training the [NOSIBLE Financial Sentiment v1.1 Base](https://huggingface.co/NOSIBLE/financial-sentiment-v1.1-base) model, which is a finetune.

## Additional information
## Who is NOSIBLE?

+ [**NOSIBLE**](https://www.nosible.com/) is a vertical web-scale search engine. Our worldwide media surveillance products help companies build AI systems that see every worldwide event and act with complete situational awareness. In short, we help companies know everything, all the time. The financial institutions we work with rely on us to deliver media intelligence from every country, in every language, in real time. Shortcomings in existing financial datasets and financial models are what inspired us to release this dataset and related models.

+ - [**NOSIBLE Financial Sentiment v1.1 Base**](https://huggingface.co/NOSIBLE/financial-sentiment-v1.1-base)

## What is it?

+ The NOSIBLE Financial Sentiment Dataset is an open collection of **100,000** cleaned, deduplicated, and sentiment-labeled news samples. Each label reflects the financial sentiment of a short text snippet, categorizing it by whether the described events are likely to have a **positive**, **neutral**, or **negative** financial impact on a company.
+
+ All text is sourced from the **NOSIBLE Search Feeds** product using a curated set of finance-related queries. Sentiment labels are assigned through a multi-stage, LLM-based annotation pipeline (described below).
+
+ Models trained on this dataset outperform those trained solely on the [**Financial PhraseBank**](https://huggingface.co/datasets/takala/financial_phrasebank), even when PhraseBank is used only as an unseen evaluation dataset.

## How to use it
Using the [HuggingFace datasets library](https://huggingface.co/docs/datasets/):

+ Install the `datasets` library with `pip install datasets`, then load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("NOSIBLE/financial-sentiment")
print(dataset)
+ ```

+ #### Expected Output

+ ```text
+ DatasetDict({
+     train: Dataset({
+         features: ['text', 'label', 'netloc', 'url'],
+         num_rows: 100000
+     })
+ })
```

+ You can also access this dataset through any interface supported by [Hugging Face](https://huggingface.co/).
+
## Dataset Structure

### Data Instances
 
### Data Fields

+ - `text` (string): A text chunk from a search result.
+ - `label` (string): The financial-sentiment label of the text: `positive`, `neutral`, or `negative`.
+ - `netloc` (string): The domain name of the source document.
- `url` (string): The URL of the document.
## Dataset creation

### Data source
+ The dataset was sampled from the NOSIBLE Search Feeds, which provide web-scale surveillance data to customers. Samples consist of top-ranked search results from the NOSIBLE search engine in response to safe, curated, finance-specific queries. All data is sourced exclusively from the public web.

### Relabeling algorithm
+ Labels were first generated by multiple LLM annotators and then refined with an active-learning relabeling loop.

The algorithm outline is as follows:

+ 1. Hand-label ~200 samples to tune the prompts for the LLM annotators.
2. Label a set of 100k samples with LLM labelers:
+    - [`xAI: Grok 4 Fast`](https://openrouter.ai/x-ai/grok-4-fast)
+    - [`xAI: Grok 4 Fast (reasoning enabled)`](https://openrouter.ai/x-ai/grok-4-fast)
+    - [`Google: Gemini 2.5 Flash`](https://openrouter.ai/google/gemini-2.5-flash)
+    - [`OpenAI: GPT-5 Nano`](https://openrouter.ai/openai/gpt-5-nano)
+    - [`OpenAI: GPT-4.1 Mini`](https://openrouter.ai/openai/gpt-4.1-mini)
+    - [`OpenAI: gpt-oss-120b`](https://openrouter.ai/openai/gpt-oss-120b)
+    - [`Meta: Llama 4 Maverick`](https://openrouter.ai/meta-llama/llama-4-maverick)
+    - [`Qwen: Qwen3 32B`](https://openrouter.ai/qwen/qwen3-32b)
+ 3. Train multiple linear models to predict the majority vote of the LLM labelers, using as features the text embeddings produced by the following models:
+    - [`Qwen3-Embedding-8B`](https://openrouter.ai/qwen/qwen3-embedding-8b)
+    - [`Qwen3-Embedding-4B`](https://openrouter.ai/qwen/qwen3-embedding-4b)
+    - [`Qwen3-Embedding-0.6B`](https://openrouter.ai/qwen/qwen3-embedding-0.6b)
+    - [`OpenAI: Text Embedding 3 Large`](https://openrouter.ai/openai/text-embedding-3-large)
+    - [`Google: Gemini Embedding 001`](https://openrouter.ai/google/gemini-embedding-001)
+    - [`Mistral: Mistral Embed 2312`](https://openrouter.ai/mistralai/mistral-embed-2312)
+ 4. Perform iterative relabeling:
+    - Compare each linear model's predictions to the majority-vote label.
+    - Identify disagreements: samples where all the linear models agree on a label but the majority-vote label differs.
+    - Use a larger LLM (the "oracle") to evaluate these ambiguous cases and relabel them when appropriate.
+    - Drop the worst-performing linear models from the ensemble.
+    - Repeat until no additional samples require relabeling.
+ 5. The result is the final dataset, used to train the [NOSIBLE Financial Sentiment v1.1 Base](https://huggingface.co/NOSIBLE/financial-sentiment-v1.1-base) model.
+
+ We used [`OpenAI: GPT-5.1`](https://openrouter.ai/openai/gpt-5.1) as the oracle.
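One round of the relabeling loop in steps 3 and 4 can be sketched in a few lines. This is a simplified illustration, not NOSIBLE's actual pipeline: it uses scikit-learn logistic regressions as the linear probes, assumes you already have one precomputed embedding matrix per embedding model, and `query_oracle` is a hypothetical stand-in for the oracle LLM call:

```python
from collections import Counter

import numpy as np
from sklearn.linear_model import LogisticRegression


def majority_vote(votes_per_sample):
    """Most common label among the LLM annotators for each sample."""
    return [Counter(votes).most_common(1)[0][0] for votes in votes_per_sample]


def relabel_round(embedding_sets, labels, query_oracle):
    """One active-learning round.

    Trains one linear probe per embedding model, flags samples where every
    probe agrees on a label that differs from the current majority-vote
    label, and asks the oracle whether to flip those labels.

    embedding_sets: dict of embedding-model name -> (n_samples, dim) array.
    query_oracle: hypothetical callable (index, old_label, candidate) -> label.
    """
    predictions = []
    for X in embedding_sets.values():
        probe = LogisticRegression(max_iter=1000).fit(X, labels)
        predictions.append(probe.predict(X))
    predictions = np.array(predictions)  # shape: (n_probes, n_samples)

    relabeled, n_changed = list(labels), 0
    for i in range(len(labels)):
        probe_labels = set(predictions[:, i])
        if len(probe_labels) == 1 and labels[i] not in probe_labels:
            candidate = probe_labels.pop()
            # The oracle arbitrates; relabel only if it sides with the probes.
            if query_oracle(i, labels[i], candidate) == candidate:
                relabeled[i] = candidate
                n_changed += 1
    return relabeled, n_changed
```

In the full procedure this round would repeat, dropping the weakest probes each time, until `n_changed` reaches zero.
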
## Additional information