Saturday, November 11, 2023

A cursory analysis of the 2023 Polish parliamentary election

[map from https://wbdata.pl/wybory-2023-mapy/]

I asked myself whether the recent election of 15.10.2023 was fair and whether this can be checked statistically with a simple, intuition-assisted analysis.
So this will not be hard proof of electoral fraud.
The data for the analysis can be found at https://wybory.gov.pl/sejmsenat2023/pl/dane_w_arkuszach, in the section 'Wyniki głosowania na listy Sejmowe' (results of voting for the Sejm lists). The file 'po okręgach Sejm CSV XLSX' contains the data from all electoral precincts.

Introduction

Let's start with a few definitions. In what follows I will rely on a few new variables; as I wrote, the analysis is very simplified. Because of the large number of parties, I introduce the following groups:
  1. 'OPOZYCJA' = 'KOALICYJNY KOMITET WYBORCZY TRZECIA DROGA POLSKA 2050 SZYMONA HOŁOWNI - POLSKIE STRONNICTWO LUDOWE' +
    'KOALICYJNY KOMITET WYBORCZY KOALICJA OBYWATELSKA PO .N IPL ZIELONI' +
    'KOMITET WYBORCZY NOWA LEWICA'
  2. 'INNE PARTIE' = 'KOMITET WYBORCZY BEZPARTYJNI SAMORZĄDOWCY' +
    'KOMITET WYBORCZY WYBORCÓW MNIEJSZOŚĆ NIEMIECKA' +
    'KOMITET WYBORCZY KONFEDERACJA WOLNOŚĆ I NIEPODLEGŁOŚĆ' +
    'KOMITET WYBORCZY POLSKA JEST JEDNA' +
    'KOMITET WYBORCZY WYBORCÓW RUCHU DOBROBYTU I POKOJU' +
    'KOMITET WYBORCZY NORMALNY KRAJ' +
    'KOMITET WYBORCZY ANTYPARTIA' +
    'KOMITET WYBORCZY RUCH NAPRAWY POLSKI'
  3. 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ' - as a separate group
Since 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ' and 'OPOZYCJA' are the main players, in the rest of the analysis I will work with 3 data groups:
  1. all precincts: no distinction
  2. a) OPOZYCJA > KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ: when the sum of votes for 'OPOZYCJA' in a precinct is greater than the number of votes for 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ'
  3. b) OPOZYCJA < KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ: when the number of votes for 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ' in a precinct is greater than the sum of votes for 'OPOZYCJA'

As the variable for comparing the distributions of the party groups I use the ratio of the number of votes cast for a given party group in a precinct to the total number of votes cast in that precinct.
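A minimal sketch of how these quantities can be computed from the per-precinct CSV is shown below (pandas assumed; the file name and the short column aliases are hypothetical placeholders for the actual headers of the PKW file):

import pandas as pd

# Hypothetical file and column aliases - adjust them to the headers of the downloaded CSV.
df = pd.read_csv("wyniki_sejm_po_obwodach.csv", sep=";")

total_col = "Liczba głosów ważnych oddanych łącznie na wszystkie listy kandydatów"
groups = {
    "OPOZYCJA": ["TRZECIA DROGA", "KOALICJA OBYWATELSKA", "NOWA LEWICA"],
    "PIS": ["PRAWO I SPRAWIEDLIWOŚĆ"],
    "INNE PARTIE": ["BEZPARTYJNI SAMORZĄDOWCY", "MNIEJSZOŚĆ NIEMIECKA", "KONFEDERACJA",
                    "POLSKA JEST JEDNA", "RUCHU DOBROBYTU I POKOJU", "NORMALNY KRAJ",
                    "ANTYPARTIA", "RUCH NAPRAWY POLSKI"],
}

for name, cols in groups.items():
    df[name] = df[cols].sum(axis=1)                  # votes for the group, per precinct
    df[name + "_share"] = df[name] / df[total_col]   # ratio used in the plots below

# data groups a) and b)
group_a = df[df["OPOZYCJA"] > df["PIS"]]
group_b = df[df["OPOZYCJA"] < df["PIS"]]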

Distributions of the votes cast

The distributions look like this:
for the group 'all precincts':
Fig. 1


As you can see, the distributions for 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ' and 'OPOZYCJA' are almost mirror images of each other around the value $\approx 0.5$. The distributions for the next data groups:
for group a) OPOZYCJA > KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ
Fig. 2


and for group b) OPOZYCJA < KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ

Fig. 3


In data analyses of this kind we usually deal with approximately symmetric distributions, as for the groups 'all precincts' and a) OPOZYCJA > KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ. In the last group (b) OPOZYCJA < KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ) there is an asymmetric split between the political groups 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ' and 'OPOZYCJA' near a value of the 'ratio of votes for a party group to the total number of votes cast in the precinct' of $\approx 0.45$. Support for 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ' drops very sharply to 0; the support distribution for 'OPOZYCJA' also seems to fall off steeply, but not as dramatically as for 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ'.
Because this aspect looks odd, in what follows I will analyze the votes in those precincts for which the value of the 'ratio of votes for a party group to the total number of votes cast in the precinct' lies in the range $0.3 - 0.6$.

Searching for manipulation

For this purpose I take the precincts (from data groups a) and b)) for which the 'ratio of votes for a party group to the total number of votes cast in the precinct' lies in the range $0.3 - 0.6$, and I compute the distribution of the 2nd digit of the vote counts for 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ' and 'OPOZYCJA'. If the votes have not been manipulated, the resulting distribution should follow Benford's law for the 2nd digit (Benford's law - Wikipedia).
As a rule, checking the distribution of the 2nd digit of a set of values is less sensitive to additional factors that can distort the picture, e.g. differences in precinct size or strongly separated distributions of the analyzed variables. Factors of this kind make the distribution of the 1st digit unreliable, even when it deviates strongly from the distribution expected from Benford's law for the 1st digit.
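As an illustration, a minimal sketch of the 2nd-digit test might look like this (numpy assumed; the exact normalization of the CHI2 statistic shown in the plots below may differ from this simple version):

import numpy as np

def benford_second_digit_probs():
    """Expected Benford probabilities for the 2nd significant digit d = 0..9."""
    return np.array([sum(np.log10(1 + 1 / (10 * k + d)) for k in range(1, 10))
                     for d in range(10)])

def second_digits(values):
    """2nd significant digit of every value with at least two digits."""
    v = np.asarray(values, dtype=int)
    v = v[v >= 10]
    return np.array([int(str(x)[1]) for x in v])

def second_digit_chi2(values):
    """Observed 2nd-digit frequencies and a simple chi-square distance to Benford."""
    d = second_digits(values)
    observed = np.bincount(d, minlength=10) / len(d)
    expected = benford_second_digit_probs()
    chi2 = np.sum((observed - expected) ** 2 / expected)
    return observed, chi2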

For the analysis of the 2nd-digit distribution I selected those variables from the downloaded precinct data that should most directly reveal possible manipulations:
  1. 'Liczba głosów ważnych oddanych łącznie na wszystkie listy kandydatów' (the total number of valid votes cast for all candidate lists),
  2. 'Liczba głosów nieważnych' (the number of invalid votes),
  3. 'W tym z powodu postawienia znaku „X” obok nazwiska dwóch lub większej liczby kandydatów z różnych list' (of which: because an "X" was placed next to the names of two or more candidates from different lists)
as well as the number of votes cast for 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ', 'OPOZYCJA' and 'INNE PARTIE'. As the fit error I compute Chi2 (denoted CHI2 in the plots below). The resulting distributions:
Fig. 4


In the plots above, $N$ denotes the number of values from which the distribution was computed.
For the variable 'Liczba głosów ważnych oddanych łącznie na wszystkie listy kandydatów' (plot 1) above), the result suggests stronger manipulation of the data for precincts in category a) OPOZYCJA > KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ; the digits 0 and 1 can be interpreted as having been added.
For the variable 'Liczba głosów nieważnych' (plot 2) above), the fit errors are similar for both data groups a) OPOZYCJA > KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ and b) OPOZYCJA < KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ. The 'Liczba głosów nieważnych' is larger for case a) OPOZYCJA > KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ.
The variable 'W tym z powodu postawienia znaku „X” obok nazwiska dwóch lub większej liczby kandydatów z różnych list' shows even more statistically significant differences between the two data groups a) and b).
Plots 2) and 3) above suggest manipulations related to inflating the number of invalid votes.

In the next figure I show the analysis of the number of votes for the political groups.
Fig. 5


Support for the 'OPOZYCJA' group (plot 1) above) has a small fit error (CHI2 < 1.) for both data groups a) and b). It is hard to point to any manipulation here.
Support for 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ' (plot 2) above) also has a small fit error - CHI2 < 1. for group b) and CHI2 $\approx$ 2. for group a). In group a) the digits 0 and 1 of the support distribution lie below the expected distribution.
For 'INNE PARTIE' the fit error is likewise larger for data group a) than for group b).

Summary

The analysis above points to a potential place where manipulation of the votes for the political groups could have occurred. These are the precincts in which support for 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ' and 'OPOZYCJA' is balanced (the 'ratio of votes for a party group to the total number of votes cast in the precinct' lies in the range $0.3 - 0.6$). The plots in Fig. 5 show that certain manipulations occurred more often in precincts belonging to data group a) OPOZYCJA > KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ, to the disadvantage of 'KOMITET WYBORCZY PRAWO I SPRAWIEDLIWOŚĆ'.
Does this analysis prove that the election was rigged ? No, it only points to statistically detected signatures of manipulation, and they are small. As I wrote in the introduction, this analysis is not proof of electoral fraud.


Thank you for reading !

Monday, June 13, 2022

A word on the development and implementation of machine learning techniques for IoT data processing

IoT data analytics typically involves creating processes that predict failures or specific situations, or that find anomalies online. By a process I mean here a model created with machine learning and/or statistics, together with its full implementation in the production cycle.
It has been reported that up to 80% of IoT data projects fail (the project was not completed or the company gained nothing from its implementation). If we look at the highlighted reasons for failure, we find mostly data-related issues, hazy descriptions of problems with machine learning methods that do not go into detail, vaguely defined goals, and many other reasons.

In this note, I would like to focus on choosing an algorithm for IoT data analysis. Before working with IoT data, it is a good idea to decide up front which kind of analytical solution should be implemented: one based on supervised or on unsupervised methods.
Usually, creating a good unsupervised algorithm is a difficult task, more difficult than creating a supervised one. However, building the model itself is only part of the project. The other part is its application in the production process. So I propose to look at the whole picture: the creation of an analytical method and its productization.

Solution 1 - Supervised Model (SM):

  1. analytical model (R&D): this is a Data Scientist (DS) standard job:
    • selection of features (data and feature engineering),
    • model creation,
    • precise validation procedure with KPIs.
  2. Productization: we have to include a Data Engineer here also. Tasks to do:
    • data & feature engineering, preparation of computing environment,
    • framework for SM automatization including:
      • monitoring of the data shifts (features + target) - see the sketch after this list,
      • data selection for creating new models,
      • data labeling,
      • parameter hypertuning,
      • model retraining,
      • selection of the best model(s).
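To make one of these tasks concrete, here is a minimal sketch of data-shift monitoring using a two-sample Kolmogorov-Smirnov test (scipy assumed; the feature values and the alert threshold are purely illustrative):

import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when the live feature distribution differs from the reference one."""
    result = ks_2samp(reference, live)
    return result.pvalue < alpha

# usage: compare the training-time distribution of one feature with a recent production window
rng = np.random.default_rng(7)
reference = rng.normal(0.0, 1.0, size=5_000)   # feature values seen at training time
live = rng.normal(0.3, 1.0, size=1_000)        # the most recent production window
if detect_drift(reference, live):
    print("feature drift detected -> trigger data selection / retraining")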

Solution 2 - Unsupervised Model (UM):

  1. R&D: DS job:
    • selection of features (data and feature engineering),
    • model creation,
    • precise validation procedure with KPIs.
    Tasks are mostly the same as for the SM case, but obviously the goal is more complex (a minimal sketch of such an unsupervised model follows this list).
  2. Productization: it involves both DE and DS tasks. Main tasks to do:
    • data & feature engineering, preparation of computing environment.
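For comparison, a minimal sketch of what the UM itself could look like - here an anomaly detector based on an Isolation Forest (scikit-learn assumed; the sensor features are synthetic and purely illustrative):

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# pretend these are engineered features per device reading (temperature, vibration)
features = rng.normal(loc=[60.0, 0.2], scale=[2.0, 0.05], size=(10_000, 2))

model = IsolationForest(contamination=0.01, random_state=42).fit(features)

new_readings = np.array([[61.0, 0.21],    # normal reading
                         [95.0, 0.90]])   # overheating + strong vibration
print(model.predict(new_readings))        # 1 = normal, -1 = anomaly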

Comparison of both solutions:
As you can see, creating a well-functioning production cycle for the SM is a very difficult task: in my opinion more difficult than creating a proper supervised model, and much more time consuming.

On the other hand, for solution 2 based on the UM, putting the solution into production is very simple. For building the analytical model itself, the picture is the opposite.
Given the above considerations, the summary statement might look like this image:
The difficulties associated with the productization of the SM are the main place where failures are to be expected, and there are a lot of them. Maybe it is sometimes worth simplifying this part of the project and focusing on building a UM instead.

I hope you find my findings helpful. Thanks for reading !

Monday, January 3, 2022

Semantic text clustering: testing homogeneity of text clusters using Shannon entropy.

Many natural language processing (NLP) projects are based on the task of semantic text clustering. In short, we have a set of statements (texts, phrases, or sentences), and our goal is to group them semantically. Nowadays, this is quite a classical problem with a rich literature on clustering methods.

In spite of the simple formulation of the task, there are many problems during its realization. To make things more difficult, let's assume we are dealing with unlabeled data, which means we must rely on unsupervised techniques.

The first problem:

as we know, we can find many categories according to which we can try to group the given set of texts. In order to do this task well, we need to know the business case better. In other words, we need to know the question we want to answer using clustering. This problem is not the purpose of this note, so we will skip it.

The second problem:

suppose we have successfully performed text clustering. The question then arises: how do we know that all clusters contain the correct texts ? Answering this question is the purpose of this note.

Let us assume that a given method has generated $N$ groups/clusters, each of which contains a set of similar texts. Every such cluster is more or less homogeneous, and some clusters are completely wrong in the sense of similarity of their texts. So, to answer our original question,
How do we know that all clusters contain correct texts ?
we need to find those clusters that contain erroneous texts.

How to find the worst clusters ?

In this note, I would like to propose the following method for determining the worst clusters:
  1. checking intra-cluster similarity by computing the Shannon entropy of the set of texts belonging to a given cluster,
  2. using a dynamically determined threshold entropy to select the worst clusters (based on the method presented in my blog post https://dataanalyticsforall.blogspot.com/2021/05/semantic-search-too-many-or-too-few.html).
As data to illustrate our task I use data known as reuters-21578 (https://paperswithcode.com/dataset/reuters-21578).
Since the clustering stage is not our goal, this part of the work was done using the fast clustering method based on Agglomerative Clustering ( https://www.sbert.net/examples/applications/clustering/README.html#fast-clustering, code: https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/clustering/fast_clustering.py).

Each set of texts is characterized by a different degree of mutual homogeneity. The general idea of the method is to calculate the Shannon entropy for each identified text cluster, and then to use the elbow rule to determine the homogeneity threshold (the maximum entropy value) below which clusters satisfy the homogeneity condition (are accepted). All clusters with entropy above the threshold should be discarded, used to find a better clustering algorithm, or reanalyzed.
Any clustering method will produce a number of single-element clusters. Since the method checks homogeneity between the sentences within a cluster, I calculate the entropy only for multi-element clusters.

The only parameters that need to be introduced in the proposed method are:
  1. the 'approximation_order', which defines the number of consecutive characters (n-grams) from which we create the probability distribution used later to calculate the Shannon entropy,
  2. the sensitivity parameter S (to adjust the aggressiveness of knee detection [ kneed]) used in the Python kneed library.
The complete code is available at
https://github.com/Lobodzinski/Semantic_text_clustering__testing_homogeneity_of_text_clusters_using_Shannon-entropy.
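For orientation, here is a minimal sketch of the two steps (character n-gram entropy per cluster, then the knee of the sorted entropies as the acceptance threshold). It is a simplified illustration of the idea, not a copy of the repository code:

import math
from collections import Counter

from kneed import KneeLocator

def shannon_entropy(texts, approximation_order=2):
    """Shannon entropy of the character n-gram distribution of a cluster of texts."""
    joined = " ".join(texts).lower()
    ngrams = [joined[i:i + approximation_order]
              for i in range(len(joined) - approximation_order + 1)]
    counts = Counter(ngrams)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def reject_clusters(clusters, approximation_order=2, S=1.0):
    """clusters: dict {cluster_id: list of texts}; returns the ids above the knee threshold."""
    multi = {cid: txts for cid, txts in clusters.items() if len(txts) > 1}
    entropies = sorted((shannon_entropy(t, approximation_order), cid)
                       for cid, t in multi.items())
    y = [e for e, _ in entropies]
    knee = KneeLocator(list(range(len(y))), y, S=S,
                       curve="convex", direction="increasing").knee
    if knee is None:          # no clear elbow found
        return []
    threshold = y[knee]
    return [cid for e, cid in entropies if e > threshold]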

To illustrate the method, I show two pictures with the final results for two values of the parameter used to determine the Shannon entropy (approximation_order: 2 and 3).

All the values of the Shannon entropy approximations are ordered from smallest to largest (y-axis). The kneed library, given a parameter S (in this case S = 1), determines the best inflection point, and this entropy value defines the maximum entropy for the accepted sentence clusters. For the approximation_order parameter equal to 2, we obtain 14 clusters which are not sufficiently homogeneous.
Questionable clusters are:
Cluster 13, #10 Elements 
Cluster 23, #8 Elements 
Cluster 26, #8 Elements 
Cluster 28, #8 Elements 
Cluster 49, #6 Elements 
Cluster 56, #6 Elements 
Cluster 59, #5 Elements 
Cluster 62, #5 Elements 
Cluster 127, #4 Elements 
Cluster 137, #3 Elements 
Cluster 145, #3 Elements 
Cluster 152, #3 Elements 
Cluster 247, #3 Elements 
Cluster 275, #3 Elements 
For comparison, the next picture is calculated for approximation_order=3. In this case, we get 15 incorrect clusters:
Rejected clusters:
Cluster 0, #396 Elements 
Cluster 13, #10 Elements 
Cluster 23, #8 Elements 
Cluster 26, #8 Elements 
Cluster 28, #8 Elements 
Cluster 56, #6 Elements 
Cluster 59, #5 Elements 
Cluster 62, #5 Elements 
Cluster 109, #4 Elements 
Cluster 127, #4 Elements 
Cluster 137, #3 Elements 
Cluster 145, #3 Elements 
Cluster 152, #3 Elements 
Cluster 179, #3 Elements 
Cluster 275, #3 Elements 
Comparing the lists of rejected clusters may give more information about the heterogeneities present in our text set.

I encourage you to test the method for yourself. In my case it turned out to be very useful.

The code:
https://github.com/Lobodzinski/Semantic_text_clustering__another_way_to_study_cluster_homogeneity. An outstanding advantage of the method is that it works fully autonomously, without the need for human intervention.


Thanks for reading, please feel free to comment and ask questions if anything is unclear.

Thursday, August 12, 2021

Are solar power plants really green energy ? Continuation

This text is a continuation of my reflections on the impact of photovoltaic power plants on the climate. In the part Are solar power plants really green energy ? I presented some figures showing that the heat generated by the currently working solar power plants (they produce more than 4 times more thermal energy than they produce in the form of electricity) is such a large part of the heat emitted by humans that, after converting this heat into CO2-equivalent amounts, it corresponds to more than half of the effect generated in 2020 on the whole Earth!
This amount of heat energy is unlikely to have no effect on global warming.

Below are more concrete signals that the development of solar power plant infrastructure could lead to climate disruption and rising temperatures.


  1. Urban Heat Island effect (UHI)

    The UHI is a real phenomenon.
    The paper "The Effect of Urban Heat Island on Climate Warming in the Yangtze River Delta Urban Agglomeration in China" presents the effect of UHI on climate warming based on an analysis of the effects of urbanization rate, urban population and land use change on the warming rate of mean, minimum (night) and maximum (day) air temperature in the Yangtze River Delta (YRD) using observational data from 41 meteorological stations. In conclusion, the authors found that observations of daily mean, minimum, and maximum air temperature atmeasurement stations in the YRDUA from 1957 to 2010 showed significant long-term warming due to background warming and UHI. The warming rate of 0.108 to 0.483°C/decade for mean air temperature is generally consistent with the warming trend in other urban regions in China and other urban areas in the world.
    Thus, the authors showed that urbanization significantly enhanced local climate warming.
    The solar power plants based on photovoltaic panels are even hotter islands of heat than highly urbanized agglomerations. During the period of most intense sunlight, the temperature near a solar power plant can be up to 3 degrees Celsius higher than the temperature in a similar environment without solar panels and similar solar conditions.

    Suggestion:

    The similarity in heat generation between the two cases - densely populated metropolitan areas and solar power plants - suggests the same effect - warming the air over a larger area.
  2. Correlation between Urban Heat Island (UHI) effect and number of heat waves (HW)

    Due to lack of access to data, I have to rely on visual comparisons (if anyone knows of data to analyze or can make it available, please contact me).
    Below I illustrate 2 cases: USA and Europe (Germany in particular).

Conclusions:

Without access to detailed data, it is difficult to conduct a more detailed analysis of the correlation between the number of HWs and the increase in electricity produced by the growing number of solar plants.
However, the suggestion given by the available data presented above is at least worth a closer analysis.

The ideas in the European document "Fit for a Solar Future: Commission climate package is landmark achievement but more ambition is possible" could prove devastating.



Take care

Sunday, August 8, 2021

Are solar power plants really green energy ?

Are solar power plants really a good solution for energy production ?

Everywhere you hear it's a clean way to get energy. So let's see if it really is.

Introduction:


As an introduction, a few words about how the sun heats the earth and how a solar panel works.
The infrared part of the solar spectrum (wavelength > 700 nm, about 50% of the energy) is directly responsible for heating the Earth's surface and air. This kind of solar radiation exposure on the Earth is considered normal.
Photovoltaic (PV) panels operate in the visible part of the solar spectrum: from 350 nm to 750 nm (approximately). This is the part of the solar spectrum that does not normally heat the environment (the part between 700 and 750 nm does). Energy from this range of radiation is partially (14-22%) converted into electrical energy, and the rest (78-86%) is converted into thermal energy.
PVs thus act as converters of the visible part of the sunlight spectrum (not infrared) into heat (infrared). In other words, they increase the amount of heat compared to the transmitted infrared portion of sunlight. To simplify the estimation, let's assume that 16% of the energy absorbed by a PV panel is converted into electrical energy. The rest is dissipated in the form of heat. Data about the operational power produced by solar panels are given in units of electric power generated by photovoltaic (PV) systems, i.e. the 84% of the energy dissipated into heat is not included in these values.

Story nr 1 - local:


The first bad effect, a fully local one, is a local increase of temperature near solar power plants (The Photovoltaic Heat Island Effect: Larger solar power plants increase local temperatures). Another interesting article about a super solar power plant in the Sahara, taking into account the local effects of large heat dissipation around solar panels, was written by Jack Marley: Solar panels in Sahara could boost renewable energy but damage the global climate – here’s why.

Story nr 2 - global:


Now let's try to look at things globally.
The number of solar panels on Earth is growing almost exponentially every year. According to the Renewable Capacity Statistics 2021 report, as of 12.2.2021 the world had 714 GW of operational photovoltaic (PV) systems. Let's try to translate this value into a carbon footprint by treating all operating PV systems as one.
Some assumptions at the beginning:
  1. 80% of the incoming radiation is dissipated by the solar panel as heat.
  2. 1 kW of a solar panel system covers an area of about 8 m2.
  3. Solar irradiance: averaged over the year and the day, the Earth's atmosphere receives 340 W/m2 of radiation from the Sun (https://en.wikipedia.org/wiki/Solar_irradiance). The PV systems are distributed across the Earth, so I assume that the average solar irradiance used in the calculations is 150 W/m2.
  4. Average carbon-footprint equivalent of 1 kWh: 0.5 kg CO2. Obviously, the CO2 emission intensity per 1 kWh differs from country to country. The value 0.5 corresponds to the average over sunny countries.
    More detailed data by country and region is available on the website https://www.carbonfootprint.com/.
  5. Conversion from power to energy: 1 W sustained for one hour corresponds to 0.001 kWh.


The 714 GW of operational PV systems corresponds to a total surface (St) of: St = 714 000 000 kW × 8 m2/kW = 5 712 000 000 m2 = 5712 km2.
Considering the surface of the Earth, the surface of the solar panels is about 0.001% of it. Since we are talking about the electric energy produced by the operational PV systems, we can assume that this is the 16% of the absorbed energy that is converted into electricity. Therefore, the power dissipated as heat (Ht) at the same time is:

Ht = 714 [GW] × (84 [%] / 16 [%]) = 3748.5 [GW].
Now let's calculate the carbon footprint of this amount of heat. Using the conversion from W to kWh (1 W over one hour == 0.001 kWh), our heat power (Ht) corresponds to 3 748 500 000 kWh ~ 3.75 TWh of heat per hour.

This value corresponds to the following carbon footprint (assuming a footprint of 0.5 kg CO2 per kWh):
1874250000 kg CO2 /hour.
or
16 418 430 000 000 kg CO2 / year ~ 16.4 GT /year.
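The arithmetic behind these numbers can be retraced in a few lines (a sketch that only re-applies the assumptions listed above, not an independent estimate):

# Re-applying the assumptions from this post (84/16 heat-to-electricity split,
# 0.5 kg CO2-equivalent per kWh); this is the back-of-envelope logic, not a model.
pv_power_gw = 714                         # operational PV capacity, GW (electric)
heat_gw = pv_power_gw * 84 / 16           # heat dissipated at the same time: 3748.5 GW

heat_kwh_per_hour = heat_gw * 1e6         # 1 GW over one hour = 1,000,000 kWh
co2_per_hour = heat_kwh_per_hour * 0.5    # ~1.87e9 kg CO2 per hour
co2_per_year = co2_per_hour * 24 * 365    # ~1.64e13 kg ~ 16.4 Gt per year
print(co2_per_hour, co2_per_year)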
This is a huge value and corresponds to 52% (!) of the total CO2 emissions in 2020 (31.5 Gt / year)! Thus, we have an unexpected situation, because it looks as if solar panels were far worse, in terms of climate impact, than fossil fuels. Let's compare the increase in the operational power of PV systems to the change in the global temperature anomaly as a function of time. Temperature data are from https://climate.nasa.gov/vital-signs/global-temperature/.
Please note the different scales of the data presented in the figure. The operational power generated by solar panels is shown in red, the temperature anomalies in blue. Almost perfect correlation !

Summary:


  1. Is CO2 really responsible for warming the earth ?
  2. The correlation between temperature anomalies and the amount of energy produced by PV systems is surprising to say the least !
  3. By building PV systems, we create smaller or larger heat islands around them, disturbing the natural energy balance in such an area. The PV systems produce more than 4 times more thermal energy than they produce in the form of electricity.
  4. By producing solar panels we pollute our environment (+ the need to recycle).


The final conclusions rather indicate that solar power plants do more damage than conventional ones.

Now, the natural question is whether we are already seeing a correlation of climate change with the increase in heat islands around solar power plants.
  1. Is there a correlation between the heat energy produced by the increasing number of solar plants and the increase in air temperature via the Heat Islands effect ?
  2. Is there a correlation between the frequency of Heat Waves and the thermal energy produced by solar power plants ?
These questions are addressed in the next text, Are solar power plants really green energy ? Continuation.

I would be grateful if someone could point out to me the error I am making in the above approximations.

Take care

Thursday, July 15, 2021

Global Real Estate market: a non-expert view

About:
Most analyses of real estate prices compare their behavior over time with other economic indicators, but this is done for a specific country or independently for a group of countries. This text proposes a comparison of real estate prices between different countries by calculating the correlation between them.

Data:
I came across some data on real estate prices (https://data.world/finance/international-house-price-database). This data contains values of 4 quantities (with short descriptions found in wikipedia and other sources):
  1. the house price index (HPI):
    measures the price changes of residential housing as a percentage change from some specific start date (starting in 1975).
  2. the house price index expressed in real terms (RHPI):
    the deflated house price index (or real house price index), i.e. the ratio between the house price index (HPI) and the deflator used to express it in real terms.
  3. the personal disposable income index (PDI):
    measures the after-tax income of persons and nonprofit corporations. It is calculated by subtracting personal tax and nontax payments from personal income.
  4. the personal disposable income expressed in real terms index (RPDI):
    the deflated PDI.


Analysis:
As input we have time series with specified quantity $Q$ (HPI, RHPI, PDI or RPDI) for N (N=24) countries ( 'Australia', 'Belgium', 'Canada', 'Switzerland', 'Germany', 'Denmark', 'Spain', 'Finland', 'France', 'UK', 'Ireland', 'Italy', 'Japan', 'S. Korea', 'Luxembourg', 'Netherlands', 'Norway', 'New Zealand', 'Sweden', 'US', 'S. Africa', 'Croatia', 'Israel', 'Slovenia'). In order to calculate correlations between countries I do the following calculations:
  1. for a given quantity $Q$ I normalize all data independently to the range [0,1],
  2. I determine the two-site correlation function for each timestamp $t$ \begin{equation} \label{1} Corr_{country, another\_country} \left(t \right) = Q_{country}\left(t \right) Q_{another\_country}\left(t \right) \end{equation} which is finally used to calculate the global correlation for each country: \begin{equation} \label{2} C_{country}\left(t \right) = \frac{\sum_{another\_country=1}^{N} Corr_{country, another\_country} \left(t \right) }{N} \end{equation} A minimal code sketch of these two steps follows below.
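The sketch below assumes the data are arranged in a pandas DataFrame with one column per country and the timestamps as index; it is a simplified reading of eqs. (1)-(2), with the country itself excluded from the sum:

import pandas as pd

def global_correlation(q: pd.DataFrame) -> pd.DataFrame:
    """q: one column per country, one row per timestamp (e.g. the RHPI series)."""
    # step 1: normalize every country's series independently to [0, 1]
    q_norm = (q - q.min()) / (q.max() - q.min())

    n = q_norm.shape[1]
    c = {}
    for country in q_norm.columns:
        others = q_norm.drop(columns=country)
        # eq. (1): Q_country(t) * Q_other(t); eq. (2): sum over the others, divided by N
        c[country] = others.mul(q_norm[country], axis=0).sum(axis=1) / n
    return pd.DataFrame(c)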

The dynamics of the correlation function $C_{country}\left(t \right)$ calculated in this way, for the quantity $Q=$RHPI and for all countries, is shown in Figure 1. For the quantity RHPI the correlations are most apparent.
The financial crisis of 2008 is very clearly visible in this figure (the yellow band between 2007 and 2008).

A couple of observations for the time period 2007-2008:
The longest increase of the value of RHPI is seen for the US, lasting since about 1995. Other countries behave in a weakly correlated way during this period. A strong correlation between countries starts to be visible from about 2005 and quickly increases until the crash around 2007-2008. It looks as if most countries joined the global real estate market at the same time (around 2005) and at a given signal decided to crash - "ready, steady, crash!". The market seems to be too well orchestrated. I know that some people will say that this is a normal behavior because it is a global market, all markets are interconnected, etc. However, please note that it is hard to see any trace of the dot-com crisis of 2000-2003 (dot-com bubble) in this picture. Another comment on the picture concerns the number of countries participating in the crash. There are some countries excluded (Japan, S. Korea, Israel) or weakly participating in the process (Australia, Canada, Switzerland, Germany, New Zealand, Sweden, Croatia). Altogether, 13 countries out of 24 are affected by the crash.

Observations for the time period 2008-now:
The first observation is that more countries are now correlated (and not because of COVID). Japan, S. Korea and Croatia are still outside the correlated market. Spain and Italy are not correlated (an accident at work?).

Summary:
the management of the real estate market is becoming more and more consolidated (only one player?): currently 19 countries out of 24 (2008 crisis: 13 countries).
Question: when will this player decide to make another crash?

If anyone has knowledge of more data I would be grateful for providing it.

Friday, May 14, 2021

Semantic Search: Too many or too few matching pairs ? Dynamically determined selection threshold for matched query pairs

In my recent projects applying natural language processing (NLP) methods, a large part is based on, or contains parts based on, semantic search. In a nutshell, we have certain queries (phrases or sentences) on one side and a set of other texts on the other side, and our goal is to find the texts that best match our query. Put simply, we need to perform a semantic search on our data set.

For those who are less familiar with semantic search, let me define the term as:
a comparison of two texts in which the dominant part is understanding the content of the words and phrases, and the relations between the words or phrases in the queries being compared.

While working with semantic search, I encountered a problem with defining the acceptance threshold for my findings. This problem becomes significant when the texts being compared are of significantly different lengths and/or contain significantly different degrees of content. In other words, the problem becomes serious when we deal with the so-called asymmetric semantic search https://www.sbert.net/examples/applications/semantic-search/README.html#symmetric-vs-asymmetric-semantic-search.

In the following, I would like to share a method which allows one to dynamically determine the acceptance threshold for the found pairs of matched entries. This method may provide the final solution or be a starting point for a more refined version. The project code is available on my GitHub account "https://github.com/Lobodzinski/Semantic_Search__dynamical_estimation_of_the_cut_of_for_selected_matches".

Let's start describing the method.
  • The data:
    As an experiment I will use reuters data, known as reuters-21578 (https://paperswithcode.com/dataset/reuters-21578). While searching for an answer to our query, we should try to be as precise as possible in formulating the questions. However, sometimes it is not possible. For the purpose of this mini-project, let's formulate our queries in a general form.
    'Behavior of the precious metals market',
    'What is the situation in metal mines',
    'Should fuel prices expect to rise ?',
    'Will food prices rise in the near future ?',
    'I am looking for information about food crops.',
    'Information on the shipbuilding industry'
  • Generation of matched pairs between the queries and the Reuter's texts:
    Our goal is to perform a semantic search. First, we need to generate matching text pairs. In the following I will use the code that is part of the sentence-transformers package "https://github.com/UKPLab/sentence-transformers/tree/master/examples/applications/semantic-search".
  • Similarities and the threshold calculation:
    Having calculated the similarity values, we can move to the main point - choosing the similarity threshold. First, let's look at the plot of similarity values for a fixed query ('What is the situation in metal mines ?').
    It is obvious that not all matches shown in the figure are good (acceptable). So how do we choose the threshold value of the similarity ?
    The proposed method is fully heuristic and is based on the calculation of the elbow point of the curve of similarity as a function of the matched-text index. If we take a look at the examined functional relationship, we can see that this curve (almost always) has an elbow point beyond which the similarity between the found texts and our query changes very slowly. To calculate the "cut off" point (elbow point) I used the KneeLocator package ("https://pypi.org/project/kneed/"). The function KneeLocator ("https://kneed.readthedocs.io/en/stable/parameters.html#s") contains a sensitivity parameter S which can be used to better select our elbow point.
    The following code and its output shows the details of the calculation and its results. For details, please check "https://github.com/Lobodzinski/Semantic_Search__dynamical_estimation_of_the_cut_of_for_selected_matches".

    This part, for each query, reads all matched sentences gathered from the Reuters data together with the calculated similarities. The threshold is calculated by the KneeLocator function, as shown in the code below.
    
    # loop over our list of queries:
    for query in result_df['query'].unique():

        sentences_ = result_df[result_df['query'] == query]['sentence'].values
        x = []
        for y_ in sentences_:
            x.append(y_[:60] + '...')

        # similarities between the query and the matched Reuters texts:
        y1 = result_df[result_df['query'] == query]['score'].values

        # determine the elbow value:
        x0 = list(range(len(y1)))

        kn = KneeLocator(x0, y1, S=1., curve='convex', direction='decreasing')
        elbow_1 = kn.knee
        print('Elbow point values:\n tekst_id=', elbow_1,
              '; threshold value=', y1[elbow_1])
        
        
     
    The resulting value (the threshold point) is presented in the next figure:

    So, for our query ('What is the situation in metal mines ?'), we found 14 accepted texts in the Reuters set. Below I have copied the first 3 and the last 2 texts from the set of accepted texts (the whole set is too long to present here). The reader can judge for themselves the similarity between the query and the texts.
    For comparison, I have also added two texts (15 and 16) which are not accepted by this method.
    Accepted texts:
    1
    SIX KILLED IN SOUTH AFRICAN GOLD MINE ACCIDENT Six black miners have been killed and two injured in a rock fall three km underground at a South African gold mine, the owners said on Sunday. lt Rand Mines Properties Ltd>, one of South Africa s big six mining companies, said in a statement that the accident occurred on Saturday morning at the lt East Rand Proprietary Mines Ltd> mine at Boksburg, 25 km east of Johannesburg. A company spokesman could not elaborate on the short statement.
    2
    NORANDA BEGINS SALVAGE OPERATIONS AT MURDOCHVILLE lt Noranda Inc> said it began salvage operations at its Murdochville, Quebec, mine, where a fire last week killed one miner and caused 10 mln dlrs in damage. Another 56 miners were trapped underground for as long as 24 hours before they were brought to safety. Noranda said the cause and full extent of the damage is still unknown but said it does know that the fire destroyed 6,000 feet of conveyor belt. Noranda said work crews have begun securing the ramp leading into the zone where the fire was located. The company said extreme heat from the fire caused severe rock degradation along several ramps and drifts in the mine. Noranda estimated that the securing operation for the zone will not be completed before the end of April. Noranda said the Quebec Health and Safety Commission, the Quebec Provincial Police and Noranda itself are each conducting an investigation into the fire. Production at the mine has been suspended until the investigations are complete. The copper mine and smelter produced 72,000 tons of copper anodes in 1986 and employs 680 people. The smelter continues to operate with available concentrate from stockpiled supplies, Noranda said. Reuter
    3
    NORTHGATE QUEBEC GOLD WORKERS END STRIKE Northgate Exploration Ltd said hourly paid workers at its two Chibougamau, Quebec mines voted on the weekend to accept a new three year contract offer and returned to work today after a one month strike. It said the workers, represented by United Steelworkers of America, would receive a 1.21 dlr an hour pay raise over the life of the new contract and improved benefits. Northgate, which produced 23,400 ounces of gold in first quarter, said that while the strike slowed production, We are still looking forward to a very satisfactory performance. The Chibougamau mines produced 81,500 ounces of gold last year.
    ....
    13
    NORANDA BRUNSWICK MINERS VOTE MONDAY ON CONTRACT Noranda Inc said 1,100 unionized workers at its 63 pct owned Brunswick Mining and Smelter Corp lead zinc mine in New Brunswick would start voting Monday on a tentative contract pact. Company official Andre Fortier said We are hopeful that we can settle without any kind of work interruption. Fortier added that Brunswick s estimated 500 unionized smelter workers were currently meeting about a Noranda contract proposal and would probably vote next week. The mine s contract expires July 1 and the smelter s on July 21. The Brunswick mine produced 413,800 tonnes of zinc and 206,000 tonnes of lead last year at a recovery rate of 70.5 pct zinc and 55.6 pct lead. Concentrates produced were 238,000 tonnes of zinc and 81,000 tonnes of lead.
    14
    COMINCO lt CLT> SETS TENTATIVE TALKS ON STRIKE Cominco Ltd said it set tentative talks with three striking union locals that rejected on Saturday a three year contract offer at Cominco s Trail and Kimberley, British Columbia lead zinc operations. The locals, part of United Steelworkers of America, represent 2,600 production and maintenance workers. No date has been set for the talks, the spokesman replied to a query. The spokesman said talks were still ongoing with the two other striking locals, representing 600 office and technical workers. Production at Trail and Kimberley has been shut down since the strike started May 9. Each of the five locals has a separate contract that expired April 30, but the main issues are similar. The Trail smelter produced 240,000 long tons of zinc and 110,000 long tons of lead last year, while the Sullivan mine at Kimberley produced 2.2 mln long tons of ore last year, most for processing at Trail. Revenues from Trail s smelter totaled 356 mln Canadian dlrs in 1986.


    Not Accepted texts:
    15
    VESSEL LOST IN PACIFIC WAS CARRYING LEAD The 37,635 deadweight tonnes bulk carrier Cumberlande, which sank in the South Pacific last Friday, was carrying a cargo which included lead as well as magnesium ore, a Lloyds Shipping Intelligence spokesman said. He was unable to confirm the tonnages involved. Trade reports circulating the London Metal Exchange said the vessel, en route to New Orleans from Newcastle, New South Wales, had been carrying 10,000 tonnes of lead concentrates. Traders said this pushed lead prices higher in early morning trading as the market is currently sensitive to any fundamental news due to its finely balanced supply demand position and low stocks. Trade sources said that 10,000 tonnes of lead concentrates could convert to around 5,000 tonnes of metal, although this depended on the quality of the concentrates. A loss of this size could cause a gap in the supply pipeline, particularly in North America, they noted. Supplies there have been very tight this year and there is a strike at one major producer, Cominco, and labour talks currently being held at another, Noranda subsidiary Brunswick Mining and Smelting Ltd.
    16
    LTV lt QLTV> TO NEGOTIATE WITH STEELWORKERS LTV Corp s LTV Steel Corp said it agreed to resume negotiations with the United Steelworkers of America at the local plant levels, to discuss those provisions of its proposal that require local implementation. The local steelworker union narrowly rejected a tentative agreement with the company on May 14, it said. LTV also said it agreed to reopen its offer contained in the tentative agreement reached with the union s negotiating committee as part of a plan to resolve problems through local discussions.


    As you can see, the texts that were not accepted are not directly related to mining, which is what our query asks about.


Thanks for reading, please feel free to comment and ask questions if anything is unclear.

Wednesday, April 7, 2021

Weird aspects of ARIMA - how to increase the accuracy of predictions by exogenous data

ARIMA models are commonly used to predict time series. Most importantly, if the ARIMA model is properly chosen, the prediction error is often so small that finding better predictions with more sophisticated methods is a very difficult task. Since this is not an introduction to ARIMA models, I replace the typical introduction with a link that describes the method much better: "Forecasting: Principles and Practice" (2nd ed) by Rob J Hyndman and George Athanasopoulos (https://otexts.com/fpp2/).
One of the variants of ARIMA models is a version using exogenous data, to which this note is dedicated.
It is not widely known that this version of ARIMA models depends strongly on the factor by which we multiply the exogenous data. Generalizing, we can say that the larger the factor, the smaller the prediction error of the model.

To begin with, let us start with a heuristic proof that the factor determining the ratio between the exogenous data and the target can influence the prediction error of the algorithm.

For simplicity, let us omit the part of the time-series description which in ARIMA models is responsible for the differencing, i.e. we assume that the parameter $d=0$. In other words, we will use the formulation of the ARMAX model.
Then a given time series X, in ARMAX, can be expressed generally as: \begin{equation} \label{eq1} X_{t}= c + \epsilon_{t} + AR_{t} + MA_{t} + exog_{t} \end{equation} where $c$: constant, $\epsilon_{t}$: white noise, $AR_{t}$: autoregressive part, $MA_{t}$: moving-average part, $exog_{t}$: exogenous variable(s).
Now, let's define the exogenous part $exog_{t}$ as: \begin{equation} \label{eq2} exog_{t} = X_{t} \alpha + \gamma_{t} \end{equation} where $\alpha$ is some proportionality factor and $\gamma_{t}$ is white noise.
Therefore, introducing eq. \ref{eq2} to eq. \ref{eq1} we will get: \begin{equation} \label{eq3} X_{t} = c + \epsilon_{t} + AR_{t} + MA_{t} + X_{t} \alpha + \gamma_{t} \end{equation} And after rearranging some terms \begin{equation} \label{eq4} X_{t}\left(1-\alpha\right) = c + \left(\epsilon_{t} + \gamma_{t}\right) + AR_{t} + MA_{t} \end{equation} But the part $AR_{t}$ can be written as $AR_{t} = \sum_{i}^{p} \phi_{i} X_{t-i}$ and similarly the $MA_{t}$ component $MA_{t} = \langle X \rangle + \beta_{t} + \sum_{i}^{q} \theta_{i} \beta_{t-i}$ with: $\beta_{n}$ as a white noise, $ \langle X \rangle $ - the expectation value of the $X$, $\theta_{i}$ - parameters of the model.
With the above in mind, the eq \ref{eq4} becomes : \begin{equation} \label{eq5} X_{t} = \frac{c + \epsilon_{t} + \gamma_{t} + \alpha \langle X \rangle}{1-\alpha} + \sum_{i} \hat{\phi}_{i} X_{t-i} + \langle X \rangle + \hat{\beta}_{t} + \sum_{i} \theta_{i} \hat{\beta}_{t-i} \end{equation} where I introduced notation: $\hat{\beta}_{t-i} = \frac{\beta_{t-i}}{1-\alpha}$, $\hat{\phi}_{i} = \frac{\phi_{i}}{1-\alpha}$ and $\hat{\theta}_{i} = \frac{\theta_{i}}{1-\alpha}$.
Using the definitions of $AR_{t}$ and $MA_{t}$ we can rewrite eq. \ref{eq5} into the final form: \begin{equation} \label{eq6} X_{t} = \frac{c + \epsilon_{t} + \gamma_{t} + \alpha \langle X \rangle}{1-\alpha} + \hat{AR}_{t} + \hat{MA}_{t} \end{equation} where the components $\hat{AR}_{t}$ and $\hat{MA}_{t}$ correspond to the definitions of $AR_{t}$ and $MA_{t}$ but with the $\hat{\phi}$, $\hat{\theta}$ and $\hat{\beta}$ coefficients.
The first component of equation \ref{eq6} is the most interesting!
In the case of ARIMA without exogenous data, the forecasting error is determined by $\epsilon_{t}$. Now this error is replaced by the expression \begin{equation} \label{eq7} \epsilon_{t} \longrightarrow \frac{\epsilon_{t} + \gamma_{t}}{1-\alpha} \end{equation}
This brings us to the final conclusions:
  1. using exogenous variables that are highly correlated ($\alpha \approx 1.$) or anti-correlated ($\alpha \approx -1.$) with the target is equivalent to a model without exogenous variables (but with changed model parameters).
  2. using exogenous data scaled by the ratio $\frac{\text{exogenous data}}{\text{target}}$ allows for a significant modification of the final prediction error of the model. The error is now scaled by the factor $\frac{1}{1-\alpha}$. So
    1. we have the error explosions for $\left|\alpha\right| \approx 1$,
    2. for $\left|\alpha\right| < 1 $: error with exog data $>$ error without exogenous variable,
    3. for $\left|\alpha\right| > 1 $: error with exog data $<$ error without exogenous variable.
The above derivation was made for the simplified case without the differencing terms (i.e. for $d,D=0$). I leave it to the readers to generalize the above formalism to a model with differencing.


The next step is a verification of these hypotheses in practice. A practical example of the implementation of the discussed hypothesis is available in the form of a jupyter-notebook script: https://github.com/Lobodzinski/ARIMA_with_exogenous_data.
Here, I just present the dependence of the MAPE error as a function of the exog data factor (on a log10 scale). As you can see, by using an appropriate value of the factor we are able to reduce the prediction error by almost half! The error reaches its minimum at a factor value of 4000000 and then increases again. The increasing part after the minimum is not directly explained by the theoretical argument above. I will try to explain it in the next part of the article.
All other details are available in the code: https://github.com/Lobodzinski/ARIMA_with_exogenous_data.
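For readers who prefer not to open the notebook, a minimal, self-contained sketch of such an experiment could look as follows (synthetic data, statsmodels assumed; it only illustrates how to scan the scaling factor, and the actual numbers will differ from the figure above):

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 300
target = 100 + np.cumsum(rng.normal(size=n))          # synthetic target series
exog = 0.8 * target + rng.normal(scale=2.0, size=n)   # correlated exogenous series

train, test = slice(0, 250), slice(250, n)
mape_by_factor = {}
for factor in [1, 1e2, 1e4, 1e6]:
    scaled = (exog * factor).reshape(-1, 1)
    fit = SARIMAX(target[train], exog=scaled[train], order=(1, 0, 1)).fit(disp=False)
    forecast = fit.forecast(steps=n - 250, exog=scaled[test])
    mape_by_factor[factor] = np.mean(np.abs((target[test] - forecast) / target[test])) * 100

print(mape_by_factor)   # MAPE [%] as a function of the exogenous scaling factor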


Thank you for reading !

Wednesday, June 22, 2016

Sell - buy signals for financial instruments - a multivariate survival analysis approach (in R).

Survival analysis can be understood as a kind of forecasting method created for studying how long an analyzed subject will exist. In a survival framework we calculate the survival probability of a subject at particular points in time.
So it is a natural choice to use survival methods for forecasting the financial market, especially for predicting the future price behaviour of a given financial instrument and, as a consequence, for generating buy/sell signals. Searching for examples of similar analyses, I have found only a single work addressing the subject [1]. From this publication I adopted the definition of the daily revenue given below.

In this note I would like to show how to use survival analysis to generate buy/sell signals for a chosen financial product. As data I use Brent Oil futures, however the method can be adapted to other instruments as well.

The analysis is presented in R.
As input I use historical data of Brent Oil futures available on the portal stooq.pl. The data contain several features such as Open and Close prices, Volume, highest (High) and lowest (Low) prices and Open Interest (OpenInt). Operating on a daily basis, my goal is to predict whether to expect an increase or a decrease of the Close value on the next day.
Let me begin with loading the libraries and preparing the data.
set.seed(1)
library(ggplot2)
library(lubridate)
library(readr)
library(MASS)
library(survival)
library(xts)
library(rms)

# reading the data:
myDir <- "/analysis/forecasting/data_stooq.pl/"
myFile <- "sc_f_d_2015_2016.csv" 
input <- NULL
input <- read.csv(paste(myDir,myFile,sep=""),header=TRUE)

names(input)
[1] "Date"          "Open"          "High"          "Low"           
[5]"Close"         "Volume"        "OpenInt"
In a next step I will introduce new features:
  1. $corp = Close - Open$ : it corresponds to the body (corp) of the candlestick,
    input$corp <- 0 
    input$corp <- input$Close - input$Open
    
  2. Exponential Moving Average - $EMAV$. For this purpose I use functions for calculation of the Volume Weighted Exponential Moving Average (see volume-weighted-exponential-moving-average)
    VEMA.num <- function(x, volumes, ratio) {
            ret <- c()
            s <- 0
            for(i in 1:length(x)) { 
                s <- ratio * s + (1-ratio) * x[i] * volumes[i];
                ret <- c(ret, s); 
            }
            ret
    }
    
    VEMA.den <- function(volumes, ratio) {
            # denominator of the volume-weighted EMA
            ret <- c()
            s <- 0
            for(i in 1:length(volumes)) { 
                s <- ratio * s + (1-ratio) * volumes[i]; 
                ret <- c(ret, s); 
            }
            ret
    }
    
    VEMA <- function(x, volumes, n = 10, wilder = F, ratio = NULL, ...)
    {
            x <- try.xts(x, error = as.matrix)
            if (n < 1 || n > NROW(x))
                    stop("Invalid 'n'")
            if (any(nNonNA <- n > colSums(!is.na(x))))
                    stop("n > number of non-NA values in column(s) ", 
                    paste(which(nNonNA),collapse = ", "))
            x.na <- xts:::naCheck(x, n)
            if (missing(n) & !missing(ratio)) {
                    n <- trunc(2/ratio - 1)
            }    
            if (is.null(ratio)) {
                    if (wilder)
                            ratio <- 1/n
                    else ratio <- 2/(n + 1)
            }
            
            foo <- cbind(x[,1], volumes, VEMA.num(as.numeric(x[,1]), volumes, ratio), 
                         VEMA.den(volumes, ratio))
            (foo[,3] / foo[,4]) -> ma
            
            ma <- reclass(ma, x)
            if (!is.null(dim(ma))) {
                    colnames(ma) <- paste(colnames(x), "VEMA", n, sep = ".")
            }
            return(ma)
    }
    
    x <- ts(input$Close)
    input$EMAV <- 0
    Vol <- rep(1, length(input[,1]))   
    input$EMAV <- VEMA(input$Close, Vol,ratio=0.95)
    plot(input$Close,type="l");lines(input$EMAV,col="red")
    
  3. Daily Revenue - $DailyRevenue$ . The feature is calculated as a percentage of $Close$ values between present and previous days [1]
    $DailyRevenue = \frac{( Close[k] - Close[k-1] )}{Close[k-1]} \cdot 100$
    input$DailyRevenue <- 0
    input$DailyRevenue[1] <- 0
    for (k in (2:(length(input$Close)))) {
         input$DailyRevenue[k] <- (input$Close[k]-input$Close[k-1])*100/(input$Close[k-1])
    }
    
  4. Censors:
    In this case I consider two series of censored data corresponding to increasing and decreasing values of $DailyRevenue$. An increasing or decreasing event is detected by checking whether the daily revenue is larger or smaller than a given threshold value. The threshold value can be used as a parameter of the model, so the value of the $threshold$ could be different from $0$ ($0$ is a good initial choice here). The censoring of the data used in the model is as follows:
    1. censors specific for increasing daily revenue, $censorUp$: if the daily revenue is higher than $threshold$, then $censorUp = 0$ (a censored point in time); if the daily revenue is smaller than or equal to $threshold$, then $censorUp = 1$.
      Threshold <- 0
      input$censorUp <- 0
      input$censorUp[input$DailyRevenue <= Threshold] <- 1
      
    2. censors specific for decreasing daily revenue, $censorDown$: the events are opposite to the $censorUp$ feature. If the daily revenue is larger than $threshold$, then $censorDown = 1$; if the daily revenue is smaller than or equal to $threshold$, then $censorDown = 0$.
      input$censorDown <- 0
      input$censorDown[input$DailyRevenue > Threshold] <- 1
      
  5. Time to events: measured as the time distance between the last local extremum of the $DailyRevenue$ and an event defined by the censoring series (censored or NOT censored).
    1. $TimeToEventUp$:
      it is a time measured from a last local minimum of the $DailyRevenue$ to the $censorUp = 1$ events or the time distance between the last local maximum of the $DailyRevenue$ to the $censorUp = 0$.
      # initialize the new column before filling it element by element
      input$TimeToEventUp <- 0
      locmylastmin <- 1
      locmylastmax <- 1
      
      for (k in (2:(dim(input)[1]))) {
           tmp <- input$DailyRevenue[1:k]
           if (input$censorUp[k] == 0) {
                 # find all local minima:
                 locmymin <- which(diff(sign(diff(tmp)))==+2)+1
                 if (length(locmymin) == 0) {locmymin <- which.min(tmp)}
                 locmylastmin <- locmymin[length(locmymin)]
                 dateMin <- input[locmylastmin,1]
                 input$TimeToEventUp[k] <- as.integer(as.Date(input[k,1])-
                                                      as.Date(dateMin))
           }
           if (input$censorUp[k] == 1) {
                 # find all local maxima:
                 locmymax <- which(diff(sign(diff(tmp)))==-2)+1
                 if (length(locmymax)==0) {locmymax <- which.max(tmp)}
                 locmylastmax <- locmymax[length(locmymax)]
                 dateMax <- input[locmylastmax,1]
                 input$TimeToEventUp[k] <- as.integer(as.Date(input[k,1])-
                                                      as.Date(dateMax))
           }
      }
      
    2. $TimeToEventDown$:
      is calculated in an opposite way to the feature $TimeToEventUp$. Details of determination of the $TimeToEventDown$ feature is shown on Fig. 1 .
      # initialize the new column before filling it element by element
      input$TimeToEventDown <- 0
      locmylastmin <- 1
      locmylastmax <- 1
      
      for (k in (2:(dim(input)[1]))) {
           tmp <- input$DailyRevenue[1:k]
           if (input$censorDown[k] == 1) {
                 # find all local minima:
                 locmymin <- which(diff(sign(diff(tmp)))==+2)+1
                 if (length(locmymin) == 0) {locmymin <- which.min(tmp)}
                 locmylastmin <- locmymin[length(locmymin)]
                 dateMin <- input[locmylastmin,1]
                 input$TimeToEventDown[k] <- as.integer(as.Date(input[k,1])-
                                                        as.Date(dateMin))
           }
           if (input$censorDown[k] == 0) {
                 # find all local maxima:
                 locmymax <- which(diff(sign(diff(tmp)))==-2)+1
                 if (length(locmymax)==0) {locmymax <- which.max(tmp)}
                 locmylastmax <- locmymax[length(locmymax)]
                 dateMax <- input[locmylastmax,1]
                 input$TimeToEventDown[k] <- as.integer(as.Date(input[k,1])-
                                                        as.Date(dateMax))
           }
      }
      
      Definition of the variable $TimeToEventDown$ demonstrated on a part of the $DailyRevenue$ time series. It is a sequence of time distances $T_{Down}^{censored}$ and $T_{Down}^{event}$, where the time $T_{Down}^{censored}$ is a time measured from a last local maximum of the $DailyRevenue$ to the $censorDown = 0$ and $T_{Down}^{event}$ - is the time distance between the last local $DailyRevenue$ minimum to the $censorDown = 1$.

In the next step I build the signal creator.
The basic part is realized in the standard way, by calculating the survival probabilities corresponding to both series, $censorUp$ and $censorDown$. The prediction is calculated by the function survfit without a newdata object. In such a case the predicted survival probabilities are calculated on the basis of the mean values of all covariates included in the formula used to create the Cox proportional hazards regression fit. In our case I use all available covariates in the formula, except the $Volume$ feature, which is not always a reliable number.
formula = Surv(TimeToEventDown, censorDown) ~  
               High + Low + Close + corp + EMAV + DailyRevenue + OpenInt
The code of the function $SurvivalProb$:
SurvivalProb <- function(myinput,What="Down") {        
    train <- myinput
    if (What == "Down") {
        coxphCorp <- coxph(Surv(TimeToEventDown, censorDown) ~  
                           High + Low + Close + corp + EMAV + DailyRevenue + OpenInt, 
                           data=train, method="breslow")
    }
    if (What == "Up") {
        coxphCorp <- coxph(Surv(TimeToEventUp, censorUp) ~  
                           High + Low + Close + corp + EMAV + DailyRevenue + OpenInt, 
                           data=train, method="breslow")
    }
    # predicted survival curve, based on the mean values of the covariates
    mysurffit <- survfit(coxphCorp, se.fit = F, conf.int = F)

    # survival probability at the first event time
    tmp <- summary(mysurffit)
    myprob <- tmp$surv[1]

    myprob
}
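For instance, assuming the prepared data frame input from the previous sections (and the survival package already loaded), a single call could look like this; the 50-row window is an arbitrary choice of mine:
# hypothetical usage: probabilities estimated on the last 50 rows of the data
lastRows <- input[(nrow(input) - 49):nrow(input), ]
probUp   <- SurvivalProb(lastRows, "Up")
probDown <- SurvivalProb(lastRows, "Down")
c(Up = probUp, Down = probDown)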
The function predicting the sell/buy signal ($SignalPredictor$) calculates it as a ratio between the survival probabilities computed for the next day for an increased and a decreased daily revenue, respectively. The ratio is defined as $ratio = \frac{P_{Up}}{P_{Down}} - \left(1+\epsilon\right)$, where $P_{Up}$, the survival probability that the daily revenue will be higher on the next day (denoted as $ProbUp$ in the code), and $P_{Down}$, the survival probability that the daily revenue will be smaller on the next day ($ProbDown$ in the code), are calculated by the function $SurvivalProb$. The constant $1+\epsilon$ corresponds to $ratioValue$ inside the code.
The code of the function $SignalPredictor$ :
SignalPredictor <- function(train,ratioValue) { 

    # Survival-Probability-that-daily-revenue-will-be-smaller-next-day  
    ProbDown <- SurvivalProb(train,"Down")  
           
    # Survival-Probability-that-daily-revenue-will-be-higher-next-day
    ProbUp <- SurvivalProb(train,"Up")      
        
    # signal: + <- buy; - <- sell
    signal <- ((ProbUp/ProbDown)-ratioValue)
    signal 
}
As a practical test of how well the model creates buy/sell signals, I run the signal predictor in a loop over the available dates in the set of historical data of the Brent Oil Future. In the code below the $Dist$ variable denotes the size of the training sample. For a given row $k$ the model uses $Dist$ rows of the data as a training sample and makes a prediction for the day $k + Dist + 1$ ($SignalPredictor$). In the next iteration the model predicts the signal for the next day, $k + Dist + 2$, using the data of the previous day $k + Dist + 1$ as part of the training sample.
# daily step:
Dist <- 50 
ratioValue <- 1.02
Bias <- 0 
startInd <- Dist
results <- NULL        
results$Signal <- 0

for (k in seq((startInd+Bias+1),length(input[,1]),by=1)) {
     print(as.Date(input[k,1]))           # current prediction date
     train <- input[(k-Dist):k,]          # training window: the last Dist+1 rows up to day k
     train <- Optimize(train,ratioValue)  # shrink the window until its last row is predicted correctly

     ratio <- SignalPredictor(train,ratioValue)        
     results$Signal <- c(results$Signal,ratio)
}
Inside the loop I use another function, $Optimize$. The $Optimize$ function takes the training sample and selects its last row as test data. It then searches for a size of the training sample for which the model correctly predicts the sign of the $DailyRevenue$ feature of the test row (i.e. the sell/buy signal corresponds to the sign of the $DailyRevenue$ variable). The resulting size of the training sample can be smaller than or equal to the initial training size. The $Optimize$ function:
Optimize <- function(inputData,ratioValue) {
    L <- dim(inputData)[1]
    mydist <- 1
    mycheck <- -1
        
    repeat {
            mytest <- inputData[L,]
            mytrain <- inputData[mydist:(L-1),]
                
            probDown <- SurvivalProb(mytrain,"Down")
            probUp <- SurvivalProb(mytrain,"Up")
                
            ratio <- (probUp/probDown)-ratioValue

            if ( (sign(ratio)*mytest$DailyRevenue >= 0) ) {mycheck <- 1}    
            if (mycheck == 1) {break} 
            # fallback: if the window shrinks to about 10 rows without success, return the full sample
            if (mydist >= (L-10)) {
                 mycheck <- 1;mydist <- 2;break
            }
            mydist <- mydist + 1
           }
    mydist <- mydist - 1   
    inputData[mydist:L,]
}
The results (sell/buy signals) are shown in the next plot (Fig. 2) for the initial parameter values:
Dist <- 50 
ratioValue <- 1.02
Threshold <- 0
and for the date period: 2015-03-13 - 2016-06-16
Fig. 2
In Fig. 2 the green parts correspond to positive values of the ratio, where the calculated probability of an increasing $Close$ price is larger than that of a decreasing one. The red parts show the opposite behaviour.
The best buy/sell signals are generated when the ratio changes sign:
from red to green: buy; from green to red: sell.
The black curve presents the $Close$ values, the blue line represents the values of $EMAV$.
By manipulating the above parameters ($Dist$, $ratioValue$, $Threshold$ and $EMAV$) one can create different strategies. Using an initial capital equal to the $Close$ value at the initial date of the game, the total income generated by the buy and sell signals for the above period and parameters is $16.47$ USD, which corresponds to a profit of $41.1$ % over the considered period of time.
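The exact bookkeeping behind this number is not shown here; a minimal sketch of one possible accounting, assuming we buy at the $Close$ price when the signal turns positive and sell when it turns negative (the function $BacktestSignals$ and its details are my own assumptions, not the author's original backtest), could look like this:
BacktestSignals <- function(close, signal) {
    stopifnot(length(close) == length(signal))
    side  <- sign(signal)
    flips <- which(diff(side) != 0) + 1      # indices where the signal changes sign
    buyPrice <- NA
    profit   <- 0
    for (i in flips) {
        if (side[i] > 0) {                   # red -> green : buy
            buyPrice <- close[i]
        } else if (!is.na(buyPrice)) {       # green -> red : sell
            profit   <- profit + (close[i] - buyPrice)
            buyPrice <- NA
        }
    }
    list(profitUSD = profit, profitPct = 100 * profit / close[1])
}
# hypothetical usage on the rows covered by the loop above:
# prices <- input$Close[(startInd + Bias + 1):nrow(input)]
# BacktestSignals(prices, results$Signal[-1])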

Conclusions:


If you place capital in a financial market, it is always better to use several independent models to predict the behaviour of the financial instruments. If most of the methods come to the same conclusions, one can decide to take action: to sell or to buy the instrument(s).
The presented model can be treated as an additional source of such knowledge. From a technical point of view the script is easy to adapt to any financial data and has a lot of room for improvement. The most important parts of the algorithm are the definitions of the censors ($censorUp$, $censorDown$) and of the "time to event" features ($TimeToEventUp$, $TimeToEventDown$). I would appreciate any hints concerning improvement of the algorithm.

I hope this note can serve as an introduction to survival analysis in financial markets. Comments (and links) are very welcome.


Thanks for reading !
Bogdan Lobodzinski
Data Analytics For All Consulting Service

[1] Guangliang Gao, Zhan Bu, Lingbo Liu, Jie Cao, Zhiang Wu: A survival analysis method for stock market prediction. BESC 2015: 116-122

Code for generating Fig. 2:
shift <- 47
factorSig <- 21
Sig <- sign(results$Signal)
L<-length(Sig)
Sigdf <- data.frame(x=input[(startInd+Bias):length(input[,1]),1],Sig=Sig*factorSig) 
JustLine <- data.frame(xTime=input[(startInd+Bias):length(input[,1]),1],Line=0*c(1:L)+shift) 
titleStr <- paste("Brent Oil Future: period from ",as.Date(input[(startInd+Bias+1),1]),
                  " to", as.Date(input[(length(input[,1])),1]))

ggplot(Sigdf, aes(x=as.Date(x), y=Sig+shift)) +
     geom_line() + 
     geom_ribbon(aes(ymin=shift, ymax=ifelse(Sig+shift>shift,Sig+shift,shift)),
          fill = "green", alpha = 0.5,colour = "green") +
     geom_ribbon(aes(ymin=ifelse(Sig+shift < shift,Sig+shift,shift),ymax=shift),
          fill = "red", alpha = 0.5,colour="red") +        
     geom_line(data = JustLine, aes(x=as.Date(xTime), y=Line)) + 
     geom_line(data = input[(startInd+Bias+1):length(input[,1]),], 
          aes(x=as.Date(input[(startInd+Bias+1):length(input[,1]),1]), y=Close)) +
     geom_line(data = input[(startInd+Bias+1):length(input[,1]),], 
          aes(x=as.Date(input[(startInd+Bias+1):length(input[,1]),1]), y=EMAV, colour=4)) +        
     ggtitle(titleStr) +
     labs(x="Date", y="Price in USD") +guides(colour=FALSE)

Friday, May 6, 2016

Mainstream media and shaping social opinions

Inspired by the question "What can we learn about social behaviour from a recommender database?", which arose from my previous note Recommender Systems and q-value Potts model, I spent some time considering the following question:
The question:
how can one control opinions in a social group by creating an environment with which the group can interact?

Let us start by defining the toy model.
In a given social group we have a set of opinions $K$ distributed among all group members. The group communicates with the external world through an interaction with some set of opinions stated in an external environment. We assume that the environment is too big to be manipulated by the social group, but the environment can somehow modify the distribution of opinions inside the group.
For better transparency we consider 2 different opinions, described as $A$ and $B$. Opinions $A$ and $B$ are defined as the states "+1" and "-1" respectively. Members are distributed among two subgroups according to the shared opinion and interact with the same environment but with different strengths, $\gamma_{A}$ and $\gamma_{B}$ respectively. The $\gamma_{A|B}$ coefficients correspond to the level of acceptance by the group members of the noise created by the environment. In other words, the coupling constants $\gamma_{A|B}$ are weights which characterize how much a given group relies on the opinions supported by the environment. The level of belief in a group is proportional to the value of the $\gamma_{i}$ couplings. The scheme is depicted in Fig. 1.


Fig. 1. The $M$ environmental levels are coupled to 2 separate systems $A$ and $B$ with given coupling constants $\gamma_{A}$ and $\gamma_{B}$. For simplicity each group $A$ and $B$ has the same number of levels $N$.


We assume that the social model fulfills the principle of least action, so the entire system will tend to minimize the total energy. Therefore the Hamiltonian approach provides a natural frame for our analysis. The model can be described by the following Hamiltonian: \begin{equation} \label{Hamiltonian0} H = H_{A} + H_{B} + H_{E} + H_{Int} \end{equation} where
\begin{equation} \label{Hamiltonian1} H_{A|B} = \sum_{k_{A|B} = 1}^{N_{A|B}} E_{A|B} \left| A|B_{k_{A|B}} \right> \left< A|B_{k_{A|B}} \right| \end{equation} describes the states of the subgroups $A$ and $B$ with energies $E_{A}$ ($E_{B}$) and $N_{A}$ ($N_{B}$) discrete states $\left| A_{k_{A}} \right>$ ($\left| B_{k_{B}} \right>$). \begin{equation} \label{Hamiltonian2} H_{E} = \sum_{n = 1}^{M} E_{n} \left| E_{n} \right> \left< E_{n} \right| \end{equation} is the basic energy of the environment with $M$ discrete states $\left| E_{n} \right>$ labeled by the index $n$. The interaction between the groups and the environment has the form: \begin{equation} \label{Hamiltonian3} H_{Int} = \sum_{n=1}^{M} \left[ \left( \sqrt{ \gamma_{A} } \sum_{ k_{A} = 1}^{ N_{A} } VA_{k_{A}}^{n} \left| A_{k_{A}} \right> \left< E_{n} \right| + h.c. \right) + \left( \sqrt{ \gamma_{B} } \sum_{ k_{B} = 1}^{ N_{B} } VB_{k_{B}}^{n} \left| B_{k_{B}} \right> \left< E_{n} \right| + h.c. \right) \right] \end{equation} where $VA$ and $VB$ are matrix elements describing the couplings between the group states $\left| A \right>$, $\left| B \right>$ and the environmental levels $\left< E_{n} \right|$.
Because the environment cannot be modified by either group, we can use the Markovian approximation and eliminate the environment states from the Hamiltonian above. The result of this operation can be written in the following form: \begin{equation} \label{Hamiltonian4} H_{eff} = H_{A} + H_{B} - i V \cdot V^{+} \end{equation} where \begin{equation} \label{Hamiltonian5} V = \left( \begin{array}{c} \sqrt{ \gamma_{A} } \cdot VA \\ \sqrt{ \gamma_{B} } \cdot VB \end{array} \right) \end{equation} creates the dissipative part of the effective Hamiltonian $H_{eff}$, which is an $N \times N$ dimensional matrix, while the matrix $V$ has dimension $N \times M$. The analysis of the dynamics of the system can be reduced to the determination of the eigenvalues of the effective Hamiltonian $H_{eff}$, $\Lambda = x - iy$, which are complex. The imaginary part describes how fast (time $\tau$) a given eigenvalue dissipates in the system: $\tau \approx \frac{1}{y}$.
In other words, $\tau$ describes the lifetime of a given opinion, whose value is given by the real part of the eigenvalue, $x$. Let us stress that we allow a continuum of opinions to be created, not only those defined in the assumptions of the problem, $E_{A|B}$. Why can the eigenvalues of the Hamiltonian be used as a determinant of the social opinion distribution?
A standard model of opinion evolution is described (in the simplest case) by a set of linear equations: \begin{equation} \label{Hamiltonian6} \vec{x} \left( t + 1 \right) = W \vec{x} \left( t \right) \end{equation} where $W$ is some $N\times N$ matrix describing the weighted information exchange between participants of the social network and the vector $\vec{x}\left( t \right)$ is the opinion profile in the network at time $t$. The correspondence between the opinion profile $\vec{x}\left( t \right)$ and our approach can be established by calculating the power spectrum of $\vec{x}\left( t \right)$: the positions of the maxima in the power spectrum can be understood as equivalent to the real parts of the eigenvalues of the Hamiltonian $H_{eff}$, while the time scale of change of the profile $\vec{x}\left( t \right)$ corresponds to the imaginary parts of the eigenvalues (more precisely, to the inverse of the imaginary part).
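To illustrate this correspondence, here is a small, self-contained toy example (my own construction, not taken from the note): we iterate $\vec{x}\left(t+1\right) = W \vec{x}\left(t\right)$ for a random matrix $W$ with spectral radius just below 1; the peaks of the power spectrum of one component of the trajectory sit at the arguments of the eigenvalues of $W$ (the analogue of the real parts), while the overall decay of the trajectory reflects their moduli (the analogue of the imaginary parts).
# toy illustration of x(t+1) = W x(t), not the author's code
set.seed(2)
N <- 20
W <- matrix(rnorm(N * N), N, N)
W <- 0.99 * W / max(Mod(eigen(W, only.values = TRUE)$values))  # spectral radius 0.99

x    <- rnorm(N)                       # initial opinion profile
Tmax <- 512
traj <- numeric(Tmax)
for (t in 1:Tmax) {
    x <- as.vector(W %*% x)
    traj[t] <- x[1]                    # follow the opinion of one network member
}

# power spectrum of the trajectory; dashed lines mark the eigenvalue arguments of W
spec  <- Mod(fft(traj))^2
freqs <- (0:(Tmax - 1)) / Tmax
plot(freqs[1:(Tmax / 2)], spec[1:(Tmax / 2)], type = "l",
     xlab = "frequency", ylab = "power")
abline(v = abs(Arg(eigen(W, only.values = TRUE)$values)) / (2 * pi),
       col = "red", lty = 2)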
For any numerical calculations we have to define values $M$, $N$, $E_{A}$, $E_{B}$, $N_{A}$, $N_{B}$, $\gamma_{A}$, $\gamma_{B}$. The matrices $VA$ and $VB$ are calculated as Gaussian unitary ensembles (GUE random matrices).
Below we present a variety of plots of the numerically determined eigenvalues of the Hamiltonian.
On all plots the dots are the result of the numerical diagonalization of the Hamiltonian $H_{eff}$. The simulations have been done for 30 GUE matrices $V$ with $N = 100$ and $M = 60$. The remaining parameter values used for the simulations are listed below (a minimal code sketch of the eigenvalue calculation follows the list):
  1. energies: $E_{A} = -1/2$, $E_{B} = 1/2$,
  2. number of states: for state $A$: $N_{A} = 50$, for state $B$: $N_{B}=50$,
  3. interaction couplings are described for each plot separately.
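The sketch below is my own minimal reconstruction of this calculation, assuming complex Gaussian coupling matrices as a stand-in for the GUE construction and a single realization instead of the 30 averaged in the figures:
# minimal sketch: eigenvalues of H_eff = H_A + H_B - i V V^+ (single realization)
set.seed(1)
Na <- 50; Nb <- 50; M <- 60                # numbers of group and environment states
N  <- Na + Nb
EA <- -0.5; EB <- 0.5                      # group energies E_A, E_B
gammaA <- 0.01; gammaB <- 0.01             # coupling constants gamma_A, gamma_B

H0 <- diag(c(rep(EA, Na), rep(EB, Nb)))    # diagonal part H_A + H_B

# random complex coupling matrices VA (Na x M) and VB (Nb x M)
VA <- matrix(complex(real = rnorm(Na * M), imaginary = rnorm(Na * M)), Na, M)
VB <- matrix(complex(real = rnorm(Nb * M), imaginary = rnorm(Nb * M)), Nb, M)

# stacked coupling matrix V (N x M) weighted by sqrt(gamma)
V <- rbind(sqrt(gammaA) * VA, sqrt(gammaB) * VB)

# effective non-Hermitian Hamiltonian and its complex eigenvalues Lambda = x - i y
Heff   <- H0 - 1i * V %*% Conj(t(V))
Lambda <- eigen(Heff, only.values = TRUE)$values

plot(Re(Lambda), -log10(abs(Im(Lambda))), pch = 16, cex = 0.6,
     xlab = "Re(eigenvalue)  (opinion value)",
     ylab = "-log10|Im(eigenvalue)|  (inverse decay rate)")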

At the beginning, for comparison with existing models, Figs. 2, 3 and 4 show the situation where $\gamma_{A} = \gamma_{B}$ for different values of $\gamma_{B}$. Similar plots can be found in the work [1].
For better visibility we use a logarithmic scale for $y$. In our case $y \leftarrow -\log\left( \left| y \right| \right)$, so the value $y = 3$ on the plots corresponds to $\left| y \right| = 10^{-3}$.


Fig. 2. Simulation for $\gamma_{A} = \gamma_{B} = 0.0025$ (in energy units). The left vertical panel shows the integrated density profile of the widths of the calculated eigenvalues. The upper panel presents the integrated spectral density showing the widths of the energy states.


Fig. 3. Simulation for $\gamma_{A} = \gamma_{B} = 0.01$. The creation of two timescales of the eigenvectors is visible.


Fig. 4. Simulation for $\gamma_{A} = \gamma_{B} = 0.5$.


The next figures (5, 6, 7 and 8) show asymmetric situations where $\gamma_{A} = \gamma_{B}/10$ for different values of $\gamma_{B}$. The detailed values of the other parameters are noted in the figure descriptions.


Fig. 5. The situation with $\gamma_{B} = 0.01$ and $\gamma_{A} = \gamma_{B}/10$.


Fig. 6. The eigenvalue spectrum calculated for $\gamma_{B} = 0.05$ and $\gamma_{A} = \gamma_{B}/10$.


Fig. 7. The spectrum calculated for $\gamma_{B} = 0.1$ and $\gamma_{A} = \gamma_{B}/10$.


Fig. 8. The spectrum calculated for $\gamma_{B} = 0.5$ and $\gamma_{A} = \gamma_{B}/10$.


Conclusions


The landscape presented in all the plots of the eigenvalues of the Hamiltonian $H_{eff}$ is quite straightforward for those familiar with atomic physics, especially the interaction of multi-level atomic transitions with resonant light. We clearly see the existence of so-called coupled and uncoupled combinations of levels (states). Increasing the coupling constants $\gamma_{A|B}$ leads to a grouping of the eigenvalues.
The eigenfunctions in the Hamiltonian can be formed in such a way that some combinations of levels cancel the interaction part with the environment - they are called uncoupled (or trapped) states. The number of such states is $N-M$.
The other eigenfunctions are coupled with the environment's states; such eigenfunctions we call coupled (or un-trapped) states ($M$ decaying states). Populations of trapped states survive a longer time (smaller values of $y$) in comparison to the un-trapped states, where the population is exchanged between coupled states by the interaction term proportional to $\sqrt{\gamma_{A|B}}$.
Let us try to translate this physical picture into a social language.
  1. The coupled (un-trapped) states: configurations of members (eigenfunctions) which change their common opinion (eigenvalue) faster due to the interaction with a noisy environment represented by a different set of environmental opinions. Such an exchange of opinions is proportional to the interaction strength $\gamma_{i}$ ($i=A|B$).
  2. The uncoupled (trapped) states: ideally, configurations of members (eigenfunctions) which create stable combinations of group members. They do not interact with the environment.
If we define opinion power as a parameter proportional to the number of eigenfunctions which share the same range of opinions, we see that under steady-state conditions the opinion created by the trapped states (i.e. weakly coupled to the environment) has a greater power of persuasion than that of the opposite social group characterized by un-trapped member configurations (i.e. strongly coupled to the environment).
A higher level of noise (a larger value of $\gamma_{A|B}$ and different values of the couplings, $\gamma_{A} < \gamma_{B}$) does not lead to a reduction of the significance of unwanted opinions but has just the opposite effect:
it sharpens the polarization of existing opinions and increases the importance of resistance against the environment in a society. Such a situation can easily be identified nowadays. The noisy environment is created by the mainstream media. What one finds in such an environment: an increasing number of different subjects, news focused on accidents, a preference for shorter forms, the propagation of a number of opposing opinions, and many other similar ways of distracting the reader's attention.
Groups of people strongly coupled to such a media stream are not able to form a private opinion and are easy to manipulate. It also means that people in such a chaotic environment will rather migrate to more stable social groups (trapped states), not so strongly dominated by the mainstream news.
The creation of people without strong, private opinions is a goal of the present liberal governments. But, as presented in this note, the chosen way of social control leads to just the opposite behaviour. You can see it around yourself, can't you?
In the model we used the condition that the number of intermediate states in the environment, $M$, is smaller than the number of states in the considered social groups, $N$: $M < N$.
The opposite case leads to a somewhat different picture than the one described in this note, but that is a subject for another post.


Bogdan Lobodzinski
Data Analytics For All Consulting Service

References:

  1. E. Gudowska-Nowak, G. Papp and J. Brickmann, Two-Level System with Noise: Blue's Function Approach, Chem. Phys. 220, 120-135 (1997)