The Scientific Method
https://en.wikipedia.org/wiki/Scientific_method
Horizontal Equations
https://www.youtube.com/watch?v=TBuC3dEeAxU
First Principles
https://en.wikipedia.org/wiki/First_principle
In philosophy and science, a first principle is a basic proposition or assumption that cannot be deduced from any other proposition or assumption. In philosophy, first principles stem from first cause[1] attitudes and are taught by Aristotelians, and nuanced versions of first principles are referred to as postulates by Kantians.[2]
In mathematics and formal logic, first principles are referred to as axioms or postulates. In physics and other sciences, theoretical work is said to be from first principles, or ab initio, if it starts directly at the level of established science and does not make assumptions such as empirical models or parameter fitting. "First principles thinking" consists of decomposing things down to the fundamental axioms of the given arena, then reasoning up by asking which axioms are relevant to the question at hand, cross-referencing the conclusions drawn from the chosen axioms, and making sure those conclusions do not violate any fundamental laws. Physicists include counterintuitive concepts with reiteration.
Fundamental analysis
https://en.wikipedia.org/wiki/Fundamental_analysis
Fundamental analysis is a method of evaluating securities by attempting to measure the intrinsic value of a stock. Fundamental analysts study everything from the overall economy and industry conditions to the financial condition and management of companies. Earnings, expenses, assets, and liabilities are all important characteristics to fundamental analysts.
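As a rough illustration of "measuring intrinsic value", here is a minimal Python sketch of one common fundamental-analysis tool, a discounted cash flow (DCF) model. The cash flows, discount rate, and terminal growth rate are made-up assumptions, and this is only one of many valuation approaches, not the method the article prescribes.

# Minimal sketch: estimate intrinsic value as the present value of projected
# free cash flows plus a Gordon-growth terminal value. All inputs below are
# illustrative assumptions, not data from the article.

def intrinsic_value_dcf(cash_flows, discount_rate, terminal_growth):
    """Present value of projected free cash flows plus a terminal value."""
    pv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows, start=1))
    # Terminal value from the last projected cash flow, discounted back to today.
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv += terminal / (1 + discount_rate) ** len(cash_flows)
    return pv

# Hypothetical company: five years of projected free cash flow (in $ millions).
value = intrinsic_value_dcf([100, 110, 120, 130, 140], discount_rate=0.10, terminal_growth=0.02)
print(f"Estimated intrinsic value: ${value:,.1f}M")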
Technical analysis
https://en.wikipedia.org/wiki/Technical_analysis
Technical analysis is a trading discipline employed to evaluate investments and identify trading opportunities by analyzing statistical trends gathered from trading activity, such as price movement and volume.
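As a toy example of analyzing statistical trends from price movement, the Python sketch below computes simple moving averages over a made-up price series and flags a crossover as bullish or bearish. The prices and window lengths are illustrative assumptions, not a recommended trading rule.

# Minimal sketch of a technical-analysis style signal: a simple moving average
# (SMA) crossover on a made-up price series.

def sma(prices, window):
    """Simple moving average; None until enough data points exist."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window if i >= window - 1 else None
        for i in range(len(prices))
    ]

prices = [10, 11, 12, 11, 13, 14, 13, 15, 16, 15, 17, 18]
fast, slow = sma(prices, 3), sma(prices, 5)

for day, (f, s) in enumerate(zip(fast, slow)):
    if f is not None and s is not None:
        signal = "bullish" if f > s else "bearish"
        print(f"day {day}: fast={f:.2f} slow={s:.2f} -> {signal}")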
Artificial Intelligence
https://en.wikipedia.org/wiki/Artificial_intelligence
AI has become a catchall term for applications that perform complex tasks that once required human input such as communicating with customers online or playing chess. The term is often used interchangeably with its subfields, which include machine learning and deep learning. There are differences, however. For example, machine learning is focused on building systems that learn or improve their performance based on the data they consume. It’s important to note that although all machine learning is AI, not all AI is machine learning.
AI, or artificial intelligence, refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using it), reasoning (using rules to reach approximate or definite conclusions), and self-correction. AI encompasses a variety of approaches and techniques, including machine learning, natural language processing, computer vision, robotics, and more. Its goal is to create systems that can perform tasks that typically require human intelligence, such as understanding natural language, recognizing patterns, making decisions, and solving problems.
Generative
The term "generative" generally refers to something that has the ability to create or produce, particularly in the context of AI and machine learning.
In the realm of AI, a "generative model" is one that is capable of creating new data that is similar to the data it was trained on. These models learn the underlying patterns and structures of the data and can then generate new samples that resemble the original data. Generative models are often used in tasks such as image generation, text generation, and music generation.
In a broader sense, "generative" can refer to anything that has the capacity to generate or create, whether it's ideas, content, or anything else.
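To make the "learn patterns, then generate similar data" idea concrete, here is a deliberately tiny Python sketch: a first-order Markov chain over words. It is a toy stand-in for modern generative models, and the training text is a made-up assumption.

# "Training": count which word follows which in the training text.
# "Generation": sample a new word sequence from the learned transitions.
import random
from collections import defaultdict

training_text = "the cat sat on the mat the dog sat on the rug"
words = training_text.split()

transitions = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

random.seed(0)
word = "the"
generated = [word]
for _ in range(8):
    followers = transitions.get(word)
    if not followers:          # dead end: no observed successor
        break
    word = random.choice(followers)
    generated.append(word)
print(" ".join(generated))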
Deep learning
https://en.wikipedia.org/wiki/Deep_learning
Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised, or unsupervised.
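A deliberately minimal Python sketch of the supervised case: a single artificial neuron trained by gradient descent to learn logical OR. Real deep learning stacks many such units into layers with learned representations; this only illustrates the basic weight-update idea, and the task and learning rate are illustrative assumptions.

# One sigmoid neuron trained by gradient descent on a toy task (logical OR).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: inputs and target outputs for logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1 = w2 = b = 0.0
lr = 0.5
for _ in range(2000):
    for (x1, x2), y in data:
        pred = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of the cross-entropy loss w.r.t. the logit of a sigmoid unit.
        err = pred - y
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b  -= lr * err

for (x1, x2), y in data:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b), 2), "target", y)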
https://en.wikipedia.org/wiki/Machine_learning
Machine learning is a branch of artificial intelligence (AI) and computer science which focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy. Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.
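A minimal Python sketch of "building a model from sample data without being explicitly programmed": a 1-nearest-neighbour classifier that labels a new point by the closest training example. The feature values and labels are made up.

# The classifier is never given explicit rules for each class; it predicts by
# comparing a new point to labelled training examples.
import math

training_data = [
    ((1.0, 1.2), "spam"),
    ((0.9, 1.0), "spam"),
    ((3.0, 3.2), "ham"),
    ((3.1, 2.9), "ham"),
]

def predict(point):
    """Return the label of the closest training example (Euclidean distance)."""
    return min(training_data, key=lambda ex: math.dist(point, ex[0]))[1]

print(predict((1.1, 1.1)))  # -> "spam"
print(predict((2.9, 3.0)))  # -> "ham"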
https://en.wikipedia.org/wiki/Law_of_large_numbers
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value and tends to become closer to the expected value as more trials are performed.
The LLN is important because it guarantees stable long-term results for the averages of some random events.
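A quick Python simulation of the law in action, assuming a fair coin with expected value 0.5: as the number of flips grows, the running average drifts toward 0.5.

# Average of 0/1 coin flips for increasingly many trials.
import random

random.seed(42)
for n in (10, 100, 1_000, 10_000, 100_000):
    flips = [random.randint(0, 1) for _ in range(n)]
    print(f"{n:>7} flips: average = {sum(flips) / n:.4f}")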
https://en.wikipedia.org/wiki/Outlier
An outlier is a statistical observation that is markedly different in value from the others in the sample. Values that are outliers give disproportionate weight to larger over smaller values.
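A small Python illustration of that disproportionate weight, using made-up sample values: one outlier pulls the mean far more than it moves the median.

from statistics import mean, median

sample = [10, 11, 9, 10, 12, 10, 11]
with_outlier = sample + [200]   # one value markedly different from the rest

print("mean without outlier:  ", round(mean(sample), 2))        # ~10.4
print("mean with outlier:     ", round(mean(with_outlier), 2))  # ~34.1
print("median without outlier:", median(sample))                # 10
print("median with outlier:   ", median(with_outlier))          # 10.5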
https://en.wikipedia.org/wiki/Arbitrage
The simultaneous buying and selling of securities, currency, or commodities in different markets or in derivative forms in order to take advantage of differing prices for the same asset: "profitable arbitrage opportunities"
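A hedged Python sketch of the same idea for a two-outcome wager priced differently by two bookmakers (the decimal odds are made-up assumptions): whenever 1/odds_a + 1/odds_b < 1, splitting the stake proportionally locks in the same payout whichever outcome wins.

def arbitrage_stakes(odds_a, odds_b, total_stake):
    """Split total_stake so the payout is identical for either outcome."""
    margin = 1 / odds_a + 1 / odds_b
    if margin >= 1:
        return None  # no arbitrage available at these prices
    stake_a = total_stake * (1 / odds_a) / margin
    stake_b = total_stake * (1 / odds_b) / margin
    payout = stake_a * odds_a            # equal to stake_b * odds_b
    return stake_a, stake_b, payout - total_stake

# Illustrative prices on the two sides of the same event at different books.
print(arbitrage_stakes(odds_a=2.10, odds_b=2.05, total_stake=100))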
Batch Betting is simply the mechanism for placing a lot of wagers into the wagering platform all at once. If you have to manually key all the wagers into a wagering queue first, then batch betting is not offering you any benefit with regard to the length of time needed to place your bets.
https://en.wikipedia.org/wiki/Algorithm
The term algorithm is commonly used nowadays for the set of rules a machine (and especially a computer) follows to achieve a particular goal.
https://en.wikipedia.org/wiki/Kelly_criterion
In probability theory, the Kelly criterion (or Kelly strategy or Kelly bet) is a formula for sizing a bet. The Kelly bet size is found by maximizing the expected value of the logarithm of wealth, which is equivalent to maximizing the expected geometric growth rate. It assumes that the expected returns are known and is optimal for a bettor who values their wealth logarithmically. J. L. Kelly Jr, a researcher at Bell Labs, described the criterion in 1956.[1] Under the stated assumptions, the Kelly criterion leads to higher wealth than any other strategy in the long run (i.e., the theoretical maximum return as the number of bets goes to infinity).
The practical use of the formula has been demonstrated for gambling,[2][3] and the same idea was used to explain diversification in investment management.[4] In the 2000s, Kelly-style analysis became a part of mainstream investment theory[5] and the claim has been made that well-known successful investors including Warren Buffett[6] and Bill Gross[7] use Kelly methods.[8] Also see Intertemporal portfolio choice.
In a study, each participant was given $25 and asked to place even-money bets on a coin that would land heads 60% of the time. Participants had 30 minutes to play, so they could place about 300 bets, and the prizes were capped at $250. But the behavior of the test subjects was far from optimal:
Remarkably, 28% of the participants went bust, and the average payout was just $91. Only 21% of the participants reached the maximum. 18 of the 61 participants bet everything on one toss, while two-thirds gambled on tails at some stage in the experiment.[9][10]
Using the Kelly criterion and based on the odds in the experiment (ignoring the cap of $250 and the finite duration of the test), the right approach would be to bet 20% of one's bankroll on each toss of the coin, which works out to a 2.034% average gain each round. This is a geometric mean, not the arithmetic rate of 4%: r = (1 + 0.2·1.0)^0.6 · (1 − 0.2·1.0)^0.4 ≈ 1.02034. The theoretical expected wealth after 300 rounds works out to $10,505 (= 25·(1.02034)^300) if it were not capped.
In this particular game, because of the cap, a strategy of betting only 12% of the pot on each toss would have even better results (a 95% probability of reaching the cap and an average payout of $242.03).
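The figures above can be reproduced with a short Python sketch, assuming an even-money bet (net odds b = 1) and the f* = p − q/b form of the criterion.

# Kelly fraction and geometric growth for the 60% coin-flip experiment.
def kelly_fraction(p, b=1.0):
    """Fraction of bankroll to bet, given win probability p and net odds b."""
    q = 1.0 - p
    return p - q / b

f = kelly_fraction(0.6)                     # 0.6 - 0.4/1 = 0.20
growth = (1 + f) ** 0.6 * (1 - f) ** 0.4    # geometric growth per toss ≈ 1.02034
expected_wealth = 25 * growth ** 300        # ≈ $10,505 without the $250 cap

print(f"Kelly fraction: {f:.2f}")
print(f"Geometric growth per toss: {growth:.5f}")
print(f"Uncapped expected wealth after 300 tosses: ${expected_wealth:,.0f}")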
https://en.wikipedia.org/wiki/Titration
Titration (also known as titrimetry[1] and volumetric analysis) is a common laboratory method of quantitative chemical analysis to determine the concentration of an identified analyte (a substance to be analyzed). A reagent, termed the titrant or titrator,[2] is prepared as a standard solution of known concentration and volume. The titrant reacts with a solution of analyte (which may also be termed the titrand[3]) to determine the analyte's concentration. The volume of titrant that reacted with the analyte is termed the titration volume.
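A minimal Python back-calculation of the analyte concentration from the titration volume, assuming a simple 1:1 stoichiometric ratio between titrant and analyte; the concentrations and volumes are illustrative assumptions.

def analyte_concentration(c_titrant, v_titrant, v_analyte, ratio=1.0):
    """c_analyte = (moles of titrant * analyte/titrant mole ratio) / analyte volume."""
    moles_titrant = c_titrant * v_titrant          # mol = (mol/L) * L
    return moles_titrant * ratio / v_analyte       # mol/L

# 0.100 M titrant; 0.0250 L used to reach the endpoint; 0.0200 L of analyte.
print(analyte_concentration(0.100, 0.0250, 0.0200))  # 0.125 mol/L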
https://en.wikipedia.org/wiki/Regression_analysis
In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable (often called the 'outcome' or 'response' variable, or a 'label' in machine learning parlance) and one or more independent variables (often called 'predictors', 'covariates', 'explanatory variables' or 'features'). The most common form of regression analysis is linear regression, in which one finds the line (or a more complex linear combination) that most closely fits the data according to a specific mathematical criterion. For example, the method of ordinary least squares computes the unique line (or hyperplane) that minimizes the sum of squared differences between the true data and that line (or hyperplane). For specific mathematical reasons (see linear regression), this allows the researcher to estimate the conditional expectation (or population average value) of the dependent variable when the independent variables take on a given set of values. Less common forms of regression use slightly different procedures to estimate alternative location parameters (e.g., quantile regression or Necessary Condition Analysis[1]) or estimate the conditional expectation across a broader collection of non-linear models (e.g., nonparametric regression).
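As a concrete instance of ordinary least squares with one predictor, the Python sketch below computes the closed-form slope and intercept that minimize the sum of squared differences described above; the data points are made up.

def ols_fit(xs, ys):
    """Return (intercept, slope) minimizing sum((y - (a + b*x))^2)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return intercept, slope

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = ols_fit(xs, ys)
print(f"y ≈ {a:.2f} + {b:.2f}·x")   # roughly y ≈ 0.05 + 1.99·x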
Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Second, in some situations regression analysis can be used to infer causal relationships between the independent and dependent variables. Importantly, regressions by themselves only reveal relationships between a dependent variable and a collection of independent variables in a fixed dataset. To use regressions for prediction or to infer causal relationships, respectively, a researcher must carefully justify why existing relationships have predictive power for a new context or why a relationship between two variables has a causal interpretation. The latter is especially important when researchers hope to estimate causal relationships using observational data.[2][3]
https://en.wikipedia.org/wiki/Data_mining
Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.[1] Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use.[1][2][3][4] Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD.[5] Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.[1]
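A toy Python example in the spirit of pattern discovery: counting which pairs of items most often appear together in a handful of made-up transactions. Real data mining operates at far larger scale with far more sophisticated methods.

# Count co-occurring item pairs across baskets and report the most frequent.
from collections import Counter
from itertools import combinations

transactions = [
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "eggs"},
    {"bread", "milk", "butter"},
]

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(3))  # e.g. ('bread', 'milk') appears in 3 baskets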