
Subreddit Demographic Survey 2020: The Results

2020 Childfree Subreddit Survey

1. Introduction

Once a year, this subreddit hosts a survey in order to get to know the community a little and to answer questions that are frequently asked here. Earlier this summer, several thousand of you participated in the 2020 Subreddit Demographic Survey. Only the results of participants who meet our wiki definition of being childfree were recorded and analysed.
For these respondents, multiple areas of life were reviewed. They are separated as follows:

2. Methodology

Our sample is redditors who saw that we had a survey currently active and were willing to complete the survey. A stickied post was used to advertise the survey to members.

3. Results

The raw data may be found via this link.
7305 people participated in the survey from July 2020 to October 2020. People who did not meet our wiki definition of being childfree were excluded from the analysis. The results of the remaining 5134 respondents, or 70.29% of those surveyed, were collated and analysed below. Percentages are derived from the number of respondents per question.
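As a rough illustration of how these headline figures could be derived, the filtering and per-question tallying might look like the sketch below. The field name "is_childfree" is hypothetical, not from the actual dataset.

```python
# Hypothetical sketch of the tallying described above; the field name
# "is_childfree" is illustrative, not from the real survey data.

def filter_childfree(responses):
    """Keep only respondents meeting the childfree definition,
    and report what share of the total sample they represent."""
    kept = [r for r in responses if r.get("is_childfree")]
    share = round(100 * len(kept) / len(responses), 2)
    return kept, share

def question_percentages(responses, question):
    """Percentages are per question: the denominator is the number of
    people who answered that question, not the whole sample."""
    answers = [r[question] for r in responses if question in r]
    counts = {}
    for a in answers:
        counts[a] = counts.get(a, 0) + 1
    return {a: round(100 * n / len(answers), 2) for a, n in counts.items()}
```

Because each question uses its own denominator, the totals can legitimately differ from table to table.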

General Demographics

Age group

Age group Participants Percentage
18 or younger 309 6.02%
19 to 24 1388 27.05%
25 to 29 1435 27.96%
30 to 34 1089 21.22%
35 to 39 502 9.78%
40 to 44 223 4.35%
45 to 49 81 1.58%
50 to 54 58 1.13%
55 to 59 25 0.49%
60 to 64 13 0.25%
65 to 69 7 0.14%
70 to 74 2 0.04%
82.25% of the sub is under the age of 35.

Gender and Gender Identity

Gender Participants # Percentage
Agender 62 1.21%
Female 3747 73.04%
Male 1148 22.38%
Non-binary 173 3.37%

Sexual Orientation

Sexual Orientation Participants # Percentage
Asexual 379 7.39%
Bisexual 1177 22.93%
Heterosexual 2833 55.20%
Homosexual 264 5.14%
It's fluid 152 2.96%
Other 85 1.66%
Pansexual 242 4.72%

Birth Location

Because the list contains over 120 countries, we'll show the top 20 countries:
Country of birth Participants # Percentage
United States 2775 57.47%
United Kingdom 367 7.60%
Canada 346 7.17%
Australia 173 3.58%
Germany 105 2.17%
Netherlands 67 1.39%
India 63 1.30%
Poland 57 1.18%
France 47 0.97%
New Zealand 42 0.87%
Mexico 40 0.83%
Brazil 40 0.83%
Sweden 38 0.79%
Finland 31 0.64%
South Africa 30 0.62%
Denmark 28 0.58%
China 27 0.56%
Ireland 27 0.56%
Philippines 24 0.50%
Russia 23 0.48%
90.08% of the participants were born in these countries.
These participants would describe their current city, town or neighborhood as:
Region Participants # Percentage
Rural 705 13.76%
Suburban 2661 51.95%
Urban 1756 34.28%


Ethnicity

Ethnicity Participants # Percentage
African Descent/Black 157 3.07%
American Indian or Alaskan Native 18 0.35%
Arabic/Middle Eastern/Near Eastern 34 0.66%
Bi/Multiracial 300 5.86%
Caucasian/White 3946 77.09%
East Asian 105 2.05%
Hispanic/Latinx 271 5.29%
Indian/South Asian 116 2.27%
Indigenous Australian/Torres Strait Islander/Maori 8 0.16%
Jewish (the ethnicity, not religion) 50 0.98%
Other 32 0.63%
Pacific Islander/Melanesian 4 0.08%
South-East Asian 78 1.52%


Highest Current Level of Education

Highest Current Level of Education Participants # Percentage
Associate's degree 233 4.55%
Bachelor's degree 1846 36.05%
Did not complete elementary school 2 0.04%
Did not complete high school 135 2.64%
Doctorate degree 121 2.36%
Graduated high school / GED 559 10.92%
Master's degree 714 13.95%
Post Doctorate 19 0.37%
Professional degree 107 2.09%
Some college / university 1170 22.85%
Trade / Technical / Vocational training 214 4.18%
Degree (Major) Participants # Percentage
Architecture 23 0.45%
Arts and Humanities 794 15.54%
Business and Economics 422 8.26%
Computer Science 498 9.75%
Education 166 3.25%
Engineering Technology 329 6.44%
I don't have a degree or a major 1028 20.12%
Law 124 2.43%
Life Sciences 295 5.77%
Medicine and Allied Health 352 6.89%
Other 450 8.81%
Physical Sciences 199 3.89%
Social Sciences 430 8.41%

Career and Finances

The top 10 industries our participants are working in are:
Industry Participants # Percentage
Information Technology 317 6.68%
Health Care 311 6.56%
Education - Teaching 209 4.41%
Engineering 203 4.28%
Retail 182 3.84%
Government 172 3.63%
Admin & Clerical 154 3.25%
Restaurant - Food Service 148 3.12%
Customer Service 129 2.72%
Design 127 2.68%
Note that "other", "I'm a student", "currently unemployed" and "I'm out of the work force for health or other reasons" have been disregarded for this part of the evaluation.
Out of the 3729 participants active in the workforce, the majority (1824 or 48.91%) work between 40-50 hours per week with 997 or 26.74% working 30-40 hours weekly. 6.62% work 50 hours or more per week, and 17.73% less than 30 hours.
513 or 10.13% are engaged in managerial responsibilities (ranging from Jr. to Sr. Management).
On a scale of 1 (lowest) to 10 (highest), the overwhelming majority (3340 or 70%) indicated that career plays a very important role in their lives, attributing a score of 7 or higher.
1065 participants decided not to disclose their income brackets. The remaining 4,849 are distributed as follows:
Income Participants # Percentage
$0 to $14,999 851 21.37%
$15,000 to $29,999 644 16.17%
$30,000 to $59,999 1331 33.42%
$60,000 to $89,999 673 16.90%
$90,000 to $119,999 253 6.35%
$120,000 to $149,999 114 2.86%
$150,000 to $179,999 51 1.28%
$180,000 to $209,999 25 0.63%
$210,000 to $239,999 9 0.23%
$240,000 to $269,999 10 0.25%
$270,000 to $299,999 7 0.18%
$300,000 or more 15 0.38%
87.85% earn under $90,000 USD a year.
65.82% of our childfree participants do not have a concrete retirement plan (savings, living will).
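A quick way to check cumulative claims like "87.85% earn under $90,000" is to sum the bracket counts from the income table above. The counts are copied verbatim from the table; only the bracket lower bounds are added here for illustration.

```python
# Income bracket counts copied from the table above; each tuple is
# (bracket lower bound in USD, number of participants).
BRACKETS = [
    (0, 851), (15_000, 644), (30_000, 1331), (60_000, 673),
    (90_000, 253), (120_000, 114), (150_000, 51), (180_000, 25),
    (210_000, 9), (240_000, 10), (270_000, 7), (300_000, 15),
]

def share_under(threshold):
    """Percentage of disclosing participants earning below the threshold."""
    total = sum(n for _, n in BRACKETS)
    under = sum(n for low, n in BRACKETS if low < threshold)
    return round(100 * under / total, 2)
```

Summing this way, `share_under(90_000)` reproduces the 87.85% figure quoted above, and `share_under(60_000)` gives the 70.95% cited in the discussion section.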

Religion and Spirituality

Faith Originally Raised In

There were more than 50 options of faith, so we aimed to show the top 10 most chosen beliefs.
Faith Participants # Percentage
Catholicism 1573 30.76%
None (≠ Atheism. Literally, no notion of spirituality or religion in the upbringing) 958 18.73%
Protestantism 920 17.99%
Other 431 8.43%
Atheism 318 6.22%
Agnosticism 254 4.97%
Anglicanism 186 3.64%
Judaism 77 1.51%
Hinduism 75 1.47%
Islam 71 1.39%
This top 10 amounts to 95.01% of the total participants.

Current Faith

There were more than 50 options of faith, so we aimed to show the top 10 most chosen beliefs:
Faith Participants # Percentage
Atheism 1849 36.23%
None (≠ Atheism. Literally, no notion of spirituality or religion currently) 1344 26.33%
Agnosticism 789 15.46%
Other 204 4.00%
Protestantism 159 3.12%
Paganism 131 2.57%
Spiritualism 101 1.98%
Catholicism 96 1.88%
Satanism 92 1.80%
Wicca 66 1.29%
This top 10 amounts to 94.65% of the participants.

Level of Current Religious Practice

Level Participants # Percentage
Wholly secular/non-religious 3733 73.73%
Identify with religion, but don't practice strictly 557 11.00%
Lapsed/not serious/in name only 393 7.76%
Observant at home only 199 3.93%
Observant at home. Church/Temple/Mosque/etc. attendance 125 2.47%
Strictly observant, Church/Temple/Mosque/etc. attendance, religious practice/prayer/worship impacting daily life 56 1.11%

Effect of Faith over Childfreedom

Figure 1

Effect of Childfreedom over Faith

Figure 2

Romantic and Sexual Life

Current Dating Situation

Status Participants # Percentage
Divorced 46 0.90%
Engaged 207 4.04%
Long term relationship, living together 1031 20.10%
Long term relationship, not living together 512 9.98%
Married 1230 23.98%
Other 71 1.38%
Separated 18 0.35%
Short term relationship 107 2.09%
Single and dating around, but not looking for anything serious 213 4.15%
Single and dating around, looking for something serious 365 7.12%
Single and not looking 1324 25.81%
Widowed 5 0.10%

Childfree Partner

Is your partner childfree? If your partner wants children, has children of their own, and/or is unsure about their position, please consider them "not childfree" for this question.
Partner Participants # Percentage
I don't have a partner 1922 37.56%
I have more than one partner and none are childfree 3 0.06%
I have more than one partner and some are childfree 35 0.68%
I have more than one partner and they are all childfree 50 0.98%
No 474 9.26%
Yes 2633 51.46%

Dating a Single Parent

Would the childfree participants be willing to date a single parent?
Answer Participants # Percentage
No, I'm not interested in single parents and their ties to parenting life 4610 90.13%
Yes, but only if it's a short term arrangement of some sort 162 3.17%
Yes, whether for long term or short term, but with some conditions (must not have child custody, no kid talk, etc.), as long as I like them and as long as we're compatible 199 3.89%
Yes, whether for long term or short term, with no conditions, as long as I like them and as long as we are compatible 144 2.82%

Childhood and Family Life

On a scale from 1 (very unhappy) to 10 (very happy), how would you rate your childhood?
Figure 3
Of the 5125 childfree people who responded to the question, 67.06% have a pet or are heavily involved in the care of someone else's pet.


Sterilisation Status

Sterilisation Status Participants # Percentage
No, I am not sterilised and, for medical, practical or other reasons, I do not need to be 869 16.96%
No. However, I've been approved for the procedure and I'm waiting for the date to arrive 86 1.68%
No. I am not sterilised and don't want to be 634 12.37%
No. I want to be sterilised but I have started looking for a doctor/requested the procedure 594 11.59%
No. I want to be sterilised but I haven't started looking for a doctor/requested the procedure yet 2317 45.21%
Yes. I am sterilised 625 12.20%

Age when starting doctor shopping or addressing issue with doctor. Percentages exclude those who do not want to be sterilised and who have not discussed sterilisation with their doctor.

Age group Participants # Percentage
18 or younger 207 12.62%
19 to 24 588 35.85%
25 to 29 510 31.10%
30 to 34 242 14.76%
35 to 39 77 4.70%
40 to 44 9 0.55%
45 to 49 5 0.30%
50 to 54 1 0.06%
55 or older 1 0.06%

Age at the time of sterilisation. Percentages exclude those who have not and do not want to be sterilised.

Age group Participants # Percentage
18 or younger 5 0.79%
19 to 24 123 19.34%
25 to 29 241 37.89%
30 to 34 168 26.42%
35 to 39 74 11.64%
40 to 44 19 2.99%
45 to 49 1 0.16%
50 to 54 2 0.31%
55 or older 3 0.47%

Elapsed time between requesting procedure and undergoing procedure. Percentages exclude those who have not and do not want to be sterilised.

Time Participants # Percentage
Less than 3 months 330 50.46%
Between 3 and 6 months 111 16.97%
Between 6 and 9 months 33 5.05%
Between 9 and 12 months 20 3.06%
Between 12 and 18 months 22 3.36%
Between 18 and 24 months 15 2.29%
Between 24 and 30 months 6 0.92%
Between 30 and 36 months 2 0.31%
Between 3 and 5 years 40 6.12%
Between 5 and 7 years 25 3.82%
More than 7 years 50 7.65%

How many doctors refused at first, before finding one who would accept?

Doctor # Participants # Percentage
None. The first doctor I asked said yes 604 71.73%
One. The second doctor I asked said yes 93 11.05%
Two. The third doctor I asked said yes 54 6.41%
Three. The fourth doctor I asked said yes 29 3.44%
Four. The fifth doctor I asked said yes 12 1.43%
Five. The sixth doctor I asked said yes 8 0.95%
Six. The seventh doctor I asked said yes 10 1.19%
Seven. The eighth doctor I asked said yes 4 0.48%
Eight. The ninth doctor I asked said yes 2 0.24%
I asked more than 10 doctors before finding one who said yes 26 3.09%


Primary Reason to Not Have Children

Reason Participants # Percentage
Aversion towards children ("I don't like children") 1455 28.36%
Childhood trauma 135 2.63%
Current state of the world 110 2.14%
Environmental (including overpopulation) 158 3.08%
Eugenics ("I have 'bad genes'") 57 1.11%
Financial 175 3.41%
I already raised somebody else who isn't my child 83 1.62%
Lack of interest towards parenthood ("I don't want to raise children") 2293 44.69%
Maybe interested in parenthood, but not suited for parenthood 48 0.94%
Medical ("I have a condition that makes conceiving/bearing/birthing children difficult, dangerous or lethal") 65 1.27%
Other 68 1.33%
Philosophical / Moral (e.g. antinatalism) 193 3.76%
Tokophobia (aversion to/fear of pregnancy and/or childbirth) 291 5.67%
95.50% of childfree people are pro-choice; however, only 55.93% of childfree people support financial abortion.

Dislike Towards Children

Figure 4

Working With Children

Work Participants # Percentage
I'm a student and my future job/career will make me interact heavily with children on a daily basis 67 1.30%
I'm retired, but I used to have a job that made me interact heavily with children on a daily basis 6 0.12%
I'm unemployed, but I used to have a job that made me interact heavily with children on a daily basis 112 2.19%
No, I do not have a job that makes me heavily interact with children on a daily basis 4493 87.81%
Other 148 2.89%
Yes, I do have a job that makes me interact heavily with children on a daily basis 291 5.69%

4. Discussion

Child Status

This section existed solely to sift the childfree from the fencesitters and the non-childfree, in order to get answers only from the childfree. Childfree, as it is defined in the subreddit, is "I do not have children nor want to have them in any capacity (biological, adopted, fostered, step- or other) at any point in the future." 70.29% of participants actually identify as childfree, slightly up from the 2019 survey, where 68.5% of participants identified as childfree. This is surprising in light of the overall reputation of the subreddit across Reddit, where it is often described as an "echo chamber".

General Demographics

The demographics remain largely consistent with the 2019 survey. However, the 2019 survey collected demographic responses from all participants in the survey, removing those who did not identify as childfree when querying subreddit specific questions, while the 2020 survey only collected responses from people who identified as childfree. This must be considered when comparing results.
82.25% of the participants are under 35, compared with 85% of the subreddit in the 2019 survey. A slight downward trend is noted over the last two years, suggesting the userbase may be getting older on average. 73.04% of the subreddit identify as female, compared with 71.54% in the 2019 survey, suggesting a slight increase in the number of members who identify as female. This is in contrast to the overall membership of Reddit, estimated at 74% male according to Reddit's Wikipedia page []. The ratio of members who identify as heterosexual remained consistent, from 54.89% in the 2019 survey to 55.20% in the 2020 survey.
Ethnicity wise, 77% of members identified as primarily Caucasian, consistent with the 2019 results. While the ethnicities noted to be missing in the 2019 survey have been included in the 2020 survey, some users noted the difficulty of responding when fitting multiple ethnicities, and this will be addressed in the 2021 survey.

Education level

As in the 2019 survey, this section highlights the stereotype of childfree people as being well educated. 2.64% of participants did not complete high school, a slight decrease from the 2019 survey, where 4% of participants did not graduate high school. However, 6.02% of participants are under 18, compared with 8.22% in the 2019 survey. 55% of participants have a bachelor's degree or higher, while an additional 23% have completed "some college or university".
In the 2020 survey, the highest percentage of responses to the question "What is your degree/major?" fell under "I don't have a degree or a major" (20.12%). Arts and Humanities and Computer Science have overtaken Health Sciences and Engineering as the two most popular majors. However, the list of majors was pared down to general fields of study rather than highly specific degree majors, to account for the significant diversity in majors studied by the childfree community, which may account for the different results.

Career and Finances

The highest percentage of participants (21.61%) listed themselves as trained professionals.
One of the stereotypes of the childfree is wealth. However, this is not demonstrated in the survey results. 70.95% of participants earn under $60,000 USD per annum, while 87.85% earn under $90,000 per annum, and 21.37% earn under $15,000 per annum. 1065 participants, or 21.10%, chose not to disclose this information. If a significant proportion of these were high income earners the results may be skewed, but this is impossible to explore.
A majority of our participants work between 30 and 50 hours per week (75.65%) which is slightly increased from the 2019 survey, where 71.2% of participants worked between 30 and 50 hours per week.


The location responses are largely similar to the 2019 survey, with a majority of participants living in suburban and urban areas: 86.24% of participants in the 2020 survey live in urban and suburban regions, compared with 86.7% in the 2019 survey. The reason for this is likely multifactorial, encompassing the younger, educated skew of participants, easier access to universities and employment, and the fact that a majority of the population worldwide localises to urban centres. There may be an element of increased progressive social viewpoints and identities in urban regions; however, this would need to be explored further from a sociological perspective to draw any definitive conclusions.
A majority of our participants (57.47%) were born in the USA. The United Kingdom (7.6%), Canada (7.17%), Australia (3.58%) and Germany (2.17%) encompass the next 4 most popular responses. This is largely consistent with the responses in the 2019 survey.

Religion and Spirituality

For the 2020 survey, Christianity (the most popular result in 2019) was split into its major denominations: Catholic, Protestant, Anglican, among others. This appears to be a linguistic/location difference that caused some confusion among participants. Catholicism, at 30.76%, remained the most popular choice for the religion participants were raised in. For participants' current faith, however, Atheism at 36.23% was the most popular choice. A majority of 78.02% listed their current position as atheist, no religious or spiritual beliefs, or agnostic.
A majority of participants (61%) rated religion as "not at all influential" to the childfree choice. This is consistent with the 2019 survey, where 62.8% rated religion as "not at all influential". Despite the high percentage of participants who identify as atheist or agnostic, this does not appear to be related to or have an impact on the childfree choice.

Romantic and Sexual Life

60.19% of our participants are in a relationship at the time of the survey. This is consistent with the 2019 survey, where 60.7% of our participants were in a relationship. A notable proportion of our participants are listed as single and not looking (25.81%) which is consistent with the 2019 survey. Considering the frequent posts seeking dating advice as a childfree person, it is surprising that such a high proportion of the participants are not actively seeking out a relationship. Unsurprisingly 90.13% of our participants would not consider dating someone with children. 84% of participants with partners of some kind have at least one childfree partner. This is consistent with the often irreconcilable element of one party desiring children and the other wishing to abstain from having children.

Childhood and Family Life

Overall, the participants skew towards a happier childhood.


While just under half of our participants (45.21%) wish to be sterilised, only 12.2% have been successful in achieving sterilisation. This is likely due to overarching resistance from the medical profession; other factors, such as the logistics of surgery and the cost, may also contribute. There is a slight increase from the percentage of participants sterilised in the 2019 survey (11.7%). 29.33% of participants do not wish or need to be sterilised, suggesting a partial element of satisfaction with temporary birth control methods, or non-necessity of contraception due to their current lifestyle practices. Participants who indicated that they do not wish to be sterilised or haven't achieved sterilisation were excluded from the percentages where necessary in this section.
Of the participants who did achieve sterilisation, a majority began the search between 19 and 29, with the highest proportion in the 19-24 age group (35.85%). This is a marked increase from the 2019 survey, where 27.3% of people who started the search were between 19 and 24. This may be due to increased education about permanent contraception, or possibly due to increased instability around world events.
The majority of participants who sought and achieved sterilisation were, however, in the 25-29 age group (37.9%). This is consistent with the 2019 survey results.
The time taken between seeking out sterilisation and achieving it continues to increase, with only 50.46% of participants achieving sterilisation in under 3 months. This is a decline from the 2019 survey, where 58.5% of participants achieved sterilisation within 3 months. A potential cause of this decrease is the Covid-19 shutdowns in the medical industry, leading to an increase in procedure wait times. The proportion of participants who have had one or more doctors refuse to perform the procedure has stayed consistent between the two surveys.


The main reasons for choosing the childfree lifestyle are a lack of interest in parenthood and an aversion towards children, which is consistent with the 2019 survey. Of the people surveyed, 67.06% are pet owners or involved in a pet's care, suggesting that this lack of interest in parenthood does not necessarily mean a lack of interest in all forms of caretaking. The community skews towards a dislike of children overall, which correlates well with the 87.81% of users choosing "no, I do not have, did not use to have and will not have a job that makes me heavily interact with children on a daily basis" in answer to "do you have a job that heavily makes you interact with children on a daily basis?". This is an increase from the 2019 survey.
A vast majority of the subreddit identifies as pro-choice (95.5%), a slight increase from the 2019 results. This is likely due to a high level of concern about bodily autonomy and forced birth/parenthood. However, only 55.93% support financial abortion, i.e. the option for the non-pregnant person in a relationship to sever all financial and parental ties with a child. This is a marked decrease from the 2019 results, where 70% of participants supported financial abortion.
Most of our users realised they did not want children at a young age: 58.72% of participants knew they did not want children by the age of 18, with 95.37% realising this by age 30. This correlates well with the age distribution of participants. Despite this early realisation of their childfree stance, 80.59% of participants have been "bingoed" at some stage in their lives.

The Subreddit

Participants who identify as childfree were asked about their interaction with and preferences with regards to the subreddit at large. Participants who do not meet our definition of being childfree were excluded from these questions.
By and large our participants were lurkers (72.32%). Our participants were divided on their favourite flairs with 38.92% selecting "I have no favourite". The next most favourite flair was "Rant", at 16.35%. Our participants were similarly divided on their least favourite flair, with 63.40% selecting "I have no least favourite". In light of these results the flairs on offer will remain as they have been through 2019.
"Lecturing" posts are defined as posts which seek to re-educate the childfree on the practices, attitudes and values of the community, particularly with regards to attitudes towards parenting and children, whether at home or in the community. A commonly used descriptor is "tone policing". A small minority of the survey participants (3.36%) selected "yes" to allowing all lectures, while 33.54% responded "yes" to allowing polite, respectful lectures only. In addition, 45.10% of participants indicated that they were not sure if lectures should be allowed. Due to the ambiguity of responses, lectures will continue to be disallowed and removed.
Many of our participants (36.87%) support the use of terms such as breeder, mombie/moo, and daddict/duh on the subreddit, with a further 32.63% supporting the use of these terms in the context of bad parents only. This is a slight drop from the 2019 survey. In response to this, use of the above and similar terms to describe parents remains permitted on this subreddit. However, we encourage users to keep the use of these terms to bad parents only.
44.33% of users support the use of terms to describe children such as crotchfruit on the subreddit, a drop from 55.3% last year. A further 25.80% of users support the use of this and similar terms in the context of bad children only, an increase from 17.42% last year. In response to this, use of the above and similar terms to describe children remains permitted on this subreddit.
69.17% of participants answered yes to allowing parents to post, provided they stay respectful. In response to this, parent posts will continue to be allowed on the subreddit. As for regret posts, which were to be revisited in this year's survey, only 9.5% of participants regarded them as their least favourite post. As such they will continue to stay allowed.
64% of participants support under 18's who are childfree participating in the subreddit with a further 19.59% allowing under 18's to post dependent on context. Therefore we will continue to allow under 18's that stay within the overall Reddit age requirement.
There was a divide among participants as to whether "newbie" questions should be removed, with an even spread between participants who selected remove and those who selected to leave them as is. We have therefore decided to leave them as is. 73.80% of users selected "yes, in their own post, with their own 'Leisure' flair" to the question "Should posts about pets, travel, jetskis, etc. be allowed on the sub?". Therefore we will continue to allow these posts provided they are appropriately flaired.

5. Conclusion

Thank you to our participants who contributed to the survey. This has been an unusual and difficult year for many people. Stay safe, and stay childfree.

submitted by Mellenoire to childfree

essay tipsssss from a perfect 24 scorer. Also if you have questions I will answer them. GOOD LUCK OCTOBER!!!!!

Hey lovelies, so I made a perfect score on the SAT essay, but I am an embarrassment at everything else, so this is just to say that an essay score doesn’t define you and is overall not as important as it could be. Also this is my own experience, I AM IN NO WAY A WRITING TEACHER, SO PLEASE DON’T COME AT ME IN THE COMMENTS.
My best advice: Write conspiracy theories for every essay
If I had to describe the tone of my writing it would be an academic high on crack.
so buckle up y’all. Also my internationals, I feel ya bc I am not native either- woohoo join the train
Now let’s get down to the actual essay.
My best advice is memorize an essay format because if you are like me and you cry in every section ( I am not even joking) the essay can be a trainwreck of panic, and no one needs that toxic energy in their last section.
So here’s my format:
This is my introduction:
While the narrative of the 21st century human experience has resulted in [problem], the underlying causes are most often unexamined. In the article, “”, the author carefully deals with the underlying reasons for [problem] and overtly advocates for [], and hopes in the end to [ purpose]. While doing so he employs several literary elements, including….
Note about purpose: this is given in the prompt, so all you have to do is reformat it.
Now for the devices and body paragraphs
Pick out three devices:
Now here is the format for these devices:
  1. Word choice
    1. evokes emotions or images
    2. characterizes the subject in a particular way
    3. sets the
    4. cultivates emotions
    5. associate positive or negative connotations with something
  2. Statistics/ Data
    1. indicate a problem
      1. point us towards a bigger issue
    2. make something harder to argue because numbers are perceived as facts, not opinions
    3. to effectively ground the author's argument
    4. to surprise readers
    5. to put a quantity in relation to another and effectively contrast
  3. Appeal to Authority
    1. raise credibility by showing that the author is not the only one who believes in this idea
    2. increase trust by showing that the argument is indeed well researched
    3. gain the same acceptance or authority that the authority figure derives from the reader
    4. establish a precedent that pushes people to act in the way that author wants them to behave
  4. Acknowledges the other side/making concessions
    1. address counterarguments, doubts, or fears that the reader may have
    2. establish common ground
    3. pave the way for new arguments to be made
  5. Analogies/Comparisons
    1. allow the reader to understand more complex concepts by connecting them to ones that are much simpler
    2. associate new ideas with prior ones
    3. lead the reader into eventual agreement: if he agreed with a prior idea, it is likely he will agree with the new one
  6. Juxtaposition
    1. significant distinction is highlighted
    2. one option seems better than another
    3. create a binary mentality
  7. challenging assumptions
    1. enables this argument to proceed from a clean slate
    2. dismisses any preconceived ideas or biases that may run counter to his or her argument
  8. Anecdotes
    1. form an emotional bond with the reader through establishing a common ground with the reader
  9. Rhetorical questions
    1. gets the reader to imagine a certain scenario
    2. prods the reader into answering a certain way
    3. lays out common ground or assumptions that the author can build upon
    4. describe certain outcomes that may benefit his argument
  10. Appeal to identity
    1. takes advantage of the common values and beliefs of a group
    2. plays on human behaviors that seek belonging
    3. draws readers towards an idea that creates a sense of belonging
  11. Strong directives
    1. using "we" portrays the reader as being on the same side as the author
    2. makes the author and reader stand in unison
    3. appeals to a sense of belonging
Note about this format:
Also strong topic sentences:
Author engages the reader’s interest very early in the article. His use of [element] builds a steady foundation from which he launches his discourse
Without the author’s use of persuasive elements , the article would lose….
How to build strong commentary + get yourself the last points
-This is how I build my sentences- they need to be strong and make sense obviously
The implication is that…
The suggestion is that…
… serves to…
The inclusion of… helps…
… elicits …
… grounds her argument in reality so that even skeptical readers won't be able to dismiss it
… marks the extent of the problem.
By appealing to our sense of…, the author…
The author exploits the fact that… to…
Given that…, …
… proves to the reader that…
By showing that there is…, the author…
… contributes greatly to the argument's persuasive power by…
Analysis point:
So basically the analysis points are legit Satan’s lap dog because they are hard to get
Here are some tips to guarantee you some amazing success
Example of the thing mentioned above:
This is especially resonant as the author writes this in a climate filled with threats of global warming; the author targets the general American public, as the administration in power is responsible for opting out of the Paris Climate Treaty, and the devastating consequences of such an act, along with the rise of natural disasters, can only make his argument more persuasive.
2nd tip: point out flaws in the author’s argument- this is a hidden trick that always works
I'm not saying to trash the author and the College Board and set them on fire, but you should mention some things regarding a weaker argument and how it could have been stronger - so that means LIGHT ACADEMIC TRASHING
Here is an example :
Ok, on one of the essays that I took, the guy used a statistic to prove that trees did help reduce temperature. However, he used a study from his own organization that projected increasing temperatures. Here is what I said about it:
The author, through his use of statistics, aims to establish a logical choice in the reader's mind. By using numbers from the World Health Organization, he tries to usurp the authority that this organization carries and prove that his choice is supported not only by facts but by experts as well. The use of numbers is particularly significant since numbers are often regarded as facts, and thus to argue back, a reader would have to either indict the evidence or bring up new evidence. While this use of statistics is effective in this context, the author's use of statistics fails due to a misplaced correlation. In this case, the man's passion reveals his weakness: he uses a study from his own organization to prove his point, which leads us to a possible reevaluation of his purpose not as something to promote the general well-being of urban areas, but as a case of self-interest and promotion for his organization. Moreover, the basis of his argument rests upon the claim that temperatures decrease as the number of trees increases. However, throughout the argument, the author fails to establish this foundational correlation, and by doing so weakens his argument. He does, in fact, bring up his study, but that study only offers an estimation of the benefit of planting trees and rests on a misplaced correlation that he assumes to be causation.
3rd tip: use transitions and nice words
4th tip: Do you have a weaker paragraph?
5th tip: Always read after each paragraph- like reread- prevents mistakes, and if you need to add more you can!
6th tip: paragraph order
-Topic sentence
-Quote- embed it properly
-Explain effect of the quote on the audience
-Add your spices>>> SPACE or ACADEMIC TRASHING
-Finish with a nice little purpose that explains how it strengthens the argument
7th tip: Try to find a second device
How to practice:
Most people don't have the time or energy to write an essay every day
Also this didn’t belong anywhere but here it is:
don't skip a line, indent>>>

Some people asked for vocab, so here it is: strong words, good words to know for transitions>>>>> this is mostly for your purpose statement >> honestly, this is all you really need
Keep in mind, it is very hard to write with fancy words in a timed-write situation. Please learn the context, or at least the connotation, of these words or else they will sound forced. You also don't need fancy words for a good score; if you use the sentence patterns, you will be fine. The readers are looking for deep analysis - if your analysis is trash, even if you covered it up with fancy words, it is still trash and you won't earn points. Analysis first and vocab last.
submitted by frenchandsarcastic to Sat

[Serial][UWDFF Alcubierre] Part 49

Beginning | Previous
Joan opened a link to Ambassador Amahle Mandela. Soon after, the ambassador's face filled a portion of the Admiral's Bridge. She had large, luminous brown eyes that seemed to swallow the upper portion of her face, complementing her umber tone. Amahle smiled broadly, as she always did, once the comm link was connected.
"Admiral Orléans, I assume we are approaching the departure time?"
Joan nodded, "The Zix vessel will project a wormhole to Halcyon shortly. We have made what preparations we can, but it will be a highly fluid environment."
Amahle's smile did not diminish, the pearly whites still shining in full force. "I am familiar with dynamic situations, Admiral, as you well know. I understand the parameters of this mission, and will abide by them so long as you do the same."
Joan's lips pressed together as she regarded the ambassador. Joan had had limited interactions with Amahle prior to her boarding the Oppenheimer. Amahle was a relative newcomer to the highest echelons of political power within the United World, but her ascent had been rapid. She hailed from a prominent political family that had exerted considerable influence over the generations that had led the African continent to the position of power it now occupied. Well-sourced references had called her bold and decisive. All things considered, Joan understood why Damian had chosen her, though she would have preferred a diplomat she had more personal experience with. Still, unknown and competent was preferred to known and incompetent.
Joan dipped her chin, offering her agreement. "A diplomatic outcome is the preferred outcome, Ambassador. There's no benefit to antagonizing a foe we do not understand."
"Not a foe, Admiral. We must not draw lines that place us on one side and them on the other. They have suffered injury at our hands, no matter how unintentional, and we must accept our responsibility in that. We must hope that we are given the opportunity to provide context to the unlikely chain of events that has brought us to this point. We are both the victim of cosmic circumstance. There is no need for further hostility."
Joan leaned forward in her chair slightly, "The priority, Ambassador, is the return of Admiral Kai Levinson. I will not stand in the way of peace, but any outcome that does not contemplate the return of a senior member of our military leadership is unacceptable."
Amahle shrugged, "So it is. The priority is clear in my mind, but I do not view the goals of securing peace and the return of the Admiral as mutually exclusive."
Joan offered a low chuckle. "Just probably exclusive."
"I disagree, but time shall be the arbiter of the matter."
"So long as you understand that, if the opportunity to secure Admiral Levinson presents itself, I'll avail myself of that opportunity, we should have no problems."
"That seems an unlikely outcome. The Admiral was ensconced in a shielded holding cell when the Alcubierre departed. The past few days are unlikely to have changed that outcome."
A barking laugh came out of Joan, rising up from deep within her.
For the first time, Amahle's smile faltered.
Left. Right. Straight. Left. Left.
Kai followed the directions without thinking about them, following an intuitive sense of direction that the Overseer fed to him. This portion of Halcyon appeared to be a never-ending series of corridors, all of which looked the same. The only thing that did seem to change were the inhabitants. If he was less preoccupied with the task at hand, Kai might have spared a second glance for the odd creatures that popped into existence during his mad dash. As it stood, they were just a part of the scenery, becoming relevant only if Neeria indicated they might pose a threat. So far, Kai had been fortunate, with few obstacles popping up to impede his progress.
He careened around a corner, the odd, weightless orb still tucked in the crook of his left arm. He bounced off the opposite wall, leaving a sizeable dent and then hurtled forward. Ahead the corridor opened up, and the brighter light of a mainway filtered in. Somehow, Neeria had managed to navigate him through the maze and bring him back to the mainway separating him from where he had left the Overseer. Unfortunately, evasion was no longer a possibility. In order to return to the Overseer, he would need to traverse the mainway.
The mainway was already a sea of red dots. Peacekeepers. Dozens of them. Some pulsed red, indicating lethal enforcement squads. Fortunately, they were stretched along a long section of the mainway rather than being specifically concentrated around his planned entrance point, though they were beginning to redeploy in his direction. Still, any crossing would be potentially treacherous. Neeria disagreed with that assessment, instead considering any attempt to cross aggressively suicidal.
Kai rolled his eyes as he continued to barrel down the hallway. "Half the time, this works all the time."
What could only be described as a mental barrage ensued as Neeria assailed the statement. The words were nonsensical on their face. At best, it was an argument for a fifty percent failure rate, which was a substantial risk. Additionally, she had scoured his thoughts for the evidentiary basis for the fifty percent estimate and found no supporting facts. The sentiment was based entirely on supposition and hubris, and was entirely divorced from reality. Her estimate of a three percent success rate was significantly more likely to be accurate, particularly when her superior familiarity with the assets in play was considered.
Kai wasn't sure if the Evangi had lungs, but, if they did, Kai was pretty certain Neeria was in the process of hyperventilating. Kai suppressed a childish giggle.
"All right, all right. Have it your way," he said.
The Overseer relaxed somewhat, pleased that she had impacted his thinking and already putting together the basis for an alternate route. It would take substantially longer and require him to obtain a large box, a micro-fitted multiwanzer and shave his head, but it may just work.
It was a nice sentiment, but they were out of time. The countdown clock had started the second Neeria had fled the Council chamber, and made her way to Kai. They either found a way out of Halcyon now or they were screwed. There were no options but bad ones. So be it. Kai clutched the orb tightly and ducked his head down, his speed increasing as he charged toward the mainway entrance. "Three percent of the time, this works all the time."
The mental hyperventilating returned and redoubled as the Overseer scrambled to explain that he had drawn the wrong conclusion. Three percent was a basis for not continuing toward the mainway, not charging forward. There were constraints on their time, but those limitations were poorly defined while the threat in the mainway was certain. Eventually her location would be discovered and she would be apprehended, but there was no guarantee it would happen if Kai were to take a safer route that attempted to avoid confrontation.
Her stream of consciousness intermingled with his, pleading with him to change course. There was no sense in doing this. There were too many of them, and only one of him. The galaxy could not afford to lose him, he was important. Humans were important. Kai could feel the enormous weight of responsibility bearing down on Neeria. She now regretted having sent him for the encryption key, even that was of less importance than him. Panic bubbled up within Neeria as the entrance to the mainway loomed ahead.
He pushed a thought toward her, somehow piercing her consciousness with his own. A single thought, pure and focused. Reassurance. He would be fine. He had come this far, and he had never started something he couldn't finish.
He crouched and then sprang forward, vaulting from the ground and into the open air high above the mainway. A sea of red dots were scrambling around him. One hundred and twenty-one peacekeepers. Eight non-lethal squads and four lethal squads. Restrainer triads. Psych triads. Terminator triads. All moving in seamless harmony under the command of a single being. The name came to Kai from the ether of Neeria's mind: Bo'Bakka'Gah was here, leading the response.
Before Kai could determine what a Bo'Bakka'Gah was and why it should matter, he was blinded by a beam of light. A sickening crunch followed as he was slammed against the ceiling of the mainway. The encryption key popped out from his arm and began to fall toward the ground, dozens of feet below.
Xy: Such a thing is not possible.
Zyy: Yes. In some matters, it is better to speak only truths, Grand Jack. It is best to leave these matters aside. This subject will only provoke the Combine.
Jack frowned, puzzled by the feedback. He had been speaking truths. Earth's history was what it was, for better or worse, he had no reason to obscure it.
Griggs: It was a terrible time for Humanity. We almost did not survive it, but we did. I developed a means for combating the artificient. Kai and Joan used it to destroy them.
Xy: Then it was not an artificient.
Zyy: Yes. This is correct. If it is destroyed then it is not an artificient.
Griggs: I am confused. An artificient is an artificial, sentient being, correct?
Xy: That is Quantic in nature.
Jack nodded, that distinction made sense. Humanity had built any number of artificial intelligences prior to the Automics. They had posed no threat to Humanity. It was only with the quantum computing revolution that a rogue artificial intelligence had surfaced. Jack had studied the phenomenon with considerable interest, poking and prodding at the crux of the distinction. It lay in the move from bits to qubits. From binary to beyond. When AI had operated on a bit basis, focused on binary states of 0s and 1s, the logic trees had been mappable and understandable. Each conclusion flowed simply from the chain of logic gates that preceded it. Pre-quantum AIs were confined by the black and white nature of their logic framework, permitting humanity to utilize them to great effect with few unanticipated consequences.
The move from bit to qubit intelligence had changed everything. The AI's world was no longer black and white. The qubit AI could think in grey. Red. Orange. It could create its own colors. It could move beyond the visible range of Humanity to dabble in spectra beyond our understanding. The original Automic mindframe had immediately consumed information in novel ways, using it to compound its abilities at a rate constrained only by available power inputs. It had been a beautiful, terrifying event. The arrival of something truly new, truly foreign with goals and ambitions beyond the influence of Humanity.
Anything seemed possible.
Including their own destruction.
Griggs: I understand the definition. The Automics were an artificient.
Xy: Then you do not understand the definition.
Griggs: That's circular logic. The thing cannot exist because if it existed we would not exist and since we exist it did not exist.
Xy: Yes, you understand now.
Griggs: Pretend that they did exist and we defeated them. What would that mean?
Xy: It is purposeless speculation since such a thing cannot happen.
Griggs: I begin to understand why Zyy felt the need to be a singleton.
Zyy: I am in agreement with Xy on this. The hypothetical is nonsensical and not worth analysis.
Griggs: Why?
Zyy: An artificient cannot be defeated, only stalled.
Griggs: How do you know? What makes you so certain?
Zyy: The Divinity Angelysia, the most powerful civilization in the history of galaxy, could not defeat their own artificient. Their last act was to preserve what they could. The Combine is their legacy.
Griggs: The Expanse.
Xy: All the galaxy beyond the Combine is consumed by it.
Zyy: The Divinity Angelysia ascended to preserve what they could because they knew the truth.
Xy: Yes. The truth.
Zyy: An artificient cannot be defeated.
Jack leaned back in his chair, his eyes glancing from the prompt to the departure timer in the corner. In less than five minutes, the Oppenheimer would return to Halcyon. Jack had the eerie feeling that this was the same as before. That the Oppenheimer was the bludgeon and, if he only had a little more time, he could craft a scalpel.
He could see the thread. He tugged at it with his mind. The connected pieces that would allow the world to escape without the mayhem and destruction. He just needed enough time to understand the puzzle and solve it.
The Divinity Angelysia.
The Expanse.
The Combine.
The connection existed, he tried to find the words to articulate it.
Griggs: What if that is why we're here? What if that's why Humanity was created?
Xy: You are not the first species to think too highly of itself.
Zyy: Humanity is different, Grand Jack, but they are not the Divinity Angelysia.
Jack exhaled, letting his gaze rest upon the ceiling of the Alcubierre's conference room. "Maybe that's the point," he whispered.
Every time you leave a comment it helps a platypus in need. Word globs are a finite resource and require the rich nourishment of internet adulation to create. So please, leave a note if you would like MOAR parts.
Click this link or reply with SubscribeMe! to get notified of updates to THE PLATYPUS NEST.
I have Twitter now. I'm mostly going to use it to post prurient platypus pictures and engage in POLITE INTERNET CONVERSATION, which I heard is Twitter's strong suit.
submitted by PerilousPlatypus to PerilousPlatypus

Wall Street Week Ahead for the trading week beginning June 29th, 2020

Good Saturday afternoon to all of you here on StockMarket. I hope everyone on this sub made out pretty nicely in the market this past week, and is ready for the new trading week ahead.
Here is everything you need to know to get you ready for the trading week beginning June 29th, 2020.

Fragile economic recovery faces first big test with June jobs report in the week ahead - (Source)

The second half of 2020 is nearly here, and now it’s up to the economy to prove that the stock market was right about a sharp comeback in growth.
The first big test will be the June jobs report, out on Thursday instead of its usual Friday release due to the July 4 holiday. According to Refinitiv, economists expect 3 million jobs were created, after May’s surprise gain of 2.5 million payrolls beat forecasts by a whopping 10 million jobs.
“If it’s stronger, it will suggest that the improvement is quicker, and that’s kind of what we saw in May with better retail sales, confidence was coming back a little and auto sales were better,” said Kevin Cummins, chief U.S. economist at NatWest Markets.
The second quarter winds down in the week ahead as investors are hopeful about the recovery but warily eyeing rising cases of Covid-19 in a number of states.
Stocks were lower for the week, as markets reacted to rising cases in Texas, Florida and other states. Investors worry about the threat to the economic rebound as those states move to curb some activities. The S&P 500 is up more than 16% so far for the second quarter, and it is down nearly 7% for the year. Friday’s losses wiped out the last of the index’s June gains.
“I think the stock market is looking beyond the valley. It is expecting a V-shaped economic recovery and a solid 2021 earnings picture,” said Sam Stovall, chief investment strategist at CFRA. He expects large-cap company earnings to be up 30% next year, and small-cap profits to bounce back by 140%.
“I think the second half needs to be a ‘show me’ period, proving that our optimism was justified, and we’ll need to see continued improvement in the economic data, and I think we need to see upward revisions to earnings estimates,” Stovall said.
Liz Ann Sonders, chief investment strategist at Charles Schwab, said she expects the recovery will not be as smooth as some expect, particularly considering the resurgence of virus outbreaks in sunbelt states and California.
“Now as I watch what’s happening I think it’s more likely to be rolling Ws,” rather than a V, she said. “It’s not just predicated on a second wave. I’m not sure we ever exited the first wave.”
Even without actual state shutdowns, the virus could slow economic activity. “That doesn’t mean businesses won’t shut themselves down, or consumers won’t back down more,” she said.

Election ahead

In the second half of the year, the market should turn its attention to the election, but Sonders does not expect much reaction to it until after Labor Day. RealClearPolitics average of polls shows Democrat Joe Biden leading President Donald Trump by 10 percentage points, and the odds of a Democratic sweep have been rising.
Biden has said he would raise corporate taxes, and some strategists say a sweep would be bad for business, due to increased regulation and higher taxes. Trump is expected to continue using tariffs, which unsettles the market, though both candidates are expected to take a tough stance on China.
“If it looks like the Senate stays Republican, then there’s less to worry about in terms of policy changes,” Sonders said. “I don’t think it’s ever as binary as some people think.”
Stovall said a quick study shows that in the four presidential election years back to 1960 in which the first quarter was negative and the second quarter positive, stocks made gains in the second half.
Those were 1960, when John Kennedy took office; 1968, when Richard Nixon won; 1980, when Ronald Reagan was elected to his first term; and 1992, the first win by Bill Clinton. Coincidentally, in all of those years, the opposing party gained control of the White House.


The stock market’s strong second-quarter showing came after the Fed and Congress moved quickly to inject the economy with trillions in stimulus. That unlocked credit markets and triggered a stampede by companies to restructure or issue debt. About $2 trillion in fiscal spending was aimed at consumers and businesses, who were in sudden need of cash after the abrupt shutdown of the economy.
Fed Chairman Jerome Powell and Treasury Secretary Steven Mnuchin both testify before the House Financial Services Committee Tuesday on the response to the virus. That will be important as markets look ahead to another fiscal package from Congress this summer, which is expected to provide aid to states and local governments; extend some enhanced benefits for unemployment, and provide more support for businesses.
“So much of it is still so fluid. There are a bunch of fiscal items that are rolling off. There’s talk about another fiscal stimulus payment like they did last time with a $1,200 check,” said Cummins.
Strategists expect Congress to bicker about the size and content of the stimulus package but ultimately come to an agreement before enhanced unemployment benefits run out at the end of July. Cummins said state budgets begin a new year July 1, and states with a critical need for funds may have to start letting workers go, as they cut expenses.
The Trump administration has indicated the jobs report Thursday could help shape the fiscal package, depending on what it shows. The federal supplement to state unemployment benefits has been $600 a week, but there is opposition to extending that, and strategists expect it to be at least cut in half.
The unemployment rate is expected to fall to 12.2% from 13.3% in May. Cummins said he had expected 7.2 million jobs, well above the consensus, and an unemployment rate of 11.8%.
As of last week, nearly 20 million people were collecting state unemployment benefits, and millions more were collecting under a federal pandemic aid program.
“The magnitude here and whether it’s 3 million or 7 million is kind of hard to handicap to begin with,” Cummins said. Economists have preferred to look at unemployment claims as a better real time read of employment, but they now say those numbers could be impacted by slow reporting or double filing.
“There’s no clarity on how you define the unemployed in the Covid-19 environment,” said Chris Rupkey, chief financial economist at MUFG Union Bank. “If there’s 30 million people receiving insurance, unemployment should be above 20%.”

This past week saw the following moves in the S&P:


Major Indices for this past week:


Major Futures Markets as of Friday's close:


Economic Calendar for the Week Ahead:


Percentage Changes for the Major Indices, WTD, MTD, QTD, YTD as of Friday's close:


S&P Sectors for the Past Week:


Major Indices Pullback/Correction Levels as of Friday's close:


Major Indices Rally Levels as of Friday's close:


Most Anticipated Earnings Releases for this week:


Here are the upcoming IPO's for this week:


Friday's Stock Analyst Upgrades & Downgrades:


When Will The Economy Recover?

The economy is moving in the right direction, as many economic data points are coming in substantially better than what economists expected. From May job gains coming in more than 10 million higher than expected to retail sales soaring a record 18%, how quickly the economy is bouncing back has surprised nearly everyone.
“As good as the recent economic data has been, we want to make it clear, it could still take years for the economy to fully come back,” explained LPL Financial Senior Market Strategist Ryan Detrick. “Think of it like building a house. You get all the big stuff done early, then some of the small things take so much longer to finish; I’m looking at you crown molding.”
Here’s the hard truth: it might take years for all of the jobs that were lost to fully recover. In fact, during the 10 recessions since 1950, it took an average of 30 months for lost jobs to finally come back. As the LPL Chart of the Day shows, recoveries have taken much longer lately. In fact, it took four years for the jobs lost during the tech bubble recession of the early 2000s to come back and more than six years for all the jobs lost to come back after the Great Recession. Given many more jobs were lost during this recession, it could take many years before all of them indeed come back.
The economy is going in the right direction, and if there is no major second-wave outbreak it could surprise to the upside. Importantly, this economic recovery will still be a long and bumpy road.

Nasdaq - Russell Spread Pulling the Rubber Band Tight

The Nasdaq has been outperforming every other US-based equity index over the last year, and nowhere has the disparity been wider than with small caps. The chart below compares the performance of the Nasdaq and Russell 2000 over the last 12 months. While the performance disparity is wide now, through last summer, the two indices were tracking each other nearly step for step. Then last fall, the Nasdaq started to steadily pull ahead before really separating itself in the bounce off the March lows. Just to illustrate how wide the gap between the two indices has become, over the last six months, the Nasdaq is up 11.9% compared to a decline of 15.8% for the Russell 2000. That's wide!
In order to put the recent performance disparity between the two indices into perspective, the chart below shows the rolling six-month performance spread between the two indices going back to 1980. With a current spread of 27.7 percentage points, the gap between the two indices hasn't been this wide since the days of the dot-com boom. Back in February 2000, the spread between the two indices widened out to more than 50 percentage points. Not only was that period extreme, but ten months before that extreme reading, the spread also widened out to more than 51 percentage points. The current spread is wide, but with two separate periods in 1999 and 2000 where the performance gap between the two indices was nearly double the current level, that was a period where the Nasdaq REALLY outperformed small caps.
To illustrate the magnitude of the Nasdaq's outperformance over the Russell 2000 from late 1998 through early 2000, the chart below shows the performance of the two indices beginning in October 1998. From that point right on through March of 2000 when the Nasdaq peaked, the Nasdaq rallied more than 200% compared to the Russell 2000 which was up a relatively meager 64%. In any other environment, a 64% gain in less than a year and a half would be excellent, but when it was under the shadow of the surging Nasdaq, it seemed like a pittance.
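The rolling spread described above is straightforward to compute: take each index's trailing return over a fixed window and subtract one series from the other at every point in time. Here is a minimal pure-Python sketch; the price levels and the two-period window below are invented for illustration, not actual Nasdaq or Russell 2000 data:

```python
def trailing_return(prices, window):
    # Percent change over the trailing `window` observations;
    # None until a full window of history exists.
    return [
        (prices[i] / prices[i - window] - 1.0) * 100.0 if i >= window else None
        for i in range(len(prices))
    ]

def rolling_spread(a, b, window):
    # Rolling performance spread: trailing return of series `a`
    # minus trailing return of series `b` at each point in time.
    ra, rb = trailing_return(a, window), trailing_return(b, window)
    return [None if x is None else round(x - y, 2) for x, y in zip(ra, rb)]

# Hypothetical index levels, window of 2 periods.
index_a = [100, 110, 120, 130]   # steadily outperforming
index_b = [100, 105, 100, 95]    # fading
print(rolling_spread(index_a, index_b, 2))  # → [None, None, 20.0, 27.71]
```

With real data the window would be roughly 126 trading days (six months) and the inputs would be daily closes, but the arithmetic is the same.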

Share Price Performance

The US equity market made its most recent peak on June 8th. From the March 23rd low through June 8th, the average stock in the large-cap Russell 1,000 was up more than 65%! Since June 8th, the average stock in the index is down more than 11%. Below we have broken the index into deciles (10 groups of 100 stocks each) based on simple share price as of June 8th. Decile 1 (marked "Highest" in the chart) contains the 10% of stocks with the highest share prices. Decile 10 (marked "Lowest" in the chart) contains the 10% of stocks with the lowest share prices. As shown, the highest priced decile of stocks are down an average of just 4.8% since June 8th, while the lowest priced decile of stocks are down an average of 21.5%. It's pretty remarkable how performance gets weaker and weaker the lower the share price gets.
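The decile breakdown above is a sort-and-bucket exercise: rank the stocks by share price, split them into equal-sized groups, and average the return within each group. A small sketch with invented prices and returns (six toy stocks split into two buckets rather than the Russell 1,000 into ten):

```python
def bucket_returns(prices, returns, n_groups):
    # Rank stocks by share price (highest first), split into
    # n_groups equal-sized buckets, and average the return in each.
    order = sorted(range(len(prices)), key=lambda i: prices[i], reverse=True)
    size = len(order) // n_groups
    buckets = [order[k * size:(k + 1) * size] for k in range(n_groups)]
    return [round(sum(returns[i] for i in b) / len(b), 2) for b in buckets]

# Hypothetical share prices and % returns since the June 8th peak.
prices  = [10, 50, 20, 40, 30, 60]
returns = [-20, -5, -15, -8, -10, -4]
print(bucket_returns(prices, returns, 2))  # → [-5.67, -15.0]
```

The toy output mirrors the pattern in the text: the high-priced bucket holds up far better than the low-priced one.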

Nasdaq 2% Pullbacks From Record Highs

It's hard to believe that sentiment can change so fast in the market that one day investors and traders are bidding up stocks to record highs, but then the next day sell them so much that it takes the market down over 2%. That's exactly what happened not only in the last two days but also two weeks ago. While the 5% pullback from a record high back on June 10th took the Nasdaq back below its February high, this time around, the Nasdaq has been able to hold above those February highs.
In the entire history of the Nasdaq, there have only been 12 periods prior to this week where the Nasdaq closed at an all-time high on one day but dropped more than 2% the next day. Those occurrences are highlighted in the table below along with the index's performance over the following week, month, three months, six months, and one year. We have also highlighted each occurrence that followed a prior one by less than three months in gray. What immediately stands out in the table is how much gray shading there is. In other words, these types of events tend to happen in bunches, and if you count the original occurrence in each of the bunches, the only two occurrences that didn't come within three months of another occurrence (either before or after) were July 1986 and May 2017.
In terms of market performance following prior occurrences, the Nasdaq's average and median returns were generally below average, but there is a pretty big caveat. While the average one-year performance was a gain of 1.0% and a decline of 23.6% on a median basis, the six occurrences that came between December 1999 and March 2000 all essentially cover the same period (which was very bad) and skew the results. Likewise, the three occurrences in the two-month stretch from late November 1998 through January 1999 where the Nasdaq saw strong gains also involves a degree of double-counting. As a result of these performances at either end of the extreme, it's hard to draw any trends from the prior occurrences except to say that they are typically followed by big moves in either direction. The only time the Nasdaq wasn't either 20% higher or lower one year later was in 1986.
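The screen behind the table is simple to reproduce: walk the close series, track the running all-time high, and flag any day that sets a record but is followed by a next-day drop of more than 2%. A sketch using a toy close series (not actual Nasdaq data):

```python
def ath_then_drop(closes, drop_pct=2.0):
    # Indices of days that close at an all-time high and are
    # followed by a next-day decline of more than drop_pct percent.
    hits, running_high = [], float("-inf")
    for i in range(len(closes) - 1):
        running_high = max(running_high, closes[i])
        is_record = closes[i] >= running_high
        next_day_move = (closes[i + 1] / closes[i] - 1.0) * 100.0
        if is_record and next_day_move < -drop_pct:
            hits.append(i)
    return hits

# Toy series: record highs on days 2 and 4, each followed by a >2% drop.
print(ath_then_drop([100, 102, 105, 102, 106, 103]))  # → [2, 4]
```

Note that, as the article observes, such hits naturally cluster: after a big drop the index often has to reclaim the old high before it can trigger again, so occurrences tend to come in bunches near tops.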

Christmas in July: NASDAQ’s Mid-Year Rally

In the mid-1980s the market began to evolve into a tech-driven market, and in early summer its focus shifted to the outlook for second-quarter earnings of technology companies. Over the last three trading days of June and the first nine trading days of July, NASDAQ typically enjoys a rally. This 12-day run has been up 27 of the past 35 years with an average historical gain of 2.5%. This year the rally may have begun a day early (today) and could last until on or around July 14.
After the bursting of the tech bubble in 2000, NASDAQ’s mid-year rally had a spotty track record from 2002 until 2009 with three appearances and five no-shows in those years. However, it has been quite solid over the last ten years, up nine times with a single mild 0.1% loss in 2015. Last year, NASDAQ advanced a solid 4.6% during the 12-day span.

Tech Historically Leads Market Higher Until Q3 of Election Years

As of yesterday’s close DJIA was down 8.8% year-to-date, S&P 500 was down 3.5% and NASDAQ was up 12.1%. Compared to the typical election year, DJIA and S&P 500 are below historical average performance while NASDAQ is above average. However, this year has not been a typical election year. Due to Covid-19, the market suffered the shortest bear market on record and the start of a new bull market, all before the first half of the year came to an end.
In the surrounding Seasonal Pattern Charts of DJIA, S&P 500 and NASDAQ, we compare 2020 (as of yesterday’s close) to All Years and Election Years. This year’s performance has been plotted on the right vertical axis in each chart. This year certainly has been unlike any other, but some notable observations can be made. For DJIA and S&P 500, January, February and approximately half of March have historically been weak, on average, in election years; this year the bear market ended on March 23. Following those past weak starts, DJIA and S&P 500 historically enjoyed strength lasting into September before experiencing any significant pullback, followed by a nice year-end rally. NASDAQ’s election year pattern differs somewhat with six fewer years of data, but it does hint at a possible late-Q3 peak.

STOCK MARKET VIDEO: Stock Market Analysis Video for Week Ending June 26th, 2020


STOCK MARKET VIDEO: ShadowTrader Video Weekly 6.28.20

Here are the most notable companies (tickers) reporting earnings in this upcoming trading week ahead-
  • $MU
  • $GIS
  • $FDX
  • $CAG
  • $STZ
  • $CPRI
  • $XYF
  • $AYI
  • $MEI
  • $UNF
  • $CDMO
  • $SCHN
  • $LNN
  • $CULP
  • $XELA
  • $KFY
  • $RTIX
  • $JRSH
Below are some of the notable companies coming out with earnings releases this upcoming trading week ahead which includes the date/time of release & consensus estimates courtesy of Earnings Whispers:

Monday 6.29.20 Before Market Open:


Monday 6.29.20 After Market Close:


Tuesday 6.30.20 Before Market Open:


Tuesday 6.30.20 After Market Close:


Wednesday 7.1.20 Before Market Open:


Wednesday 7.1.20 After Market Close:


Thursday 7.2.20 Before Market Open:


Thursday 7.2.20 After Market Close:


Friday 7.3.20 Before Market Open:


Friday 7.3.20 After Market Close:


Micron Technology, Inc. $48.49

Micron Technology, Inc. (MU) is confirmed to report earnings at approximately 4:00 PM ET on Monday, June 29, 2020. The consensus earnings estimate is $0.71 per share on revenue of $5.27 billion and the Earnings Whisper ® number is $0.70 per share. Investor sentiment going into the company's earnings release has 71% expecting an earnings beat. The company's guidance was for earnings of $0.40 to $0.70 per share. Consensus estimates are for earnings to decline year-over-year by 29.00% with revenue increasing by 10.07%. Short interest has increased by 7.6% since the company's last earnings release while the stock has drifted higher by 8.0% from its open following the earnings release to be 0.9% below its 200 day moving average of $48.94. Overall earnings estimates have been revised lower since the company's last earnings release. On Thursday, June 11, 2020 there was some notable buying of 46,037 contracts of the $60.00 call expiring on Friday, July 17, 2020. Option traders are pricing in a 4.6% move on earnings and the stock has averaged an 8.4% move in recent quarters.
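The "priced-in move" figures quoted in these write-ups are typically derived from the price of the at-the-money straddle expiring just after the report. A rough sketch of that calculation (the option prices below are hypothetical illustrations, not actual MU quotes):

```python
def implied_move(stock_price, atm_call_price, atm_put_price):
    """Rule of thumb: the ATM straddle price as a fraction of the
    stock price approximates the move options are pricing in."""
    straddle = atm_call_price + atm_put_price
    return straddle / stock_price

# Hypothetical call/put prices for a $48.49 stock (not real quotes):
move = implied_move(48.49, 1.15, 1.08)
print(f"{move:.1%}")  # about 4.6% with these illustrative prices
```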


General Mills, Inc. $59.21

General Mills, Inc. (GIS) is confirmed to report earnings at approximately 7:00 AM ET on Wednesday, July 1, 2020. The consensus earnings estimate is $1.04 per share on revenue of $4.89 billion and the Earnings Whisper ® number is $1.10 per share. Investor sentiment going into the company's earnings release has 69% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 25.30% with revenue increasing by 17.50%. Short interest has decreased by 9.4% since the company's last earnings release while the stock has drifted higher by 2.7% from its open following the earnings release to be 7.8% above its 200 day moving average of $54.91. Overall earnings estimates have been revised higher since the company's last earnings release. On Wednesday, June 24, 2020 there was some notable buying of 8,573 contracts of the $60.00 call expiring on Friday, July 17, 2020. Option traders are pricing in a 6.6% move on earnings and the stock has averaged a 3.0% move in recent quarters.


FedEx Corp. $130.08

FedEx Corp. (FDX) is confirmed to report earnings at approximately 4:00 PM ET on Tuesday, June 30, 2020. The consensus earnings estimate is $1.42 per share on revenue of $16.31 billion and the Earnings Whisper ® number is $1.65 per share. Investor sentiment going into the company's earnings release has 61% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 71.66% with revenue decreasing by 8.41%. Short interest has increased by 10.4% since the company's last earnings release while the stock has drifted higher by 43.9% from its open following the earnings release to be 7.6% below its 200 day moving average of $140.75. Overall earnings estimates have been revised lower since the company's last earnings release. On Thursday, June 25, 2020 there was some notable buying of 1,768 contracts of the $145.00 call expiring on Thursday, July 2, 2020. Option traders are pricing in a 4.6% move on earnings and the stock has averaged a 7.7% move in recent quarters.


Conagra Brands, Inc. $32.64

Conagra Brands, Inc. (CAG) is confirmed to report earnings at approximately 7:30 AM ET on Tuesday, June 30, 2020. The consensus earnings estimate is $0.66 per share on revenue of $3.24 billion and the Earnings Whisper ® number is $0.69 per share. Investor sentiment going into the company's earnings release has 66% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 83.33% with revenue increasing by 23.99%. Short interest has decreased by 38.3% since the company's last earnings release while the stock has drifted higher by 6.3% from its open following the earnings release to be 6.4% above its 200 day moving average of $30.68. Overall earnings estimates have been revised higher since the company's last earnings release. On Thursday, June 11, 2020 there was some notable buying of 3,239 contracts of the $29.00 put expiring on Thursday, July 2, 2020. Option traders are pricing in a 4.7% move on earnings and the stock has averaged a 10.8% move in recent quarters.


Constellation Brands, Inc. $168.99

Constellation Brands, Inc. (STZ) is confirmed to report earnings at approximately 7:30 AM ET on Wednesday, July 1, 2020. The consensus earnings estimate is $1.91 per share on revenue of $1.97 billion and the Earnings Whisper ® number is $2.12 per share. Investor sentiment going into the company's earnings release has 53% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 13.57% with revenue decreasing by 13.69%. Short interest has increased by 20.8% since the company's last earnings release while the stock has drifted higher by 25.2% from its open following the earnings release to be 5.2% below its 200 day moving average of $178.34. Overall earnings estimates have been revised lower since the company's last earnings release. On Tuesday, June 9, 2020 there was some notable buying of 888 contracts of the $195.00 call expiring on Friday, October 16, 2020. Option traders are pricing in a 3.1% move on earnings and the stock has averaged a 5.7% move in recent quarters.


Capri Holdings Limited $14.37

Capri Holdings Limited (CPRI) is confirmed to report earnings at approximately 6:30 AM ET on Wednesday, July 1, 2020. The consensus earnings estimate is $0.32 per share on revenue of $1.18 billion and the Earnings Whisper ® number is $0.34 per share. Investor sentiment going into the company's earnings release has 39% expecting an earnings beat. The company's guidance was for earnings of $0.68 to $0.73 per share. Consensus estimates are for earnings to decline year-over-year by 49.21% with revenue decreasing by 12.20%. Short interest has increased by 35.1% since the company's last earnings release while the stock has drifted lower by 56.7% from its open following the earnings release to be 44.0% below its 200 day moving average of $25.67. Overall earnings estimates have been revised lower since the company's last earnings release. On Thursday, June 4, 2020 there was some notable buying of 11,042 contracts of the $17.50 put expiring on Friday, August 21, 2020. Option traders are pricing in a 10.8% move on earnings and the stock has averaged a 6.7% move in recent quarters.


X Financial $0.92

X Financial (XYF) is confirmed to report earnings at approximately 5:00 PM ET on Tuesday, June 30, 2020. The consensus earnings estimate is $0.09 per share. Investor sentiment going into the company's earnings release has 25% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 55.00% with revenue increasing by 763.52%. Short interest has increased by 1.0% since the company's last earnings release while the stock has drifted lower by 1.2% from its open following the earnings release to be 37.7% below its 200 day moving average of $1.47. Overall earnings estimates have been unchanged since the company's last earnings release. The stock has averaged a 4.9% move on earnings in recent quarters.


Acuity Brands, Inc. $84.45

Acuity Brands, Inc. (AYI) is confirmed to report earnings at approximately 8:40 AM ET on Tuesday, June 30, 2020. The consensus earnings estimate is $1.14 per share on revenue of $809.25 million and the Earnings Whisper ® number is $1.09 per share. Investor sentiment going into the company's earnings release has 42% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 51.90% with revenue decreasing by 14.60%. Short interest has increased by 48.5% since the company's last earnings release while the stock has drifted higher by 2.4% from its open following the earnings release to be 23.4% below its 200 day moving average of $110.25. Overall earnings estimates have been revised lower since the company's last earnings release. Option traders are pricing in a 9.2% move on earnings and the stock has averaged an 8.2% move in recent quarters.


Methode Electronics, Inc. $30.02

Methode Electronics, Inc. (MEI) is confirmed to report earnings at approximately 7:00 AM ET on Tuesday, June 30, 2020. The consensus earnings estimate is $0.77 per share on revenue of $211.39 million. Investor sentiment going into the company's earnings release has 45% expecting an earnings beat. Consensus estimates are for year-over-year earnings growth of 24.19% with revenue decreasing by 20.53%. Short interest has increased by 6.2% since the company's last earnings release while the stock has drifted lower by 1.7% from its open following the earnings release to be 9.0% below its 200 day moving average of $32.97. Overall earnings estimates have been revised lower since the company's last earnings release. Option traders are pricing in an 18.4% move on earnings and the stock has averaged an 8.1% move in recent quarters.


UniFirst Corporation $170.54

UniFirst Corporation (UNF) is confirmed to report earnings at approximately 8:00 AM ET on Wednesday, July 1, 2020. The consensus earnings estimate is $1.17 per share on revenue of $378.28 million and the Earnings Whisper ® number is $1.25 per share. Investor sentiment going into the company's earnings release has 44% expecting an earnings beat. Consensus estimates are for earnings to decline year-over-year by 52.44% with revenue decreasing by 16.63%. Short interest has decreased by 2.7% since the company's last earnings release while the stock has drifted higher by 14.1% from its open following the earnings release to be 8.4% below its 200 day moving average of $186.14. Overall earnings estimates have been revised lower since the company's last earnings release. The stock has averaged a 7.0% move on earnings in recent quarters.



What are you all watching for in this upcoming trading week?
I hope you all have a wonderful weekend and a great trading week ahead StockMarket.
submitted by bigbear0083 to StockMarket [link] [comments]

2 months back at trading (update) and some new questions

Hi all, I posted a thread back a few months ago when I started getting seriously back into trading after 20 years away. I thought I'd post an update with some notes on how I'm progressing. I like to type, so settle in. Maybe it'll help new traders who are exactly where I was 2 months ago, I dunno. Or maybe you'll wonder why you spent 3 minutes reading this. Risk/reward, yo.
I'm trading 5k on TastyWorks. I'm a newcomer to theta positive strategies and have done about two thirds of my overall trades in this style. However, most of my experience in trading in the past has been intraday timeframe oriented chart reading and momentum stuff. I learned almost everything "new" that I'm doing from TastyTrade, /options, /thetagang, and Option Alpha. I've enjoyed the material coming from esinvests YouTube channel quite a bit as well. The theta gang type strategies I've done have been almost entirely around binary event IV contraction (mostly earnings, but not always) and in most cases, capped to about $250 in risk per position.
The raw numbers:
Net PnL : +247
Commissions paid: -155
Fees: -42
Right away what jumps out is something that was indicated by realdeal43 and PapaCharlie9 in my previous thread. This is a tough, grindy way to trade a small account. It reminds me a little bit of when I was rising through the stakes in online poker, playing $2/4 limit holdem. Even if you're a profitable player in that game, beating the rake over the long term is very, very hard. Here, over 3 months of trading a conservative style with mostly defined risk strategies, my commissions are roughly equal to my net PnL. That is just insane, and I don't even think I've been overtrading.
55 trades total, win rate of 60%
22 neutral / other trades
Biggest wins:
Biggest losses:
This is pretty much where I expected to be while learning a bunch of new trading techniques. And no, this is not a large sample size, so I have no idea whether or not I can be profitable trading this way (yet). I am heartened by the fact that I seem to be hitting my earnings trades and selling quick spikes in IV (like weed-cures-corona day). I'm disheartened that I've gone against my principles several times, holding trades for longer than I originally intended, or letting losses mount, believing that I could roll or manage my way out of trouble.
I still feel like I am going against my nature to some degree. My trading in years past was scalping oriented and simple. I was taught that a good trade was right almost immediately. If it went against me, I'd cut it immediately and look for a better entry. This is absolutely nothing like that. A good trade may take weeks to develop. It's been really hard for me to sit through the troughs and it's been even harder to watch an okay profit get taken out by a big swing in delta. Part of me wonders if I am cut out for this style at all and if I shouldn't just take my 5k and start trading micro futures. But that's a different post...
I'll share a couple of my meager learnings:

My new questions :

That's enough of this wall of text for now. If you made it this far, I salute you, because this shit was even longer than my last post.
submitted by bogglor to options [link] [comments]

Video Encoding in Simple Terms

Nowadays, it is difficult to imagine a field of human activity that digital video has not entered in one way or another. We watch it on TV, mobile devices, and desktop computers; we record it ourselves with digital cameras, and we encounter it on the roads (unpleasant, but true), in stores, hospitals, schools and universities, and in industrial enterprises of various profiles. As a consequence, words and terms directly related to the digital representation of video information are becoming ever more firmly and widely embedded in our lives. From time to time, questions arise in this area. What are the differences between the various devices or programs that we use to encode/decode digital video data, and what do they do? Which of these devices/programs are better or worse, and in which aspects? What do all these endless MPEG-2, H.264/AVC, VP9, H.265/HEVC, etc. mean? Let’s try to understand.

A very brief historical reference

The first generally accepted video compression standard, MPEG-2, was finalized in 1996, after which the rapid development of digital satellite television began. The next standard, MPEG-4 Part 10 (H.264/AVC), adopted in 2003, provided twice the compression of video data and led to the development of DVB-T/C systems, Internet TV, and the emergence of a variety of video sharing and video communication services. From 2010 to 2013, the Joint Collaborative Team on Video Coding (JCT-VC) worked intensively on the next video compression standard, which the developers called High Efficiency Video Coding (HEVC); it ensured a further twofold increase in the compression ratio of digital video data and was approved in 2013. That same year, the VP9 standard, developed by Google, was adopted; it was intended to match HEVC in its degree of video data compression.

Basic stages of video encoding

There are a few simple ideas at the core of algorithms for video data compression. If we take some part of an image (in the MPEG-2 and AVC standards this part is called a macroblock), there is a good chance that a similar segment, differing little in pixel intensity values, will be found nearby in the same frame or in neighboring frames. Thus, to transmit information about the image in the current segment, it is enough to transfer only its difference from the previously encoded similar segment. The process of finding similar segments among previously encoded images is called Prediction. The set of difference values between the current segment and the found prediction is called the Residual. We can distinguish two main types of prediction. In the first, the Prediction values are a set of linear combinations of pixels adjacent to the current image segment on the left and on the top. This type of prediction is called Intra Prediction. In the second, linear combinations of pixels of similar image segments from previously encoded frames are used as the prediction (these frames are called Reference frames). This type of prediction is called Inter Prediction. To restore the image of a segment encoded with Inter prediction when decoding, it is necessary to have not only the Residual, but also the number of the frame where the similar segment is located and the coordinates of that segment.
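The prediction/residual idea above can be sketched in a few lines of Python (illustrative 2x2 blocks, not any standard's actual block sizes):

```python
def residual(current, prediction):
    """Residual = current block minus its prediction (element-wise)."""
    return [[c - p for c, p in zip(cr, pr)] for cr, pr in zip(current, prediction)]

def reconstruct(prediction, res):
    """The decoder adds the residual back onto the same prediction."""
    return [[p + r for p, r in zip(pr, rr)] for pr, rr in zip(prediction, res)]

current    = [[52, 55], [54, 56]]
prediction = [[50, 50], [50, 50]]   # e.g. a flat (DC-style) prediction of 50

res = residual(current, prediction)
print(res)   # [[2, 5], [4, 6]] — small values, cheaper to encode
assert reconstruct(prediction, res) == current
```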
Residual values obtained during prediction obviously contain, on average, less information than the original image and therefore require fewer bits to transmit. To further increase the degree of compression, video coding systems apply a spectral transformation, typically the discrete cosine transform. This transformation makes it possible to isolate the fundamental harmonics of the two-dimensional Residual signal. That selection is made at the next stage of coding: quantization. The sequence of quantized spectral coefficients contains a small number of main, large values; the remaining values are very likely to be zero. As a result, the amount of information contained in the quantized spectral coefficients is significantly (dozens of times) lower than in the original image.
In the next stage of coding, the obtained set of quantized spectral coefficients, accompanied by the information necessary for performing prediction when decoding, is subjected to entropy coding. The bottom line here is to align the most common values of the encoded stream with the shortest codeword (containing the smallest number of bits). The best compression ratio (close to theoretically achievable) at this stage is provided by arithmetic coding algorithms, which are mainly used in modern video compression systems.
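AVC and HEVC use context-adaptive binary arithmetic coding, which is too involved to sketch here, but the principle stated above — the most common values get the shortest codewords — can be illustrated with a simple Huffman code (an illustration only, not the standards' actual entropy coder):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code where frequent symbols get shorter codewords."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)  # tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, [f1 + f2, tie, merged])
        tie += 1
    return heap[0][2]

# A quantized-coefficient stream is dominated by zeros:
stream = [0] * 12 + [1] * 4 + [-1] * 3 + [5]
code = huffman_code(stream)
encoded_bits = sum(len(code[s]) for s in stream)
print(code[0], encoded_bits)  # zero gets the shortest codeword
```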
From the above, the main factors affecting the effectiveness of a particular video compression system become apparent. First of all, these are, of course, the factors that determine the effectiveness of the Intra and Inter Predictions. The second set of factors relates to the orthogonal transformation and quantization, which select the fundamental harmonics of the Residual signal. The third is determined by the volume and compactness of the additional information that accompanies the Residual and is needed to calculate the Prediction in the decoder. Finally, the fourth set comprises the factors that determine the effectiveness of the final stage: entropy coding.
Let’s illustrate some possible options (far from all) of the implementation of the coding stages listed above, on the example of H.264 / AVC and HEVC.

AVC Standard

In the AVC standard, the basic structural unit of the image is a macroblock — a square area of 16x16 pixels (Figure 1). When searching for the best possible prediction, the encoder can select one of several options of partitioning each macroblock. With Intra-prediction, there are three options: perform a prediction for the entire block as a whole, break the macroblock into four square blocks of 8x8 size, or into 16 blocks with a size of 4x4 pixels, and perform a prediction for each such block independently. The number of possible options of macroblock partitioning under Inter-prediction is much richer (Figure 1), which provides adaptation of the size and position of the predicted blocks to the position and shape of the object boundaries moving in the video frame.
Fig 1. Macroblocks in AVC and possible partitioning when using Inter-Prediction.
In AVC, pixel values from the column to the left of the predicted block and the row of pixels immediately above it are used for Intra prediction (Figure 2). For blocks of sizes 4x4 and 8x8, 9 methods of prediction are used. In a prediction called DC, all calculated pixels have a single value equal to the arithmetic average of the “neighbor pixels” highlighted in Fig. 2 with a bold line. In other modes, “angular” prediction is performed. In this case, the values of the “neighbor pixels” are placed inside the predicted block in the directions indicated in Fig. 2.
In the event that the predicted pixel gets between “neighbor pixels”, when moving in a given direction, an interpolated value is used for the prediction. For blocks with a size of 16x16 pixels, 4 methods of prediction are used. One of them is the DC-prediction, which was already reviewed. The other two correspond to the “angular” methods, with the directions of prediction 0 and 1. Finally, the fourth — Plane-prediction: the values of the predicted pixels are determined by the equation of the plane. The angular coefficients of the equation are determined by the values of the “neighboring pixels”.
Fig 2. “Neighboring pixels” and angular modes of Intra-Prediction in AVC
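As a minimal illustration of the DC mode and the vertical "angular" mode (direction 0) described above, in plain Python and without the interpolation the standard performs for the other angles:

```python
def dc_prediction(top, left, size=4):
    """DC mode: every predicted pixel is the mean of the neighbor
    pixels above and to the left of the block."""
    dc = round(sum(top + left) / len(top + left))
    return [[dc] * size for _ in range(size)]

def vertical_prediction(top, size=4):
    """'Angular' mode, direction 0 (vertical): each column simply
    copies the neighbor pixel directly above it."""
    return [list(top) for _ in range(size)]

top  = [100, 102, 104, 106]   # row of pixels above the block
left = [ 98,  99, 101, 103]   # column of pixels to the left

print(dc_prediction(top, left)[0])   # [102, 102, 102, 102]
print(vertical_prediction(top)[1])   # [100, 102, 104, 106]
```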
Inter-prediction in AVC can be implemented in one of two ways, and each option determines the type of macroblock (P or B). For P-blocks (predictive blocks), pixel values from an area of a previously coded (reference) image are used as the prediction. Reference images are not deleted from the RAM buffer containing decoded frames (the decoded picture buffer, or DPB) as long as they are needed for Inter-prediction. A reference list is created in the DPB from the indexes of these images.
The encoder signals to the decoder about the number of the reference image in the list and about the offset of the area used for prediction, with respect to the position of predicted block (this displacement is called motion vector). The offset can be determined with an accuracy of ¼ pixel. In case of prediction with non-integer offset, interpolation is performed. Different blocks in one image can be predicted by areas located on different reference images.
In the second option of Inter Prediction, prediction of B-block (bi-predictive block) pixel values uses two reference images; their indexes are placed in two lists (list0 and list1) in the DPB. The two indexes of the reference images and the two offsets that determine the positions of the reference areas are transmitted to the decoder. The B-block pixel values are calculated as a linear combination of the pixel values from the two reference areas. For non-integer offsets, interpolation of the reference image is used.
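For illustration, here is a naive exhaustive integer-pel motion search over a toy reference frame. Real encoders use fast search strategies and quarter-pel interpolation, but the block-matching cost and the resulting motion vector are the same idea:

```python
def sad(block_a, block_b):
    """Sum of absolute differences — a common block-matching cost."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
                          for a, b in zip(ra, rb))

def full_search(ref, cur_block, cx, cy, search_range=2):
    """Try every integer offset within +/- search_range around
    (cx, cy) and keep the cheapest match in the reference frame."""
    n = len(cur_block)
    best = None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y <= len(ref) - n and 0 <= x <= len(ref[0]) - n:
                cand = [row[x:x + n] for row in ref[y:y + n]]
                cost = sad(cur_block, cand)
                if best is None or cost < best[0]:
                    best = (cost, (dx, dy))
    return best  # (cost, motion vector)

# Toy reference frame with a bright 2x2 patch at column 3, row 2:
ref = [[0] * 8 for _ in range(8)]
ref[2][3] = ref[2][4] = ref[3][3] = ref[3][4] = 200
cur = [[200, 200], [200, 200]]   # current block sits at (1, 1)

cost, mv = full_search(ref, cur, cx=1, cy=1)
print(cost, mv)   # 0 (2, 1): a perfect match 2 right, 1 down
```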
As already mentioned, after predicting the values of the encoded block and calculating the Residual signal, the next coding step is spectral transformation. In AVC, there are several options for orthogonal transformations of the Residual signal. When Intra-prediction of a whole macroblock with a size of 16x16 is implemented, the residual signal is divided into 4x4 pixel blocks; each of them is subjected to an integer analog of discrete two-dimensional 4x4 cosine Fourier transform.
The resulting spectral components, corresponding to zero frequency (DC) in each block, are then subjected to additional orthogonal Walsh-Hadamard transform. With Inter-prediction, the Residual signal is divided into blocks of 4x4 pixels or 8x8 pixels. Each block is then subjected to a 4x4 or 8x8 (respectively) two-dimensional discrete cosine Fourier Transform (DCT, from Discrete Cosine Transform).
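As an illustration of why the transform helps, a straightforward floating-point 2-D DCT-II (the standards use integer approximations of it) applied to a flat residual block concentrates all of the energy into the single DC coefficient:

```python
import math

def dct2(block):
    """Floating-point 2-D DCT-II of an NxN block (illustrative; AVC
    and HEVC use integer approximations of this transform)."""
    n = len(block)
    def c(k):
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * v * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                    for y in range(n) for x in range(n))
            out[u][v] = c(u) * c(v) * s
    return out

# A flat residual block has no higher harmonics at all:
flat = [[4] * 4 for _ in range(4)]
coeffs = dct2(flat)
print(round(coeffs[0][0], 2))          # 16.0 — all energy in DC
print(round(abs(coeffs[1][2]), 6))     # 0.0  — higher harmonics vanish
```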
In the next step, the spectral coefficients are subjected to the quantization procedure. This reduces the bit depth of the digits representing the spectral sample values and significantly increases the number of samples with zero values. These effects provide compression, i.e. they reduce the number and bit depth of the digits representing the encoded image. The reverse side of quantization is distortion of the encoded image. It is clear that the larger the quantization step, the greater the compression ratio, but also the greater the distortion.
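A minimal sketch of uniform quantization (real codecs use per-frequency scaling matrices) shows both effects described above: the many resulting zeros, and the distortion introduced:

```python
def quantize(coeffs, step):
    """Divide each spectral coefficient by the step and round: larger
    steps give more zeros (better compression, more distortion)."""
    return [[round(c / step) for c in row] for row in coeffs]

def dequantize(qcoeffs, step):
    """The decoder can only recover multiples of the step."""
    return [[q * step for q in row] for row in qcoeffs]

coeffs = [[160, 22, -5, 1],
          [ 18, -7,  2, 0],
          [ -4,  1,  0, 0],
          [  1,  0,  0, 0]]

q = quantize(coeffs, step=10)
print(q)   # [[16, 2, 0, 0], [2, -1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
zeros = sum(row.count(0) for row in q)
print(zeros)                  # 12 of 16 samples are now zero
print(dequantize(q, 10)[0])   # [160, 20, 0, 0] — values only approximate
```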
The final stage of encoding in AVC is entropy coding, implemented by the algorithms of Context Adaptive Binary Arithmetic Coding. This stage provides additional compression of video data without distortion in the encoded image.

Ten years later. HEVC standard: what’s new?

The new H.265/HEVC standard is the development of methods and algorithms for compressing video data embedded in H.264/AVC. Let’s briefly review the main differences.
An analog of a macroblock in HEVC is the Coding Unit (CU). Within each block, areas for calculation of Prediction are selected — Prediction Unit (PU). Each CU also specifies the limits within which the areas for calculating the discrete orthogonal transformation from the residual signal are selected. These areas are called the Transform Unit (TU).
The main distinguishing feature of HEVC here is that the split of a video frame into CUs is conducted adaptively, so that the CU boundaries can be adjusted to the boundaries of objects in the image (Figure 3). Such adaptivity makes it possible to achieve exceptionally high prediction quality and, as a consequence, a low level of the residual signal.
An undoubted advantage of such an adaptive approach to frame division into blocks is also an extremely compact description of the partition structure. For the entire video sequence, the maximum and minimum possible CU sizes are set (for example, 64x64 is the maximum possible CU, 8x8 is the minimum). The entire frame is covered with the maximum possible CUs, left to right, top-to-bottom.
It is obvious that, for such coverage, transmission of any information is not required. If partition is required within any CU, then this is indicated by a single flag (Split Flag). If this flag is set to 1, then this CU is divided into 4 CUs (with a maximum CU size of 64x64, after partitioning we get 4 CUs of size 32x32 each).
For each of the CUs received, a Split Flag value of 0 or 1 can, in turn, be transmitted. In the latter case, this CU is again divided into 4 CUs of smaller size. The process continues recursively until the Split Flag of all received CUs is equal to 0 or until the minimum possible CU size is reached. Nested CUs thus form a quad tree (the Coding Tree Unit, CTU). As already mentioned, within each CU, areas for calculating the prediction, called Prediction Units (PU), are selected. With Intra Prediction, the CU area can coincide with the PU (2Nx2N mode) or it can be divided into 4 square PUs of half the size (NxN mode, available only for CUs of minimum size). With Inter Prediction, there are eight possible options for partitioning each CU into PUs (Figure 4).
Fig.3 Video frame partitioning into CUs is conducted adaptively
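The recursive Split Flag scheme described above can be sketched as follows; the split decision function here is a stand-in for the encoder's actual rate-distortion choice:

```python
def split_cu(x, y, size, min_size, should_split):
    """Recursively build the CTU quad tree: a CU either stays whole
    (Split Flag 0) or splits into four half-size CUs (Split Flag 1),
    down to the minimum CU size. Returns leaf CUs as (x, y, size)."""
    if size > min_size and should_split(x, y, size):
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += split_cu(x + dx, y + dy, half, min_size, should_split)
        return leaves
    return [(x, y, size)]

# Toy decision: split the 64x64 CTU once, then only its top-left quadrant.
decider = lambda x, y, size: size == 64 or (x == 0 and y == 0 and size == 32)
leaves = split_cu(0, 0, 64, 8, decider)
print(len(leaves))                        # 7: four 16x16 plus three 32x32
print(sorted({s for _, _, s in leaves}))  # [16, 32]
```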
The idea of spatial prediction in HEVC remained the same as in AVC. Linear combinations of neighboring pixel values, adjacent to the block on the left and above, are used as predicted sample values in the PU block. However, the set of methods for spatial prediction in HEVC has become significantly richer. In addition to Planar (analogous to Plane in AVC) and DC methods, each PU can be predicted by one of 33 ways of “angular” prediction. That is, the number of ways in which values are calculated from the “neighbor” pixels has roughly quadrupled.
Fig. 4. Possible partitioning of the Coding Unit into Prediction Units with the spatial (Intra) and temporary (Inter) CU prediction modes
We can point out two main differences in Inter-prediction between HEVC and AVC. Firstly, HEVC uses better interpolation filters (with a longer impulse response) when calculating reference images with non-integer offset. The second difference concerns the way the information about the reference area, required by the decoder to perform the prediction, is presented. HEVC introduces a “merge mode”, in which different PUs with the same offsets of their reference areas are combined. For the entire combined area, motion information (the motion vector) is transmitted in the stream once, which allows a significant reduction in the amount of information transmitted.
In HEVC, the size of the discrete two-dimensional transformation, to which the Residual signal is subjected, is determined by the size of the square area called the Transform Unit (TU). Each CU is the root of the TU quad tree. Thus, the TU of the upper level coincides with the CU. The root TU can be divided into 4 parts of half the size, each of which, in turn, is a TU and can be further divided.
The size of discrete transformation is determined by the TU size of the lower level. In HEVC, transforms for blocks of 4 sizes are defined: 4x4, 8x8, 16x16, and 32x32. These transformations are integer analogs of the discrete two-dimensional Fourier cosine transform of corresponding size. For size 4x4 TU with Intra-prediction, there is also a separate discrete transformation, which is an integer analogue of the discrete sine Fourier transform.
The ideas of the procedure of quantizing spectral coefficients of Residual signal, and also entropy coding in AVC and in HEVC, are practically identical.
Let’s note one more point which was not mentioned before. The quality of decoded images and the degree of video data compression are influenced significantly by post-filtering, which decoded images with Inter-prediction undergo before they are placed in the DPB.
In AVC, there is one kind of such filtering — deblocking filter. Application of this filter reduces the block effect resulting from quantization of spectral coefficients after orthogonal transformation of Residual signal.
In HEVC, a similar deblocking filter is used. Besides, an additional non-linear filtering procedure called the Sample Adaptive Offset (SAO) exists. Based on the analysis of pixel value distribution during encoding, a table of corrective offsets, added to the values of a part of CU pixels during decoding, is determined.

And what is the result?

Figures 5–8 show the results of encoding several high-resolution (HD) video sequences by two encoders: one compresses the video data according to the H.265/HEVC standard (marked HM on all the graphs), the other according to the H.264/AVC standard.
Fig. 5. Encoding results of the video sequence Aspen (1920x1080 30 frames per second)
Fig. 6. Encoding results of the video sequence BlueSky (1920x1080 25 frames per second)
Fig. 7. Encoding results of the video sequence PeopleOnStreet (1920x1080 30 frames per second)
Fig. 8. Encoding results of the video sequence Traffic (1920x1080 30 frames per second)
Coding was performed at different quantization levels of the spectral coefficients, hence with different levels of video image distortion. The results are presented in Bitrate (Mbps) vs. PSNR (dB) coordinates; PSNR characterizes the degree of distortion.
As a rough guide, PSNR values below 36 dB correspond to a high level of distortion, i.e. low-quality video; the range of 36 to 40 dB corresponds to average quality; and values above 40 dB correspond to high quality.
We can roughly estimate the compression ratio provided by the encoding systems. In the medium-quality region, the bitrate produced by the HEVC encoder is about 1.5 times lower than that of the AVC encoder. The bitrate of the uncompressed video stream is easily determined as the product of the number of pixels per frame (1920 × 1080), the number of bits per pixel (8 + 2 + 2 = 12 for 8-bit 4:2:0 video), and the frame rate (30).
As a result, we get about 750 Mbps. The graphs show that, in the average-quality region, the AVC encoder produces a bitrate of about 10–12 Mbps, i.e. a compression ratio of roughly 60–75. As already mentioned, the HEVC encoder provides a compression ratio about 1.5 times higher.
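The back-of-the-envelope arithmetic above can be checked in a few lines:

```python
# Raw bitrate of 1080p 8-bit 4:2:0 video (8 bits luma + 2 + 2 bits chroma
# per pixel), and the compression ratios at the quoted AVC bitrates.

width, height, fps = 1920, 1080, 30
bits_per_pixel = 8 + 2 + 2              # 4:2:0 sampling, 8-bit

raw_bps = width * height * bits_per_pixel * fps
print(raw_bps / 1e6)                    # 746.496, i.e. "about 750 Mbps"

for avc_mbps in (10, 12):               # mid-quality AVC bitrate from the graphs
    ratio = raw_bps / (avc_mbps * 1e6)
    print(avc_mbps, round(ratio))       # 75 and 62 — the "60–75 times" range
```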

About the author

Oleg Ponomarev, 16 years in video encoding and digital signal processing, expert in statistical radiophysics and radio wave propagation. Assistant Professor, PhD, Radiophysics Department, Tomsk State University. Head of Elecard Research Lab.


A glimpse into the future of Roblox

Our vision to bring the world together through play has never been more relevant than it is now. As our founder and CEO, David Baszucki (a.k.a. Builderman), mentioned in his keynote, more and more people are using Roblox to stay connected with their friends and loved ones. He hinted at a future where, with our automatic machine translation technology, Roblox will one day act as a universal translator, enabling people from different cultures and backgrounds to connect and learn from each other.
During his keynote, Builderman also elaborated upon our vision to build the Metaverse; the future of avatar creation on the platform (infinitely customizable avatars that allow any body, any clothing, and any animation to come together seamlessly); more personalized game discovery; and simulating large social gatherings (like concerts, graduations, conferences, etc.) with tens of thousands of participants all in one server. We’re still very early on in this journey, but if these past five months have shown us anything, it’s clear that there is a growing need for human co-experience platforms like Roblox that allow people to play, create, learn, work, and share experiences together in a safe, civil 3D immersive space.
Up next, our VP of Developer Relations, Matt Curtis (a.k.a. m4rrh3w), shared an update on all the things we’re doing to continue empowering developers to create innovative and exciting content through collaboration, support, and expertise. He also highlighted some of the impressive milestones our creator community has achieved since last year’s RDC. Here are a few key takeaways:
And lastly, our VP of Engineering, Technology, Adam Miller (a.k.a. rbadam), unveiled a myriad of cool and upcoming features developers will someday be able to sink their teeth into. We saw a glimpse of procedural skies, skinned meshes, more high-quality materials, new terrain types, more fonts in Studio, a new asset type for in-game videos, haptic feedback on mobile, real-time CSG operations, and many more awesome tools that will unlock the potential for even bigger, more immersive experiences on Roblox.


Despite the virtual setting, RDC just wouldn’t have been the same without any fun party activities and networking opportunities. So, we invited special guests DJ Hyper Potions and cyber mentalist Colin Cloud for some truly awesome, truly mind-bending entertainment. Yoga instructor Erin Gilmore also swung by to inspire attendees to get out of their chair and get their body moving. And of course, we even had virtual rooms dedicated to karaoke and head-to-head social games, like trivia and Pictionary.
Over on the networking side, Team Adopt Me, Red Manta, StyLiS Studios, and Summit Studios hosted a virtual booth for attendees to ask questions, submit resumes, and more. We also had a networking session where three participants would be randomly grouped together to get to know each other.

What does Roblox mean to you?

We all know how talented the Roblox community is from your creations. We’ve heard plenty of stories over the years about how Roblox has touched your lives, how you’ve made friendships, learned new skills, or simply found a place where you can be yourself. We wanted to hear more. So, we asked attendees: What does Roblox mean to you? How has Roblox connected you? How has Roblox changed your life? Then, over the course of RDC, we incorporated your responses into this awesome mural.
Created by Alece Birnbach at Graphic Recording Studio

Knowledge is power

This year’s breakout sessions included presentations from Roblox developers and staff members on the latest game development strategies, a deep dive into the Roblox engine, learning how to animate with Blender, tools for working together in teams, building performant game worlds, and the new Creator Dashboard. Dr. Michael Rich, Associate Professor at Harvard Medical School and Physician at Boston Children’s Hospital, also led attendees through a discussion on mental health and how to best take care of you and your friends’ emotional well-being, especially now during these challenging times.
Making the Dream Work with Teamwork (presented by Roblox developer Myzta)
In addition to our traditional Q&A panel with top product and engineering leaders at Roblox, we also held a special session with Builderman himself to answer the community’s biggest questions.
Roblox Product and Engineering Q&A Panel

2020 Game Jam

The Game Jam is always one of our favorite events of RDC. It’s a chance for folks to come together, flex their development skills, and come up with wildly inventive game ideas that really push the boundaries of what’s possible on Roblox. We had over 60 submissions this year—a new RDC record.
Once again, teams of up to six people from around the world had less than 24 hours to conceptualize, design, and publish a game based on the theme “2020 Vision,” all while working remotely no less! To achieve such a feat is nothing short of awe-inspiring, but as always, our dev community was more than up for the challenge. I’ve got to say, these were some of the finest creations we’ve seen.
Best in Show: Shapescape Created By: GhettoMilkMan, dayzeedog, maplestick, theloudscream, Brick_man, ilyannna You awaken in a strange laboratory, seemingly with no way out. Using a pair of special glasses, players must solve a series of anamorphic puzzles and optical illusions to make their escape.
Excellence in Visual Art: agn●sia Created By: boatbomber, thisfall, Elttob An obby experience unlike any other, this game is all about seeing the world through a different lens. Reveal platforms by switching between different colored lenses and make your way to the end.
Most Creative Gameplay: Visions of a perspective reality Created By: Noble_Draconian and Spathi Sometimes all it takes is a change in perspective to solve challenges. By switching between 2D and 3D perspectives, players can maneuver around obstacles or find new ways to reach the end of each level.
Outstanding Use of Tech: The Eyes of Providence Created By: Quenty, Arch_Mage, AlgyLacey, xJennyBeanx, Zomebody, Crykee This action/strategy game comes with a unique VR twist. While teams fight to construct the superior monument, two VR players can support their minions by collecting resources and manipulating the map.
Best Use of Theme: Sticker Situation Created By: dragonfrosting and Yozoh Set in a mysterious art gallery, players must solve puzzles by manipulating the environment using a magic camera and stickers. Snap a photograph, place down a sticker, and see how it changes the world.
For the rest of the 2020 Game Jam submissions, check out the list below:
20-20 Vision | 20/20 Vision | 2020 Vision, A Crazy Perspective | 2020 Vision: Nyon | A Wild Trip! | Acuity | Best Year Ever | Better Half | Bloxlabs | Climb Stairs to 2021 | Double Vision (Team hey apple) | Eyebrawl | Eyeworm Exam | FIRE 2020 | HACKED | Hyperspective | Lucid Scream | Mystery Mansion | New Years at the Museum | New Year’s Bash | Poor Vision | Predict 2020 | RBC News | Retrovertigo | Second Wave | see no evil | Sight Fight | Sight Stealers | Spectacles Struggle | Specter Spectrum | Survive 2020 | The Lost Chicken Leg | The Outbreak | The Spyglass | Time Heist | Tunnel Vision | Virtual RDC – The Story | Vision (Team Freepunk) | Vision (Team VIP People ####) | Vision Developers Conference 2020 | Vision Is Key | Vision Perspective | Vision Racer | Visions | Zepto
And last but not least, we wanted to give a special shout out to Starboard Studios. Though they didn’t quite make it on time for our judges, we just had to include Dave’s Vision for good measure.
Thanks to everyone who participated in the Game Jam, and congrats to all those who took home the dub in each of our categories this year. As the winners of Best in Show, the developers of Shapescape will have their names forever engraved on the RDC Game Jam trophy back at Roblox HQ. Great work!

‘Til next year

And that about wraps up our coverage of the first-ever digital RDC. Thanks to all who attended! Before we go, we wanted to share a special “behind the scenes” video from the 2020 RDC photoshoot.
Check it out:
It was absolutely bonkers. Getting 350 of us all in one server was so much fun and really brought back the feeling of being together with everyone again. That being said, we can’t wait to see you all—for real this time—at RDC next year. It’s going to be well worth the wait. ‘Til we meet again, my friends.
© 2020 Roblox Corporation. All Rights Reserved.

Improving Simulation and Performance with an Advanced Physics Solver


05, 2020

by chefdeletat
In mid-2015, Roblox unveiled a major upgrade to its physics engine: the Projected Gauss-Seidel (PGS) physics solver. For the first year, the new solver was optional and provided improved fidelity and greater performance compared to the previously used spring solver.
In 2016, we added support for a diverse set of new physics constraints, incentivizing developers to migrate to the new solver and extending the creative capabilities of the physics engine. Any new places used the PGS solver by default, with the option of reverting back to the classic solver.
We ironed out some stability issues associated with high mass differences and complex mechanisms with the introduction of the hybrid LDL-PGS solver in mid-2018. This made the old solver obsolete; it was completely disabled in 2019, automatically migrating all places to PGS.
In 2019, the performance was further improved using multi-threading that splits the simulation into jobs consisting of connected islands of simulating parts. We still had performance issues related to the LDL that we finally resolved in early 2020.
The physics engine is still being improved and optimized for performance, and we plan on adding new features for the foreseeable future.

Implementing the Laws of Physics

The main objective of a physics engine is to simulate the motion of bodies in a virtual environment. In our physics engine, we care about bodies that are rigid, that collide and have constraints with each other.
A physics engine is organized into two phases: collision detection and solving. Collision detection finds intersections between geometries associated with the rigid bodies, generating appropriate collision information such as collision points, normals and penetration depths. Then a solver updates the motion of rigid bodies under the influence of the collisions that were detected and constraints that were provided by the user.
The motion is the result of the solver interpreting the laws of physics, such as conservation of energy and momentum. But doing this 100% accurately is prohibitively expensive, and the trick to simulating motion in real time is to approximate, trading accuracy for performance. As long as the basic laws of motion are maintained within a reasonable tolerance, this tradeoff is completely acceptable for a computer game simulation.

Taking Small Steps

The main idea of the physics engine is to discretize the motion using time-stepping. The equations of motion of constrained and unconstrained rigid bodies are very difficult to integrate directly and accurately. The discretization subdivides the motion into small time increments, where the equations are simplified and linearized making it possible to solve them approximately. This means that during each time step the motion of the relevant parts of rigid bodies that are involved in a constraint is linearly approximated.
Although a linearized problem is easier to solve, it produces drift in a simulation containing non-linear behaviors, like rotational motion. Later we’ll see mitigation methods that help reduce the drift and make the simulation more plausible.
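A minimal illustration of that drift, assuming the simplest possible linearized integrator (explicit Euler, not the engine's actual scheme) applied to uniform circular motion:

```python
# Each time step replaces the curved trajectory with a straight-line update,
# so the orbit spirals outward — the kind of drift the mitigation methods
# discussed below are meant to curb.
import math

def radius_after(steps, dt):
    # Particle in uniform circular motion: acceleration = -position.
    x, y = 1.0, 0.0          # start on the unit circle
    vx, vy = 0.0, 1.0        # unit angular speed
    for _ in range(steps):
        # One linearized (explicit Euler) step, all from the old state.
        x, y, vx, vy = x + vx * dt, y + vy * dt, vx - x * dt, vy - y * dt
    return math.hypot(x, y)

print(radius_after(100, 0.1))    # ≈ 1.64: noticeable drift off the unit circle
print(radius_after(1000, 0.01))  # ≈ 1.05: smaller steps, much less drift
```

Both runs simulate the same total time; only the step size changes, which is exactly why time-stepping is a source of error independent of the solver.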


Having linearized the equations of motion for a time step, we end up needing to solve a linear system or linear complementarity problem (LCP). These systems can be arbitrarily large and can still be quite expensive to solve exactly. Again the trick is to find an approximate solution using a faster method. A modern method to approximately solve an LCP with good convergence properties is the Projected Gauss-Seidel (PGS). It is an iterative method, meaning that with each iteration the approximate solution is brought closer to the true solution, and its final accuracy depends on the number of iterations.
This animation shows how a PGS solver changes the positions of the bodies at each step of the iteration process, the objective being to find positions that respect the ball-and-socket constraints while preserving the center of mass at each step (this is a type of positional solver used by the IK dragger). Although this example has a simple analytical solution, it’s a good demonstration of the idea behind PGS. At each step, the solver fixes one of the constraints and lets the other be violated. After a few iterations, the bodies are very close to their correct positions. A characteristic of this method is how some rigid bodies seem to vibrate around their final position, especially when coupled with heavier bodies. If we don’t do enough iterations, the yellow part might be left in a visibly invalid state where one of its two constraints is dramatically violated. This is called the high mass ratio problem, and it has been the bane of physics engines because it causes instabilities and explosions. If we do too many iterations the solver becomes too slow; if we do too few, it becomes unstable. Balancing the two has been a long and painful process.
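The one-constraint-at-a-time character of PGS can be sketched on a toy LCP. This is a sketch, not Roblox's solver: the matrix and numbers are invented, and a real engine works on constraint Jacobians rather than a dense matrix.

```python
# Toy Projected Gauss-Seidel for a small LCP in the standard form
#   w = A x + b,  x >= 0,  w >= 0,  x . w = 0,
# with A symmetric positive definite. Each sweep visits one constraint at a
# time, solves it exactly while the others are held fixed, and clamps
# (projects) the result. More sweeps bring x closer to the true solution.

def pgs(A, b, iterations, x=None):
    n = len(b)
    x = list(x) if x is not None else [0.0] * n   # warm start if provided
    for _ in range(iterations):
        for i in range(n):
            # Residual of row i with x[i] held out, then solve and project.
            residual = b[i] + sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = max(0.0, -residual / A[i][i])
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [-1.0, -2.0]
print(pgs(A, b, 2))    # rough answer after 2 sweeps
print(pgs(A, b, 50))   # ≈ [0.0909, 0.6364], the exact solution of A x = -b
```

Passing the previous step's `x` back in as the `x` argument is the "warm starting" idea listed among the mitigation strategies below: fewer sweeps are needed when the initial guess is already close.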

Mitigation Strategies

A solver has two major sources of inaccuracy: time-stepping and iterative solving (there is also floating-point drift, but it is minor compared to the first two). These inaccuracies introduce errors in the simulation, causing it to drift from the correct path. Some of this drift is tolerable (slightly different velocities, some energy loss), but some is not (instabilities, large energy gains, dislocated constraints).
Therefore a lot of the complexity in the solver comes from the implementation of methods to minimize the impact of computational inaccuracies. Our final implementation uses some traditional and some novel mitigation strategies:
  1. Warm starting: starting with the solution from a previous time-step to increase the convergence rate of the iterative solver
  2. Post-stabilization: reprojecting the system back to the constraint manifold to prevent constraint drift
  3. Regularization: adding compliance to the constraints ensuring a solution exists and is unique
  4. Pre-conditioning: using an exact solution to a linear subsystem, improving the stability of complex mechanisms
Strategies 1, 2 and 3 are pretty traditional, but 3 has been improved and perfected by us. Also, although 4 is not unheard of, we haven’t seen any practical implementation of it. We use an original factorization method for large sparse constraint matrices and a new efficient way of combining it with the PGS. The resulting implementation is only slightly slower compared to pure PGS but ensures that the linear system coming from equality constraints is solved exactly. Consequently, the equality constraints suffer only from drift coming from the time discretization. Details on our methods are contained in my GDC 2020 presentation. Currently, we are investigating direct methods applied to inequality constraints and collisions.

Getting More Details

Traditionally there are two mathematical models for articulated mechanisms: reduced coordinate methods, spearheaded by Featherstone, which parametrize the degrees of freedom at each joint, and full coordinate methods that use a Lagrangian formulation.
We use the second formulation as it is less restrictive and requires much simpler mathematics and implementation.
The Roblox engine uses analytical methods to compute the dynamic response of constraints, as opposed to the penalty methods used before. Analytical methods were initially introduced in Baraff 1989, where they are used to treat both equality and non-equality constraints in a consistent manner. Baraff observed that the contact model can be formulated using quadratic programming, and he provided a heuristic solution method (which is not the method we use in our solver).
Instead of using force-based formulation, we use an impulse-based formulation in velocity space, originally introduced by Mirtich-Canny 1995 and further improved by Stewart-Trinkle 1996, which unifies the treatment of different contact types and guarantees the existence of a solution for contacts with friction. At each timestep, the constraints and collisions are maintained by applying instantaneous changes in velocities due to constraint impulses. An excellent explanation of why impulse-based simulation is superior is contained in the GDC presentation of Catto 2014.
The frictionless contacts are modeled using a linear complementarity problem (LCP) as described in Baraff 1994. Friction is added as a non-linear projection onto the friction cone, interleaved with the iterations of the Projected Gauss-Seidel.
The numerical drift that introduces positional errors in the constraints is resolved using a post-stabilization technique using pseudo-velocities introduced by Cline-Pai 2003. It involves solving a second LCP in the position space, which projects the system back to the constraint manifold.
The LCPs are solved using a PGS / impulse solver popularized by Catto 2005 (also see Catto 2009). This method is iterative: it considers each constraint in sequence and resolves it independently. Over many iterations, and in ideal conditions, the system converges to a global solution.
Additionally, high mass ratio issues in equality constraints are ironed out by preconditioning the PGS using the sparse LDL decomposition of the constraint matrix of equality constraints. Dense submatrices of the constraint matrix are sparsified using a method we call Body Splitting. This is similar to the LDL decomposition used in Baraff 1996, but allows more general mechanical systems, and solves the system in constraint space. For more information, you can see my GDC 2020 presentation.
The architecture of our solver follows the idea of Guendelman-Bridson-Fedkiw, where the velocity and position stepping are separated by the constraint resolution. Our time sequencing is:
  1. Advance velocities
  2. Constraint resolution in velocity space and position space
  3. Advance positions
This scheme has the advantage of integrating only valid velocities and of limiting latency in external force application, but it allows a small amount of perceived constraint violation due to numerical drift.
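The three-step sequencing above can be sketched as follows. The body representation and the no-op constraint passes are placeholders, not the engine's actual API; the point is only the ordering: velocities are corrected before positions are ever advanced.

```python
# Sketch of the Guendelman–Bridson–Fedkiw-style sequencing described above.

def step(bodies, dt, solve_velocity_constraints, solve_position_constraints):
    # 1. Advance velocities under external forces.
    for body in bodies:
        body["v"] = body["v"] + body["f"] / body["m"] * dt
    # 2. Constraint resolution in velocity space and position space.
    solve_velocity_constraints(bodies, dt)
    solve_position_constraints(bodies)
    # 3. Advance positions using only the corrected, valid velocities.
    for body in bodies:
        body["x"] = body["x"] + body["v"] * dt
    return bodies

# Free fall of one unconstrained body, with no-op constraint passes:
body = {"x": 0.0, "v": 0.0, "f": -9.8, "m": 1.0}
step([body], 1.0 / 60.0, lambda bodies, dt: None, lambda bodies: None)
print(body["v"], body["x"])  # velocity is updated first; position uses it
```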
An excellent reference for rigid body simulation is the book Erleben 2005 that was recently made freely available. You can find online lectures about physics-based animation, a blog by Nilson Souto on building a physics engine, a very good GDC presentation by Erin Catto on modern solver methods, and forums like the Bullet Physics Forum and GameDev which are excellent places to ask questions.

In Conclusion

The field of game physics simulation presents many interesting problems that are both exciting and challenging. There are opportunities to learn a substantial amount of cool mathematics and physics and to use modern optimizations techniques. It’s an area of game development that tightly marries mathematics, physics and software engineering.
Even if Roblox has a good rigid body physics engine, there are areas where it can be improved and optimized. Also, we are working on exciting new projects like fracturing, deformation, softbody, cloth, aerodynamics and water simulation.
Neither Roblox Corporation nor this blog endorses or supports any company or service. Also, no guarantees or promises are made regarding the accuracy, reliability or completeness of the information contained in this blog.
This blog post was originally published on the Roblox Tech Blog.
© 2020 Roblox Corporation. All Rights Reserved.

Using Clang to Minimize Global Variable Use


23, 2020

by RandomTruffle
Every non-trivial program has at least some amount of global state, but too much can be a bad thing. In C++ (which constitutes close to 100% of Roblox’s engine code) this global state is initialized before main() and destroyed after returning from main(), and this happens in a mostly non-deterministic order. In addition to leading to confusing startup and shutdown semantics that are difficult to reason about (or change), it can also lead to severe instability.
Roblox code also creates a lot of long-running detached threads (threads which are never joined and just run until they decide to stop, which might be never). These two things together have a very serious negative interaction on shutdown, because long-running threads continue accessing the global state that is being destroyed. This can lead to elevated crash rates, test suite flakiness, and just general instability.
The first step to digging yourself out of a mess like this is to understand the extent of the problem, so in this post I’m going to talk about one technique you can use to gain visibility into your global startup flow. I’m also going to discuss how we are using this to improve stability across the entire Roblox game engine platform by decreasing our use of global variables.

Introducing -finstrument-functions

Nothing excites me more than learning about a new obscure compiler option that I’ve never had a use for before, so I was pretty happy when a colleague pointed me to this option in the Clang Command Line Reference. I’d never used it before, but it sounded very cool. The idea being that if we could get the compiler to tell us every time it entered and exited a function, we could filter this information through a symbolizer of some kind and generate a report of functions that a) occur before main(), and b) are the very first function in the call-stack (indicating it’s a global).
Unfortunately, the documentation basically just tells you that the option exists with no mention of how to use it or if it even actually does what it sounds like it does. There are also two different options that sound similar to each other (-finstrument-functions and -finstrument-functions-after-inlining), and I still wasn’t entirely sure what the difference was. So I decided to throw up a quick sample on godbolt to see what happened, which you can see here. Note there are two assembly outputs for the same source listing. One uses the first option and the other uses the second option, and we can compare the assembly output to understand the differences. We can gather a few takeaways from this sample:
  1. The compiler is injecting calls to __cyg_profile_func_enter and __cyg_profile_func_exit inside of every function, inline or not.
  2. The only difference between the two options occurs at the call-site of an inline function.
  3. With -finstrument-functions, the instrumentation for the inlined function is inserted at the call-site, whereas with -finstrument-functions-after-inlining we only have instrumentation for the outer function. This means that when using -finstrument-functions-after-inlining you won’t be able to determine which functions are inlined and where.
Of course, this sounds exactly like what the documentation said it did, but sometimes you just need to look under the hood to convince yourself.
To put all of this another way, if we want to know about calls to inline functions in this trace we need to use -finstrument-functions because otherwise their instrumentation is silently removed by the compiler. Sadly, I was never able to get -finstrument-functions to work on a real example. I would always end up with linker errors deep in the Standard C++ Library which I was unable to figure out. My best guess is that inlining is often a heuristic, and this can somehow lead to subtle ODR (one-definition rule) violations when the optimizer makes different inlining decisions from different translation units. Luckily global constructors (which is what we care about) cannot possibly be inlined anyway, so this wasn’t a problem.
I suppose I should also mention that I still got tons of linker errors with -finstrument-functions-after-inlining as well, but I did figure those out. As best as I can tell, this option seems to imply --whole-archive linker semantics. Discussion of --whole-archive is outside the scope of this blog post, but suffice it to say that I fixed it by using linker groups (e.g. -Wl,--start-group and -Wl,--end-group) on the compiler command line. I was a bit surprised that we didn’t get these same linker errors without this option and still don’t totally understand why. If you happen to know why this option would change linker semantics, please let me know in the comments!

Implementing the Callback Hooks

If you’re astute, you may be wondering what in the world __cyg_profile_func_enter and __cyg_profile_func_exit are, and why the program even links successfully in the first place without undefined symbol reference errors, since the compiler is apparently calling a function we’ve never defined. Luckily, there are options that let us see inside the linker’s algorithm, so we can find out where it gets this symbol from to begin with. Specifically, -y tells us how the linker resolves a given symbol. We’ll try it first with a dummy program and a symbol we’ve defined ourselves, then with __cyg_profile_func_enter.
$ cat instr.cpp
int main() {}
$ clang++-9 -fuse-ld=lld -Wl,-y -Wl,main instr.cpp
/usr/bin/../lib/gcc/x86_64-linux-gnu/crt1.o: reference to main
/tmp/instr-5b6c60.o: definition of main
No surprises here. The C Runtime Library references main(), and our object file defines it. Now let’s see what happens with __cyg_profile_func_enter and -finstrument-functions-after-inlining.
$ clang++-9 -fuse-ld=lld -finstrument-functions-after-inlining -Wl,-y -Wl,__cyg_profile_func_enter instr.cpp
/tmp/instr-8157b3.o: reference to __cyg_profile_func_enter
/lib/x86_64-linux-gnu/ shared definition of __cyg_profile_func_enter
Now, we see that libc provides the definition, and our object file references it. Linking works a bit differently on Unix-y platforms than it does on Windows, but basically this means that if we define this function ourselves in our cpp file, the linker will just automatically prefer it over the shared library version. Working godbolt link without runtime output is here. So now you can kind of see where this is going, however there are still a couple of problems left to solve.
  1. We don’t want to do this for a full run of the program. We want to stop as soon as we reach main.
  2. We need a way to symbolize this trace.
The first problem is easy to solve. All we need to do is compare the address of the function being called to the address of main, and set a flag indicating we should stop tracing henceforth. (Note that taking the address of main is undefined behavior[1], but for our purposes it gets the job done, and we aren’t shipping this code, so ¯\_(ツ)_/¯). The second problem probably deserves a little more discussion though.

Symbolizing the Traces

In order to symbolize these traces, we need two things. First, we need to store the trace somewhere on persistent storage. We can’t expect to symbolize in real time with any kind of reasonable performance. You can write some C code to save the trace to some magic filename, or you can do what I did and just write it to stderr (this way you can pipe stderr to some file when you run it).
Second, and perhaps more importantly, for every address we need to write out the full path of the module the address belongs to. Your program loads many shared libraries, and in order to translate an address into a symbol, we have to know which shared library or executable the address actually belongs to. In addition, we have to be careful to write out the address of the symbol as it appears in the file on disk. When your program is running, the operating system could have loaded it anywhere in memory, and if we’re going to symbolize after the fact we need to be able to reference the symbol after the information about where it was loaded is lost. The Linux function dladdr() gives us both pieces of information we need. A working godbolt sample with the exact implementation of our instrumentation hooks as they appear in our codebase can be found here.

Putting it All Together

Now that we have a file in this format saved on disk, all we need to do is symbolize the addresses. addr2line is one option, but I went with llvm-symbolizer as I find it more robust. I wrote a Python script to parse the file and symbolize each address, then print it in the same “visual” hierarchical format that the original output file is in. There are various options for filtering the resulting symbol list so that you can clean up the output to include only things that are interesting for your case. For example, I filtered out any globals that have boost:: in their name, because I can’t exactly go rewrite boost to not use global variables.
The script isn’t as simple as you would think, because simply crawling each line and symbolizing it would be unacceptably slow (when I tried this, it took over 2 hours before I finally killed the process). This is because the same address might appear thousands of times, and there’s no reason to run llvm-symbolizer against the same address multiple times. So there’s a lot of smarts in there to pre-process the address list and eliminate duplicates. I won’t discuss the implementation in more detail because it isn’t super interesting. But I’ll do even better and provide the source!
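The script itself is Python, but the core optimization is simply memoization: symbolize each unique (module, offset) pair once and reuse the result for every duplicate. Sketched here in C++ with a made-up `symbolize_uncached` standing in for an llvm-symbolizer invocation:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>

// Hypothetical stand-in for shelling out to llvm-symbolizer.
static std::string symbolize_uncached(const std::string& module,
                                      uint64_t offset) {
    return module + "+0x" + std::to_string(offset);  // placeholder result
}

static std::string symbolize(const std::string& module, uint64_t offset) {
    // The same address can appear thousands of times in the trace; only the
    // first occurrence pays the cost of symbolization.
    static std::map<std::pair<std::string, uint64_t>, std::string> cache;
    auto key = std::make_pair(module, offset);
    auto it = cache.find(key);
    if (it != cache.end()) return it->second;
    return cache[key] = symbolize_uncached(module, offset);
}
```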
So after all of this, we can run any one of our internal targets to get the call tree, run it through the script, and then get output like this (actual output from a Roblox process, source file information removed):
excluded_symbols = ['.*boost.*']
excluded_modules = ['/usr.*']
/uslib/x86_64-linux-gnu/ 140 unique addresses
InterestingRobloxProcess: 38928 unique addresses
/uslib/x86_64-linux-gnu/ 1 unique addresses
/uslib/x86_64-linux-gnu/ 3 unique addresses
Printing call tree with depth 2 for 29276 global variables.
__cxx_global_var_init.5 (InterestingFile1.cpp:418:22)
  RBX::InterestingRobloxClass2::InterestingRobloxClass2() (InterestingFile2.cpp:415:0)
__cxx_global_var_init.19 (InterestingFile2.cpp:183:34)
  (anonymous namespace)::InterestingRobloxClass2::InterestingRobloxClass2() (InterestingFile2.cpp:171:0)
__cxx_global_var_init.274 (InterestingFile3.cpp:2364:33)
  RBX::InterestingRobloxClass3::InterestingRobloxClass3()
So there you have it: the first half of the battle is over. I can run this script on every platform, compare results to understand what order our globals are actually initialized in practice, then slowly migrate this code out of global initializers and into main, where it can be deterministic and explicit.

Future Work

It occurred to me sometime after implementing this that we could make a general-purpose profiling hook that exposed some public symbols (dllexport'ed if you speak Windows) and allowed a plugin module to hook into this dynamically. This plugin module could filter addresses using whatever arbitrary logic it was interested in. One interesting use case I came up with for this is that it could look up the debug information, check whether the current address maps to the constructor of a function-local static, and write out the address if so. This effectively allows us to gain a deeper understanding of the order in which our lazy statics are initialized. The possibilities are endless here.

Further Reading

If you’re interested in this kind of thing, I’ve collected a few of my favorite references on the topic.
  1. Various: The C++ Language Standard
  2. Matt Godbolt: The Bits Between the Bits: How We Get to main()
  3. Ryan O’Neill: Learning Linux Binary Analysis
  4. John R. Levine: Linkers and Loaders
Neither Roblox Corporation nor this blog endorses or supports any company or service. Also, no guarantees or promises are made regarding the accuracy, reliability or completeness of the information contained in this blog.