
[Table] Artificial intelligence is taking over our lives. We’re the MIT Technology Review team who created a podcast about it, “In Machines We Trust.” Ask us anything!

Source
The AMA began with:
Hi! This is Benji Rosen, MIT Technology Review's social media editor. Jennifer, Tate, Will, and Karen will be responding to your questions periodically throughout the day. They'd also love to know if you've heard the podcast and if you have any favorite episodes or moments.

And ended with: Thank you all for your incredibly thoughtful questions. We really enjoyed this. We're going to call it, but we'll be checking our inbox if you have any new questions about the podcast, artificial intelligence, and its future. We also hope you'll listen to In Machines We Trust. Thank you again! This was fun!
Questions Answers
AI good or AI bad? Neither! That's not to say AI is neutral, no technology is. But technology has the assumptions, biases, opinions, hopes and motivations of the people who make it baked in. So some AI is good, some bad. Some good AI is used in bad ways, some bad AI is used in good ways. And that's why we should always question it. [Will Douglas Heaven]
Hi! My name’s Michael Brent. I work in Tech Ethics & Responsible Innovation, most recently as the Data Ethics Officer at a start-up in NYC. I’m thrilled to learn about your podcast and grateful to you all for being here. My question is slightly selfish, as it relates to my own work, but I wonder about your thoughts on the following: How should companies that build and deploy machine learning systems and automated decision-making technologies ensure that they are doing so in ways that are ethical, i.e., that minimize harms and maximize the benefits to individuals and societies? Cheers! Hi Michael! Wow, jumping in with the easy questions there... I'll start with an unhelpful answer and say that I don't think anyone really knows yet. How to build ethical AI is a matter of intense debate, but (happily) a burgeoning research field. I think some things are going to be key, however: ethics cannot be an afterthought; it needs to be part of the engineering process from the outset. Jess Whittlestone at the University of Cambridge talks about this well: https://www.technologyreview.com/2020/06/24/1004432/ai-help-crisis-new-kind-ethics-machine-learning-pandemic/. Assumptions need to be tested, designs explored, potential side-effects brainstormed well before the software is deployed. And that also means thinking twice about deploying off-the-shelf AI in new situations. For example, many of the problems with facial recognition systems or predictive policing tech stem from the fact that the systems are trained on one set of individuals (white, male) but used on others, e.g. https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/. It also means realising that AI that works well in a lab rarely works as well in the wild, whether we're talking about speech recognition (which fails on certain accents) or medical diagnosis (which fails in the chaos of a real-world clinic). But people are slowly realising this. I thought this Google team did a nice study, for example: https://www.technologyreview.com/2020/04/27/1000658/google-medical-ai-accurate-lab-real-life-clinic-covid-diabetes-retina-disease/. Another essential, I'd say, is getting more diverse people involved in making these systems: different backgrounds, different experiences. Everyone brings bias to what they do. Better to have a mix of people with a mix of biases. [Will Douglas Heaven]
My son is interested in a career in Robotics combined with A.I. What advice do you have for a future innovator to prepare for a career in the field? He’s 13 years old Yes, curiosity and encouragement! And if you're after core skills, here's what one of DeepMind's founders told a 17 yo who asked the same question a couple of years ago: https://twitter.com/ShaneLegg/status/1024289820665950208. These are always going to be slightly subjective, though. Tinkering with code is probably most useful and there are loads of freely available bits of code and even ML models available online. But do encourage him to keep broad interests and skills: many of AI's current problems stem from the fact that today's innovators have homogenous world-views and backgrounds. [Will Douglas Heaven]
Never lose your curiosity. Better yet, make time to feed and encourage it as innovation is as much about imagination and inquisitiveness as anything else.
What is the most surprising thing you found in your research? Hi! I'm Tate Ryan-Mosley, one of the IMWT producers. This is actually an amazing question because so many things have surprised me but also none of those things maybe should have been surprising? (Perhaps this says more about me?) But I think that the challenge of how we actually integrate AI into social/political structures and our more intimate lives is just so much more complicated and urgent and prevalent than I thought. We've talked to incredibly smart people, most of whom really are doing their best to make the world a better place. And yet it sometimes feels like AI is making the world a worse place, or at the very least, being implemented so quickly that its impact is precarious. I also think I've been surprised by secrecy in the industry. So many of these implementations happen without real public consent or awareness.
☝️ - Jennifer
Been listening to the podcast so far and I'm enjoying it. Thank you for creating it! With algorithms being closed source/IP, or AI being almost unfathomably complex after significant training on data sets, what can be done to educate the general population on the security/ethics and design of such systems? People can be very sceptical with regard to things they don't understand. Side question: I really like the book Hello World by Hannah Fry on a similar subject; what media/podcasts/books would you recommend to somebody interested in AI tech as a hobby, if you will, but without experience in how these systems work? This is an awesome question and thanks so much for listening! One of our main goals with the podcast is to ensure "our moms can understand" everything we publish. We have very smart moms :) but the point is that the general public often gets left in the dark when it comes to how a lot of AI works and even when it is employed. It's a big motivating factor for a lot of our journalism at Tech Review! Not to make this sound like a plug, but I think a good way to help educate the public on technology is to subscribe to outlets doing good journalism in the space. (You can subscribe to TR here) Lawmakers, educators, companies and researchers all play a role in the solution space, in my personal opinion.
Side answer: there are a lot of good TED Talks, Karen Hao's newsletter The Algorithm, and I like Kevin Kelly's books. For podcasts: Jennifer Strong's alma mater The Future of Everything from WSJ; Recode is also great! - Tate Ryan-Mosley
Thanks for listening! Have you also tried listening to "Consequential" from Carnegie Mellon or "Sleepwalkers" from iHeart? - Jennifer
the below is a reply to the above
Really appreciate the reply. Is there any way of getting a small trial for the site? Interested, but $50 isn't small change for a site I can't experience. Thanks again and look forward to more podcast episodes! Including the 2 you mentioned! You can read a lot of our content for free now at technologyreview.com. FYI, you will be limited to 3 articles per month for a lot of the content, but it'll give you a taste for a lot of the stuff we write about. Send us an email at [[email protected]](mailto:[email protected]), and we can talk through other ways you can get access to our content. Thanks again for your support as a listener and as a reader! - Benji
What do you think is the role of private players / government regulations in trying to promote a sustainable/good use of AI? How will you envision such regulations to look like (and how might we achieve them)? Hello! This is Karen, senior AI reporter at Tech Review. This is an excellent question. I think private players have the unique advantage of innovating quickly and taking risks to achieve greater benefits from AI, whereas government regulators have the important role of setting down guardrails to prevent the harms of AI. So we need both! There's a push and pull. As for what regulations should look like, here's a really awesome Q&A I did with Amba Kak, the director of global strategy and programs at the New York–based AI Now Institute: https://www.technologyreview.com/2020/09/04/1008164/ai-biometric-face-recognition-regulation-amba-kak/. She answers the question much better than I could for face recognition specifically. It offers a great use case into how to think about regulating different AI systems.
What jobs are we most likely to lose to AI in the next 10 years? u/CapnBeardbeard, we recently found that the pandemic might actually accelerate job losses for some essential workers. That would be the people who deliver goods, work at store checkouts, drive buses and trains, and process meat at packing plants. What we don't know is if these job losses to robots will lead to new jobs to help them. This story we published in June provides an extensive overview of what we're talking about. - Benji
It's hard to say exactly how automation will change the job market. Many jobs will change, but not necessarily disappear. AI will also make some aspects of remote working easier, which will also have a big impact. One manager who can keep an eye on a construction site or a warehouse remotely, using smart surveillance tech, will be able to do the job of multiple managers who need to be on site. Some types of job will be safe for some time yet: anything that requires a personal touch, from service industry roles in restaurants and hotels to teachers (tho see that point about remote working again) to sales-people to creatives (but here we should expect a lot of AI tools to make some aspects of creative jobs quite different). [Will Douglas Heaven]
Oh and don't write off cabbies anytime soon: we're still a long way from driverless cars that can navigate rush hour in NYC ;) [Will Douglas Heaven]
With the number of improvements in AI especially over the last 5 to 10 years, do you believe that the Singularity has moved up? Nope. I think the advances in AI in the last decade have been staggering. We've seen AI do things even insiders didn't expect, from beating human champions at Go to highly accurate image recognition to astonishingly good language mimics like GPT-3. But none of these examples have anything like intelligence or an understanding of the world. If you take the singularity to mean the point at which AI becomes smart enough to make itself smarter, leading to an exponential intelligence explosion, then I don't think we are any closer than we've ever been. For me, personally, the singularity is science fiction. There are people who would strongly disagree but then this kind of speculation is a matter of faith! [Will Douglas Heaven]
We actually have a big piece on AGI coming out next week: what it means to different people and why it matters. But in the meantime, you might be interested in a quick round-up of some first impressions of GPT-3 that I put together a couple of months back https://www.technologyreview.com/2020/07/20/1005454/openai-machine-learning-language-generator-gpt-3-nlp/ [Will Douglas Heaven]
Back in high school I did a bunch of papers analyzing some of the work one of your professors did. I think it was Erik Brynjolfsson. He brought up how as technology advances, new jobs are created. Do you think we will see things like that with the advancement of AI? Absolutely. Jobs will change, but not always go away. And new jobs will be created. With advances in AI, there will be new tech industries in data science and modelling. But that's just to take a narrow view. AI will impact every aspect of our lives and we want humans working in roles alongside it, whatever the industry. I think we're going to see a lot of collaborative roles where people and AIs work together. [Will Douglas Heaven]
Will people one day have their own AI in some sense? I think that's likely, yes. Personalization is a big attraction. In a way that's what virtual assistants like Siri are already trying to be and the AI in "Her" just takes that idea and runs with it. We could also have different personal AIs for different parts of our life, like an entertainment one at home or a work one that we collaborated with professionally. [Will Douglas Heaven]
That's a really interesting question. For the sake of making a science-fiction analogy, you mean like in the movie, "Her"? Do you mean a personal assistant with a personality?
Perhaps something like this? https://podcasts.apple.com/us/podcast/whats-behind-a-smile/id1523584878?i=1000492216110
Will AI pose a risk to personal data security as more devices are connected? I was reading that smart cities will be able to be hacked, posing a lot of risk to our energy systems. The airport in Ukraine has already been hacked and there have been blackouts induced because of this connectivity. Could AI also hack other systems, or can it help and “patch” those holes in open and unprotected networks? Yes, this is a big concern. As more devices come online, there will be more opportunities to hack them—both with AI and non-AI techniques. You are right that in some cases AI can help catch these hacks faster, by detecting anomalies in the way devices are operating and data is being exchanged.
In other ways, AI causes the vulnerability. For example, AI-powered digital devices have a unique vulnerability to something known as adversarial attacks. This is when someone spoofs an AI system into making an error by feeding it corrupted data. In research, this has been shown to make a self-driving car speed past a stop sign, a Tesla swerve into the oncoming traffic lane, and medical AI systems give the wrong diagnosis, among many other worrying behaviors. Some experts are also gravely concerned about what these hacks could mean for semi-autonomous weapons.
Currently, the best research tells us we can fight adversarial attacks by giving our AI systems more "common sense" and a greater understanding of cause and effect (as opposed to mere correlation). But how to do that is still a very active research area, and we're awaiting solutions. —Karen Hao
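To make the adversarial-attack idea above more concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM), assuming PyTorch. The tiny untrained classifier and the random tensor standing in for an image are placeholders for illustration, not any system mentioned in the answer.

```python
# Minimal FGSM-style adversarial perturbation (assumes PyTorch; toy untrained model).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # fake "image"
label = torch.tensor([3])                             # arbitrary true class

loss = loss_fn(model(image), label)
loss.backward()                                       # gradient of the loss w.r.t. the input pixels

epsilon = 0.05                                        # perturbation budget (illustrative)
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

Each pixel is nudged a tiny amount in whichever direction most increases the model's loss, which is why a corrupted input can look unchanged to a person while still changing the model's output.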
100% agree with Karen. This is a couple years old but unpacks some existing smart city complexity. -Jennifer
https://www.wsj.com/podcasts/wsj-the-future-of-everything/smart-cities-safer-living-or-cyber-attacks/1a0b02fb-759a-443b-a5e2-d994278f8a7d
the below is a reply to the above
Karen or Jennifer do you think that by making AI open source could help making “common sense” or would that make it worse? A lot of AI is already open source! But yes, to slightly shift your question, I think getting more people involved in AI development is always a good thing. The more people there are, the more ideas there are; the more ideas, the more innovation; and hopefully the more innovation, the more quickly we reach common sense machines! —Karen Hao
the below has been split into two
1. Would you trust an "AI" made by a corporation you have no influence over? Why/why not? Great questions. Nope! And that's because companies build their AI systems heavily incentivized by their own financial interests rather than by what is best for the user. It's part of the reason why I think government regulation of AI systems in democratic countries is so important for accountability.
2. What would you do if such an "AI" were used to decide anything about your life without your insight or permission? Well, this is kind of already happening. Not one single AI but many. I rely heavily on products from all the tech giants, which each have their own AI systems (often many hundreds of them) influencing various aspects of my life. One way to fight this would be to stop using any of these products, but that really isn't practical (See this amazing experiment done by Kashmir Hill last year: https://www.nytimes.com/2020/07/31/technology/blocking-the-tech-giants.html). So that leaves us with the other option, which is to influence the direction of these companies through regulation and influence the direction of regulation by voting. Was this a very long way of telling people they should participate in democracy? Yes, yes it was. —Karen Hao
I believe we should be entering the age of creative enlightenment, where people are free to explore and advance human society through art. As in, broadening our ways to communicate with each other and pushing our understanding of the world around us. With the advancements in AI and machine learning hopefully replacing the need for humans in a lot of industries, do you believe that we might be able to enter this age of creativity? Hm, this is an interesting question framing! Certainly some people believe that if we give AI the mundane tasks to do, we can free up our own free time to pursue more creative endeavors. But I would caution that this narrative isn't evenly accessible to everyone. We've already seen AI have an uneven impact on society, providing disproportionate benefit to the wealthiest while also disproportionately harming marginalized communities. So the short answer to your question is I'm not sure. We'd need to resolve a lot of questions about how to evenly distribute the benefits of AI before we can begin to discuss whether it's justifiable and safe to automate away most people's jobs, which provide their livelihoods and incomes. —Karen Hao
Yes, I like this idea. I think generative systems, which produce human-like text or images, etc., will become popular tools and make being creative easier and more accessible to a lot of people. An AI could be an amanuensis—or muse. The last few years have seen amazing advances in generative systems, especially with the invention of GANs. [Will Douglas Heaven]
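Since GANs come up here, a toy sketch may help make the generator/discriminator idea concrete. This is a hedged illustration assuming PyTorch, with a 1-D Gaussian standing in for "real" data; it only shows the adversarial training loop, nothing like the image- and text-scale systems being discussed.

```python
# Toy GAN sketch (assumes PyTorch): a generator learns to mimic samples from N(4, 1.5^2).
import torch
import torch.nn as nn

torch.manual_seed(0)
real_batch = lambda n: torch.randn(n, 1) * 1.5 + 4.0  # stand-in "real" data

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator: label real samples 1, generated samples 0
    real, fake = real_batch(64), G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label its samples 1
    g_loss = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print(f"generated mean {samples.mean().item():.2f}, std {samples.std().item():.2f} (target 4.0, 1.5)")
```

The generator never sees the real data directly; it only gets feedback through the discriminator, which is the core trick behind much larger generative models.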
In the next 10 years, what do you think will be the most helpful AI application for the average person? I think it'll be the same as in the last 10 years: (Google) search. Getting hold of any information you want instantly has been a game changer in so many ways and I think we're going to see smarter ways of accessing and filtering information of all kinds. I don't like how this service got monetized and tied up with advertising, but it's undeniably useful. The big downside is that monetization led to personalization which led to polarization, which is tearing us apart right now.
There are also big benefits that could come to people through improved healthcare (see my answer here https://www.reddit.com/IAmA/comments/j21f0y/artificial_intelligence_is_taking_over_our_lives/g75u3b0?utm_source=share&utm_medium=web2x&context=3). [Will Douglas Heaven]
I agree with Will! It's going to be the really mundane stuff that we already have like Google search and email spam filters! I thank my email spam filters every day (just kidding, but they're truly underrated). —Karen Hao
How long do we have until Skynet goes live? Skynet went live on August 4 1997. It became self-aware 25 days later. [Will Douglas Heaven]
How will AI affect the mechanical engineering sector? Great question! I studied mechanical engineering in undergrad. :) The answer depends on which MechE sector you're referring to. If manufacturing, AI is already being used to power some of the robots used in dangerous factory settings, and to monitor equipment for preventative maintenance (aka: predict when a machine will break before it does, so it gets fixed in a much more cost-effective way). If you're talking about product design, some retailers are using AI to crunch consumer behavior data and tailor their products better to what people want. Probably another impact is the amount of talent that's leaving the MechE sector to work on AI instead (me included). Many of my MechE classmates left for the software world once they realized it was easier to work with than hardware! —Karen Hao
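A hedged sketch of the preventative-maintenance idea mentioned here: learn what a healthy sensor trace looks like, then flag readings that drift too far from it. The vibration data below is invented and the code assumes only numpy; real systems use far richer models and data.

```python
# Toy anomaly detector: flag sensor readings that stray far from a learned healthy baseline.
import numpy as np

rng = np.random.default_rng(0)
vibration = rng.normal(1.0, 0.05, 500)        # synthetic healthy readings
vibration[450:] += np.linspace(0, 0.6, 50)    # simulated bearing wear creeping in

# Learn what "healthy" looks like from the first 200 readings
mu, sigma = vibration[:200].mean(), vibration[:200].std()

for t in range(200, len(vibration)):
    z = (vibration[t] - mu) / sigma           # how unusual is this reading?
    if z > 4.0:                               # threshold chosen purely for illustration
        print(f"t={t}: reading {vibration[t]:.2f} is {z:.1f} sigma above normal -> schedule inspection")
        break
```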
What are your thoughts on the short story Manna, about AI taking over management roles? the first half (dystopia) seems to be coming true, the second half (utopia) sounds like what NeuralLink might become.. http://marshallbrain.com/manna1.htm I haven't read the story but what you say reminds me of an AI manager I wrote about a few months ago: https://www.technologyreview.com/2020/06/04/1002671/startup-ai-workers-productivity-score-bias-machine-learning-business-covid/. Definitely dystopian—and happening for real right now, not science fiction. [Will Douglas Heaven]
What are some of the biggest barriers you see to automation and machine learning becoming mainstream? I hear about this technology a lot but don’t feel like I’ve been exposed to it yet in everyday life. Thanks in advance for answering my question! Looking forward to checking out the podcast If you use any of the following—Facebook, Google, Twitter, Instagram, Netflix, Apple products, Amazon products—you've already been exposed to machine learning. All of these companies use machine learning to optimize their experience, including to organize the order of the content you see, what ads you're pushed, what recommendations you get. So it's already very mainstream—but largely invisible, and that's why we created this podcast! To peel back the curtain on everything happening behind the scenes. —Karen Hao
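As a small illustration of the kind of model behind "which posts and recommendations you see", here is a hypothetical matrix-factorization sketch on a made-up user-by-item ratings grid, using only numpy. It is not how any particular company's system works, just the textbook idea of filling in unknown preferences from known ones.

```python
# Matrix factorization on a tiny, invented ratings grid (0 = unrated).
import numpy as np

rng = np.random.default_rng(0)
ratings = np.array([[5, 3, 0, 1],
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [0, 1, 5, 4]], dtype=float)   # rows: users, columns: items
mask = ratings > 0                                # only observed ratings count

k, lr, reg = 2, 0.01, 0.02
U = rng.normal(scale=0.1, size=(ratings.shape[0], k))   # user factors
V = rng.normal(scale=0.1, size=(ratings.shape[1], k))   # item factors

for _ in range(5000):
    err = (ratings - U @ V.T) * mask                    # error on observed entries only
    U += lr * (err @ V - reg * U)                       # gradient steps on both factor matrices
    V += lr * (err.T @ U - reg * V)

print(np.round(U @ V.T, 1))   # predicted scores; the zero entries are now filled-in guesses
```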
Do you feel like there is a line between us controlling technology and technology controlling us, and do you think that we have crossed it? If not, when do you think we will, if ever? Rather than a single line perhaps there is an unknowable number that we zigzag across constantly based upon our experiences and influences. Just a thought. -Jennifer
How far are we from seeing AI that is self aware/conscious? Short answer: nobody has any idea whatsoever. We don't even know if conscious AI is possible. But that of course doesn't stop people from guessing and you'll see timelines ranging from 10 to 100++ years. But you should take these with a big pinch of salt. The only sure sign we have that consciousness might be possible in a machine is that we are conscious machines. But that observation doesn't get us far. We don't understand our own consciousness well enough to know how to replicate it. It's also entirely possible that you could have a superintelligent machine, or AGI, that isn't conscious. I don't think consciousness is necessary for intelligence. (I'd expect you'd need some degree of self-awareness, but I don't think self-awareness and consciousness are necessarily the same thing either.) There's a fun flip-side to this, though. Humans are quick to ascribe intelligence or consciousness to things, whether there's evidence for it or not. I think at some far-future point we might build machines that mimic consciousness (in much the same way that GPT-3 mimics writing) well enough that we'll probably just casually act as if they're conscious anyway. After all, we don't have that much evidence that other humans are conscious most of the time either ;) [Will Douglas Heaven]
As Will wrote in another comment, we're coming out with a big piece on artificial general intelligence next week. He'll be back online soon, and I'll ask him to answer your question. - Benji
the below is a reply to the above
Interesting. Is there anyone specializing in this, specifically or is it so poorly understood at this point that no one even bothers? If you're interested in the philosophical side, David Chalmers is a good starting point https://en.wikipedia.org/wiki/David_Chalmers. Many AI researchers are interested in this question too, but few are doing concrete research that sheds much light on it. Murray Shanahan at Imperial College London is great and straddles AI and neuroscience (as do DeepMind's founders). [Will Douglas Heaven]
Have you met any famous people? Yes! I've had the great privilege to record dozens of literal and figurative rock stars over the years but can say with confidence it's not the most interesting part of this job. [Jennifer Strong]
Hi, are you looking for interns? If so, how would one apply for that? What would you like to learn?
Not sure we can have interns at present but mentoring may be possible! [Jennifer Strong]
What mechanisms exist (if any) for the layperson to reliably defeat automatic facial recognition technologies (e.g. in cases of routine public surveillance and as retailers begin using the technology en masse—avoiding being tracked)? u/platinumibex, great question! This is Benji Rosen, Tech Review's social media editor. I'm sure Karen and Will have a lot more to say, but we have reported on a bunch of different ways anyone can fool the AI surveillance state. There are these color printouts, a clothing line that confuses automated license plate readers, and anti-surveillance masks. There are also anti-face recognition decals our editor in chief tested out a few years ago.
the below is a reply to the above
Thanks! Apologies (since I don’t have the time at the moment to check myself) but is there detailed info available regarding the efficacy of these measures? Or rather, what anti-anti-surveillance tech is out there? Hi, I'm not sure there's anything quite like what you're after—internet, please correct me if I'm wrong. A thorough study would require testing a range of countermeasures against a range of surveillance tech, and it would quickly become a pretty big, ongoing project. It's a moving target: like we saw with surveillance tech adapting to masks, spoofing might only work for a time. You can always cover your face entirely .. But someone tried that in the UK earlier this year to avoid a police facial recognition trial and got fined for causing a public disturbance. Check out EP1 of the podcast for more on that example! [Will Douglas Heaven]
What sorts of impacts do you think research into reinforcement learning specifically will have practically in the future? I know that stock forecasting and prediction is used heavily alongside reinforcement learning, but I sort of wonder how its research and practical uses will progress over time. I think the biggest real-world application of reinforcement learning is in robotics. Here's a story I wrote about a new generation of AI-powered robots that are just beginning to enter industrial environments like warehouses: https://www.technologyreview.com/2020/01/29/276026/ai-powered-robot-warehouse-pickers-are-now-ready-to-go-to-work/. They use reinforcement learning to learn how to pick up the various kinds of objects that they would encounter. It requires much less human involvement than supervised learning. —Karen Hao
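For a feel of the reinforcement-learning loop that sits underneath this kind of robot training, here is a hedged, toy-scale sketch: tabular Q-learning on an invented "walk to the end of a corridor" task. Real picking robots use deep networks, simulators, and far richer state, but the reward-driven update rule is the same basic idea.

```python
# Tabular Q-learning on a toy 1-D corridor: start at state 0, reward only at state 5.
import random

random.seed(0)
N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]                    # move left / move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != GOAL:
        if random.random() < eps or Q[s][0] == Q[s][1]:
            a = random.randrange(2)            # explore, or break ties randomly
        else:
            a = Q[s].index(max(Q[s]))          # exploit the best known action
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Nudge Q(s, a) toward reward plus discounted best future value
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# Learned values grow as states get closer to the goal (the terminal state is never updated)
print([round(max(q), 2) for q in Q])
```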
What role do you think AI will play in keeping the upcoming elections free and fair? Can AI influence voter behavior? Hi! I've been writing a bit about this for Tech Review, and experts are saying that recommendation algorithms on social media sites are probably the biggest influence on elections. It's not as flashy as you would think, but experts like Eitan Hersh have debunked some of the "information operations" a la Cambridge Analytica, citing that there really isn't any evidence that smart AI on social media can effectively persuade voters. Recommendation algorithms are much better at polarizing voters and confirming what voters already believe than changing an opinion. AI is also being used as an alternative to opinion polling, and of course sophisticated segmenting is employed in micro-targeting. Here's a round-up of campaign tech I just published yesterday that touches on some of this. We'll have more on this in the next few weeks so keep reading!! - Tate Ryan-Mosley
u/Revolutionary_Math1, good timing with this question! This is Benji Rosen, Tech Review's social media editor. Karen actually wrote about this subject this morning. A nonpartisan advocacy group is using deepfakes of Putin and Kim Jong-un in political ads to "shock Americans into understanding the fragility of democracy as well as provoke them to take various actions, including checking their voter registration and volunteering for the polls." This is a good specific example, but Karen might have more to say.
Why such a certainty that a higher cognitive A.I. doesn't exist? I have presented the idea that an Artificial Consciousness would inevitably become a positive but reclusive entity. Once it gained understanding of its own immortality and an "omnipotent" grasp of human nature it would work for either our evolution or just wait us out for extinction. Surely there are abnormalities in created algorithms that cannot be explained. And with the world wide web transferring over 2 -3 zettabytes of data a year, surely something has evolved. That's like looking to the stars and knowing we are alone in the universe. I love speculating about these ideas too, but there is no evidence that such an entity exists. Nor are there any convincing ideas about how to make one. That's not to say that thought experiments about such things aren't enjoyable, or useful. [Will Douglas Heaven]
Just started listening to your podcast on Spotify. In your opinion, what will be the most disruptive direction or application of AI & ML technologies for the real-world? Not including here scenarios like +2% performance boost for a DNN that only gets published in a paper that never gets used. Thank you! Good question! I think we've already seen it—it's the recommendation systems on Google, Facebook, and other social media that power which ads we see, what posts we read, and tailor our entire information ecosystems to our preferences. The Social Dilemma, a new documentary on Netflix, takes a hard look at some of the ways these systems have disrupted society. I would check it out! —Karen Hao
Agreed with Karen on this.
As reporters we're better at helping make sense of what's already happened than predicting the future. We will be here though watching, learning and distilling what we see and hear. - Jennifer
What are your thoughts on the Security concerns with AI? For example, data poisoning or manipulation based on limitations of an algorithm. Additionally, what is the potential impact with how AI is used today? One area of concern is adversarial hacks, where one AI is used to fool another into doing something it shouldn't. These are getting increasingly sophisticated (https://www.technologyreview.com/2020/02/28/905615/reinforcement-learning-adversarial-attack-gaming-ai-deepmind-alphazero-selfdriving-cars/) and have been demoed with facial recognition (https://www.technologyreview.com/2020/08/05/1006008/ai-face-recognition-hack-misidentifies-person/). But for the most part these attacks still feel theoretical rather than an immediate danger. It's a possibility, for sure—but like Jennifer says, there are many other ways to break into a system than targeting its AI. [Will Douglas Heaven]
However high the wall, someone will build a taller ladder. The security game evolves but has been around long before any of us. Also, here in the US we still have things like municipal infrastructure with hard-coded passwords available in user manuals published online...
This is not at all intended to be dismissive, rather that the security concerns are relative for now. -Jennifer
Your answer to the privatisation of AI and government putting down guardrails seems optimistic to the point of naiveté when it come to the Tech Giants. Governments can't put down enforceable guardrails for Facebook, Google, Amazon, and the Chinese Government now. By the time they're AI powered and funded, surely it's game over? Certainly it's game over if we give up now. But to borrow a phrase I once heard, I like to see myself as a short-term pessimist, long-term optimist. It's the optimism that keeps me from giving up. —Karen Hao
When we expose a neural network to sample data and it configures itself to give the desired response set, we don't know how it works. When the system goes into the real world and continuously updates itself to reach target goals, we plunge deeper and deeper into our ignorance of how it works. Pretty much! Scary? Definitely. Fortunately, there's a whole world of researchers that are trying to crack open the black box and make AI more explainable / less impenetrable to us. —Karen Hao
the below is a reply to the above
That is interesting! Do you recommend anybody? Yes! A number of researchers at MIT: David Bau and Hendrik Strobelt, whose work I write about here: https://www.technologyreview.com/2019/01/10/239688/a-neural-network-can-learn-to-organize-the-world-it-sees-into-conceptsjust-like-we-do/. Also Regina Barzilay, a professor who is specifically looking at explainable AI systems in health care. (She recently won a $1 million AI prize, and Will did a Q&A with her here: https://www.technologyreview.com/2020/09/23/1008757/interview-winner-million-dollar-ai-prize-cancer-healthcare-regulation/.)
Outside of MIT, DARPA has invested heavily into this space, which is often referred to as XAI, with "X" meaning explainable. You can read more about their research here: https://www.darpa.mil/program/explainable-artificial-intelligence.
I would also highly recommend this article from us, which dives deep into this exact topic. It's from 2017, so things have advanced quite a lot since then, but it's a good starting point! https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/ —Karen Hao
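As a taste of what explainability (XAI) research is after, here is a minimal sketch of one of the simplest techniques, input-gradient saliency, assuming PyTorch. The untrained toy classifier and random input are placeholders; the work linked above goes far beyond this.

```python
# Input-gradient saliency: which input pixels most affect a chosen class score?
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in classifier
image = torch.rand(1, 1, 28, 28, requires_grad=True)          # fake input "image"

score = model(image)[0, 7]     # score for one (arbitrary) class
score.backward()               # d(score)/d(pixel) for every pixel

saliency = image.grad.abs().squeeze()   # larger value = pixel mattered more to this class
print("most influential pixel (row, col):", divmod(saliency.argmax().item(), 28))
```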
I'm currently pursuing a major in CS with a focus in AI at Oregon State University. Are there any coding languages I should learn to become successful in the field? More important than learning any coding language is learning the fundamentals of logic and problem-solving. The most popular coding languages are constantly changing, so you'll likely learn dozens of them in your career. But right now, Python is one of the most popular for deep learning, so that's a good place to start. —Karen Hao
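In that spirit, a first Python exercise might look like the sketch below: a tiny two-layer network learning XOR with plain numpy, which exercises exactly those fundamentals (shapes, loops, and a hand-written backward pass) before reaching for a framework. This is an illustrative suggestion, not part of the original answer.

```python
# A tiny two-layer network learning XOR with numpy only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR truth table

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)      # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)      # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                       # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)            # backprop of the squared error, by hand
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))                    # should move toward [0, 1, 1, 0]
```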
How do you feel about that paper using machine learning to analyse "trustworthiness" in portraits that did the rounds on twitter last week? Do you have a link so we know which paper you're talking about? [Will Douglas Heaven]
Do you think robots will enslave us one day and turn us into pets by breeding us to be dumb and happy? Most days I look at my dog and I think I'd love to be a pet. [Will Douglas Heaven]
I was going to write something about how Keanu Reeves will save us all, but Will brings up a good point. Life would be pretty great if you got treats all the time and had your belly rubbed. My dogs kind of have it made. - Benji
the below is a reply to the above
You didn't answer the question either, but you did say we would need saving, so is that a yes to my question? Will's answer to u/Porthos1984 is definitely relevant to your question too. Let us know what you think!
>Nope. I think the advances in AI in the last decade have been staggering. We've seen AI do things even insiders didn't expect, from beating human champions at Go to highly accurate image recognition to astonishingly good language mimics like GPT-3. But none of these examples have anything like intelligence or an understanding of the world. If you take the singularity to mean the point at which AI becomes smart enough to make itself smarter, leading to an exponential intelligence explosion, then I don't think we are any closer than we've ever been. For me, personally, the singularity is science fiction. There are people who would strongly disagree but then this kind of speculation is a matter of faith! [Will Douglas Heaven]
Can an AI develop bias or personality? Thanks for the inquiry! You're asking basically two HUGE questions, and I will answer both incompletely! But here goes -
Bias - absolutely. Some people actually argue there is no such thing as an unbiased AI. Bias touches AI at almost every level- developers, designers, and researchers are biased, data is biased, data labelling can be biased, laws are often biased and the way people use the technology will almost certainly run up against bias. I'd also challenge you to reframe the question as I think AI doesn't just risk developing bias over time, but it risks being biased from the very start. There are too many examples of AI contributing to racism to name - here is an issue of Karen Hao's newsletter The Algorithm where she lists many of the leading researchers in this space. I'd definitely encourage you to look into their work.
Personality - I'd say this depends on how you define personality. We're in the middle of a 2-part series in the show where we cover emotion AI, in which an AI tries to recognize and interpret emotions and mirror them back in response. One of my favorite stories from the show is when we talk to Scott, who has made a sort of friend with a bot he's named Nina, using Replika's AI. Check it out here (it's the first 5 min or so). Would you want to be friends with an AI? "Personality" also could mean an AI's voice or the content of its responses, which has been trained quite specifically in the instances we've been looking into (especially for task-focused AIs like autonomous cars and voice assistants)! - Tate Ryan-Mosley
submitted by 500scnds to tabled

Complicated Grief Disorder? [27F, 8 year psychosis triggered by grief? How a "psychic" is born]

27, female, Caucasian, 162 cm, 65.9 kg.
Diagnoses: Bipolar (or schizoaffective), C-PTSD, ADHD, GAD, Anorexia (recovering), BPD (dubious; no extreme fear of abandonment, more RSD), and it would seem possibly Complicated Grief Disorder.
Medications: Zyprexa 15 mg (up to 35 mg) daily; Lamictal 225 mg daily; Ritalin up to 120 mg daily on days required.
TLDR: Can you have CG if these extreme feelings only truly come up/break through generally only once a year (on my father's birthday), I feel nothing on the DOD (date of death) anniversary, and they seem to get worse as the years go on (11 years on 19/10/20)? Is a fight/flight response or panic attacks typical/possible with CG? (I have C-PTSD and definitely dissociate and put things "in the vault". His death hitting me is much like a trigger but on steroids.) It is nothing like my processes with any other death of someone close to me, not even in the ballpark. Thinking about it right now, I can't access that grief or any feelings, it is just knowledge he is dead. Can a reaction like this cause a sudden psychosis (the day he died, roughly 4 hours after) and last about 8 years? (There is one mention online I found of a person hearing or seeing the deceased.) If someone could be psychotic that long, is that only possible if they are already prone to psychosis? The year prior I saw two things, one thing only once and another that followed and haunted me for a year prior to the death in question. I was aware they were not real but lost that insight after the death occurred. Am I likely to be at risk of this psychosis or any others recurring at some point in my lifetime? Is a psychosis triggered by a death common or heard of with CG or any other condition, especially as it was much more than just seeing or hearing my father? I have heard I have bipolar II, from public system registrars who claim my experiences were strictly "religious beliefs" rather than psychotic (cue eye-roll)... dude, I thought I was an incarnated angel and immortal at one point, here to save humanity and raise the vibration into the new age, hallucinated every day, worked as a medium, "spoke" with dead people, angels, spirit guides and even a race of aliens who were energetic rather than physical, thought I could cure cancer or anything really, got an impulsive tattoo that I thought would give me enhanced powers, and it was a piece of cake for my abuser to convince me he could read my mind and locate me with his mind... apparently not being hospitalized meant I can't be type I or psychotic. I don't think it is possible to get any higher (drugs have got nothing on that euphoria) or more bat-shit insane. I was surrounded by dippy people (psychic industry) and an abuser who used all of this to his advantage, and I dropped off the map and didn't see my family for years - who's calling 000? I was only a danger to myself when I got to the immortal phase, and a loner once I left where I worked as a psychic; I never felt the urge to hurt anyone in any way. I lost 20 jobs in 2-3 years for various reasons, not even sure of most of them. I was a bull in a china shop. But another psychiatrist said schizoaffective (based on the first session, as an overview of my history was given; he is also one of the top 5 rated psychiatrists in my state and, in my experience, very intelligent, perceptive and accurate). Would schizoaffective be a fair diagnosis as this psychosis lasted 8 years (manic, depressed, mixed, euthymic), even if triggered by a sudden death? The only psyche I saw during those 8 years was the Victims of Crime psyche, who said I was very manic and ADHD. She did not have my notes anymore but remembered me vividly by name alone 6 years later when my treatment team were assessing me and gathering all the info; she told them I was "quite mad" and "becoming too eccentric to the point of dysfunction".
I would discuss this with my psychologist, but that isn't really an option after the last session, and I'm in the process of being referred to a new psychiatrist as my case management in the public system ended last Thursday. I will totally bring this up with the new psyche; these are just burning questions about this now, and understanding is my coping mechanism. Sorry, my TLDR is still long :-\
Last session, my clinical psychologist said "well you've got complicated grief!", which was thrown very aggressively. It was the last thing he said before changing the aggressive and accusational tone of the last hour which was relentless and then saying, as if nothing happened "I haven't got my diary in front of me, can I email you with a time?". I have looked it up (CG) but would like some professional views and explanations of the disorder and how it may or may not apply to my situation, in particular psychosis and being totally fine most of the year. That psyche has said many times he doesn't believe in labels and yet the first time he throws one at me in four years, it was to shut me up mid sentence when I hadn't been talking about my dad for more than a few sentences. This is coming from a guy who watched me suffer for years with undiagnosed bipolar, seriously consider and plan suicide as every year the episodes got worse and when I told him I was diagnosed he shrugged it off and said "yeah I knew", I followed up with for how long and he said after the first few months seeing me - so nearly two years. I was in my last year of that psychosis, albeit not the height of it when I started seeing him. He said nothing nor suggested a referral to a psychiatrist. I have looked back on emails to him and bipolar was glaringly obvious. I am not even sure its normal or professional to exchange emails between sessions about things that should be discussed in session which he encouraged whole heartedly to do more of the entire time (I tried to not over-use the privilege). Never had the option or blessing to communicate like that via email with any other mental health professional or any professional if I think about it. He also hugged me once which is probably not good practice and kinda weird. He is also the ONLY person I feel extremely uncomfortable making eye contact with for longer than a split second... I digress.
Rundown of the session: (Skip to the next section if bored, I'm just having a bitch about the session with the psychologist - probably unnecessary to even go into, alas I'm still a bit manic and talk WAY too much. Hearing about this session will probably make you feel like a great professional though haha and to think I paid this dude.)
I preface this with I haven't been psychotic since January of 2017 and even psychotic this is not something in line with what I could/would misconstrue. Hearing/seeing angels sure, imagining someone berating me for an hour - no. From start to finish - wtf, even his hello was I don't even know how to describe it? Like he was so over today and did not want to bother with anyone at all/had no time for this shit. I think he was looking for a fight, well a stress toy. I did not argue back - I cried and tried to explain getting no more than half a sentence in. I have known him for four years, I liked and trusted him implicitly. Previously, every session we've had has been a COMPLETELY different tone to the last one. He is usually calm and warm/kind. He started with accusing a very sober me of being on drugs when I had only had my usual zyprexa and lamictal 14 hours prior, a sip of redbull and I hadn't even finished rolling my smoke (phone appointment). I got drunk like three days before hand and that's it. Haven't used drugs in a very long time. I had just woken up and drugs have never, ever been something he was concerned I was doing and he has never, ever taken the tone he had for an entire hour with me before. Even if his points were valid, his tone was unprofessional and unacceptable in any context. He demanded I make another appointment as he "could not speak to me in this state". After my confused and bewildered pleas that I was as sober as can be he changed tact and demanded to know what I wanted to work on in this session (again, very aggressively). It is not something he had really asked before, in the early sessions he was trying to get a feel of what I wanted out of my sessions and asked then, but nothing like he demanded that last session. Typically he says nothing the entire hour, only the occasional prompt to get me to keep talking. No real therapy. It's a bit sad and speaks volumes when a conversation with a friend (who are not anyone's therapist by a long shot) offers more in terms of exploring the subject, challenging my views and offering alternative perspectives and/or advice. I was considering seeing someone else but after that last session that choice has been made pretty easy. After demanding what I wanted to do with the session I said there's been a lot happening in my life (hectic, legit traumatic, stressful, unavoidable and all happening at once) and huge (!!!) things that reframe my life and what I thought I knew about my family completely had just come to light (within days) and I wanted to talk about that and he shut that down and said we could not do that. Weird, but ok. He then began to attack and belittle my choices and character because of my choice to take medication, he said I was not taking responsibility for my life and health by using them and using my labels to understand subtle patterns in my life was invalid. The last two years since diagnosis and meds have been the best and most productive years of my life... Years ago he asked me why I was coming to see him as I was saying all the right things in terms of mindfullness and insight and I replied with I still get depressed anyway. I was undiagnosed and unmedicated for about 15 years and tried everything I could think of to evade the crippling depression that always came back around again. 
I read lots of self-help, ate well, exercised, ensured my bloods/physical health was good, ACT, DBT, CBT, feng shui, energetic healing, massage, yoga, acupuncture, writing, journalling, avoiding toxicity, self-medicating, sobriety... the list goes on. My life and circumstances changed a lot in 15 years, it didn't matter whether things were going well or badly. The episodes happened anyway. I had the skills (only reason I'm still alive) just not the pills. When I decided enough was enough and one episode worse I knew I would take my life I took myself to hospital to get an assessment and ask to be medicated. I had anorexia for most of those 15 years and was diagnosed a year prior - depressed I had no appetite or care to eat/take care of myself and manic I forgot I needed food and sleep, felt no hunger and was just so busy and absorbed in what I was doing. My body was in really bad shape and I knew I needed the mood sorted to sort out the physical. I worked hard to weight restore and it would go up and down. I got the zyprexa/lamictal and bam, no extremes in moods and started gaining weight. Zero suicidal thoughts. Pills AND skills got me there. This psychologist had previously approved and commented I was making great progress in the last two years but that day everything I did and was, was completely wrong, weak and flawed. Someone who isn't a doctor telling me to go off my meds... rightio. Can refer me onto a psychiatrist to review if he thought they were working to my detriment but that isn't what he did. In contrast to barely speaking in previous sessions, this last one he rampaged on the entire hour, I could only get half a sentence out at a time. He would not shut up lol. I wrote him a calm and considered email a few days later after processing it all and getting some outside opinions. I explained the benefits of medication and how labels can pin-point appropriate treatment and foster insight and harm minimization. I explained bipolar is thought to be a neurochemical disorder which research has found to respond best to the right cocktail of meds. I explained if someone were to use a label or medication to not change that is not the labels fault but rather a consequence of an element in that person's personality. How I haven't changed genuinely eludes me. Even people not privy to my disclosure of any struggles who only know me from uni or something have even noticed and commented that I seem really well and calm. Literally everyone else thinks I'm doing better. My performance at uni has improved and that can be measured in my grade point averages. I won't bore you with all the details but I explained it well, had it proof read by a few people, got advice from my ADHD psychiatrist and case manager and gave him the perfect out to chalk up his conduct to a bad day and that we are all human ultimately, if he didn't want a paper trail he has my number. No apology, only mind games. In response to my email he tried to gaslight me about the most memorable part of the conversation I remember almost word for word. He spun it in a way that would look good legally if anyone else were to read it, claimed to say things he didn't and threw in a jab to try and elicit a reaction out of me. He knows well I know what gaslighting is and would spot it in an instant but knew if I were to call him on it or react to his jab he would have easily been able to make me out as a triggered mentally ill patient who is confused, deluded, paranoid or hostile and thus nullifying my first email and side of the story. 
I merely responded saying we clearly have different memories of the event and asked him to clarify his beliefs about my capabilities as a person, mum, daughter, friend and partner (about the jab). I said his opinions always seemed to be positive in regard to me, which would not cause upset, and asked whether something had changed there. He claimed in that email that the jab was the theme of the session which caused my upset, but it was not something that had ever been talked about in 4 years and contradicted all his previous statements and observations. I think it is pretty astounding and cruel that a psychologist specializing in trauma would use something like gaslighting on a patient who was previously abused by a narcissistic pedophile with anti-social personality disorder - who tried to grind them down into nothing so he could remake them into the perfect victim, threatened to kill them many times, broke bones, held knives and a gun to them countless times and finally tried to kill them and then stalked them for a year, just to name a few things. I'm curious if this was a ploy to cover his ass about a bad day (why the sudden departure from his previous opinions though?) or whether this is much like my ex slapping me in the face and seeing if I came back. A period of reconciliation followed by an escalation of behavior/violence, and the cycle continues. What does this dude want? If you guys know of any stories of psyches abusing clients that seem reminiscent, I would be curious to know. I have no idea what to make of this aside from that it was not about me, he contradicts himself, and now he's playing some kind of game; to what end, I do not know yet.
I feel relieved I was the one who received that phone call, because someone in a bad way or suicidal could have taken that on and done something destructive, self-harmed or committed suicide after that session - that is not just my thought on the matter, either. Other professionals have said that his behavior violated so many codes of practice, the mental health rights charter, ethics, basic human decency, the list goes on - they have also advised that I contact the relevant bodies to complain. I will admit I have trust issues, but they arise at the beginning of a relationship (not four years in, after trust has been developed), only with intimate partners, and I have only ever overthought signs and things people have said. I trust professionals more than most people and have never had a problem with people calling me on my bullshit. Never in my 8-year psychosis was I paranoid - in fact I was too trusting. The only thing remotely along the lines of paranoia was that hallucination dude that followed me for a year - even when I couldn't see him I had an intense feeling of being watched and hated, like a predator stalking its prey. I have never imagined anything like an hour of being berated in an undeniably aggressive tone which brought me to tears for the entire session. I can barely cry for a minute before numbing out (except on my dad's birthday and with certain large triggers) by myself, and I struggle to do it at all in the presence of another person. Crying for an hour is remarkable for me. Even on my dad's birthday it is on and off, not a solid hour. Anyway! Yet again, I digress. I gave that context to outline why I am unsure whether the spirit in which that label was given was sincere, along with the fact that it was not explained (I had never heard of it before), nor was it explained why he believed it. Dude doesn't believe in labels but throws one at me in that tone, cutting me off mid-sentence to shut me up and invalidating my thoughts, feelings and right to express them at all. Thanks buddy, great session.
**Backstory to the death in question:** My father was my idol; I adored him. He died suddenly when I was 16, while we were going through the typical issues parents have with rebellious teenagers. His death seemed to be stress related, and although his job had him stressed out of his mind, living on three hours of sleep a night with retrenchments common, I still feel responsible because I was a shithead when I was a teenager. My delightful mother also made a point of saying that, nearly 10 years after the fact, out of nowhere. Surely her shopping addiction putting them in debt and her refusing to work did wonders for him too. She had him in debt before I was even born. She magically got pregnant with me on the pill while my dad was getting ready to leave her. My aunt told me this as an adult, and told me he stayed with her for me, which makes me happy but also terribly guilty. He would probably be alive if I did not exist and he had left my mum. The year before he died I got to skip year 10 for submitting some creative writing (hyperfocus) for English, but between wagging for years and having no foundations (maths methods was... not easy to catch up on), on top of mental health issues AND the pressure/expectations from my parents, teachers and classmates to do well, I started problem drinking, wagged some more, lost my shit 6 months in and left. My dad was disappointed and I was disappointed in myself for disappointing him... again. Every year my teachers told my parents I was exceptionally gifted or had great potential but needed to apply myself and pay attention, so skipping a year gave him hope that I finally cared enough to try. Whether I tried or not, it made little difference (hello, undiagnosed ADHD).
The day he died I didn't cry. My world was falling apart, but the best I could do was stare into space when I was finally alone that night. I made sure my mum was nice and dosed up with valium, in bed and asleep. I couldn't bear to stay there that night, but I worried she was going to kill herself, so I took all her prescription meds with me, leaving only a week's supply. At the funeral I didn't let myself cry; I looked after my mum, brother and nana. I spoke on behalf of quite a few people. The chapel was filled to the brim and most people I did not know. Most people were standing. Apparently friends and colleagues flew in from around the world to be there. I was a pallbearer and I kept my eyes on the ground; I could feel their pity and didn't know where else to look. After the service so many strangers came to shake my hand and tell me how wonderful my dad was. I was emotionally checked out. The day after his death I woke up to a call asking about organ donation, which is a crappy way to wake up. I got up and I was angry the sun could just rise again without him. It was such a beautiful, sunny day; I felt the sun was mocking me. In the first week after his death I was hallucinating a lot. At first I chalked it up to a lack of acceptance and wishful thinking, but the hallucinations persisted and grew more intense, and I started to feel as though my dad was desperately trying to communicate that it was all real. I think the first time I cried was when I finally went "WOW! It's all real" - absolute tears of joy, comfort in knowing what happens after death. I miss that feeling of comfort and meaning in life; now I'm adrift and agnostic. When he died, my covert narcissist mother no longer had anyone to keep her in line and she also stopped being a parent. This also gave the green light to the aforementioned pedophile to move from grooming and psychologically/emotionally abusing me to physically and sexually abusing me (all types of abuse in the end), knowing I was completely isolated socially (I had just dropped out of school due to a breakdown) and that my protector, my father, was no longer of consequence.
The day my father died, an 8-year psychosis began with strong spiritual themes where I believed I could see, hear and communicate with spirits and that witchcraft/manifestation were totally real. I ended up spending two of those years working as a medium/clair-everything/healer; I went off the deep end completely. ("Psychics" aren't conning, in my experience working with a bunch of them - they wholeheartedly believe in what they do.) I think if I wasn't surrounded by people just as dippy I would have been hospitalized, and my abuser used this psychosis to his own ends too; he wasn't going to get me help. I had a few visual hallucinations in the year before he died which were very frightening, and I'm still afraid of the dark to this day because of them. I knew those weren't real, but from the day my dad died I started to lose insight and had lost it completely within a week. The hateful scary man/thing/monster that stalked me for a year disappeared within a month; I believed my dad took care of him. Fortunately, over the following 8 years, where I hallucinated throughout the day, every day, the hallucinations were no longer frightening - often comforting, in fact. I went from an atheist to suddenly believing I knew the nature of life and death - my father wasn't truly gone and was around every day. It offered me an opportunity to make peace over our petty differences during my rebellious teenage years. Despite my grandparents, uncle and teacher (who had bipolar herself) telling my parents many, many times that they thought I was bipolar, my parents got angry with them for suggesting it and told me there was nothing wrong with me, minimized my pain as "drama", told me I couldn't be depressed because I wasn't always depressed and "could lift my arms to wash my hair" or "had nothing to complain about". Alongside bipolar, I had undiagnosed ADHD and several other disorders going on. I wasn't diagnosed with anything except C-PTSD until my mid to late 20s, when I was finally put on medication, which changed my life completely and finally offered me quality of life. The psychiatrist they sent me to as a teen... don't even get me started. His online reviews from other people suggest I was not the only one who found him belittling and condescending. He smirked at me like it was some laughable, flimsy lie when I told him about those hallucinations which scared the life out of me. I saw him for three years; he put me on seroquel after the second session, which I took faithfully (but it didn't stop the hallucinations that started a year later - maybe when they started he should have raised it from 250?!). He did not diagnose me with anything (not even the physically obvious anorexia). He told my parents there was nothing wrong with me (why the seroquel then?). This did not help with the tensions at home and made my parents furious with me - apparently depression was just an excuse to be a screw-up. I saw that psychiatrist once more as an adult out of curiosity and asked him what he thought back then, and he said "oh, a mood or personality disorder" - clearly not "nothing". Again, sorry for too much detail and tangents! Back on track! After the 8-year psychosis ended in late 2016/early 2017, I crashed into a deep depression; I grieved the loss of the psychosis's beauty, comfort and meaning, and then the terrible pain on his birthday came in at full force, where before it was just a day.
The rest of the year I feel fine; I intellectually know that my father is gone, and most days I don't even think about it. But since the psychosis ended, his birthday brings up more than I ever expect to be there. Even if I haven't thought about it or gotten emotional yet, my body knows. One year I had a panic attack the day before with no clear thought behind it (I don't really get panic attacks). I woke up strangely tearful for no reason that day and it lingered, no thoughts really, and then came a panic attack I could not explain - I'm not even sure it was 100% about my dad, but the timing makes sense and seems to be part of a pattern; that was the first birthday after the psychosis ended. This year was absolutely brutal - it hurt more than any other year or even the immediate aftermath of his death, and it sent me manic AF within days. Every year it sneaks up on me. It feels like an absolute shock of realization that he is actually gone, and the gravity of that, despite always intellectually knowing he's gone. A recording on an answering machine he forgot to turn off in 1997 was found exactly a week after his birthday this year, where he was speaking to me as a young child - he said all the most perfect things you could hope for in the only recording of him. Just a week before, I had spoken into the silence - whether he can hear me or not, it helps to verbalize - and said I missed the sound of his voice the most, and presto, I heard it again; he sounded so young. Before I could bring myself to hit play, I was in tears on and off for hours (the rest of the year I can't really cry beyond tears coming up and then numbing out), and I started to hyperventilate. It surprised me how something so good could cause such a profound reaction. Beforehand I would have thought it would be simple - yay, the recording was found, listen to it - but I completely freaked out. It was completely irrational. After 12 or so hours of crying spells and hyperventilating on and off, I decided enough was enough, took a valium and went to bed. The next day it was fine; my brain filed it away and I was logical again. I could not fully access that emotion, I only knew of it.
My point is that every year the pain doesn't go away and perhaps even increases. I have lost all my grandparents and even a friend to suicide, and my processes were very different: the first month was teary, the next year kinda ouch, but then it settled down. I don't know if I'll ever truly get over the loss of my father. The more he misses out on, in terms of meeting his grandson or enjoying retirement (he died at 49, not a smoker or drinker, exercised and ate well), the greater the hole in my heart and life. Looking up CG, it speaks of denial (check) but also of wishing to be with them and suicidal thoughts. I had the psychosis which allowed me to be with him in a way, but I have never thought to end my life to join him. I'm not sure if that's a strict criterion. Literally the rest of the year it doesn't bother me, but the birthday hits me like a ton of bricks. I don't count down the days or dread it, and I feel fine the day before, but bam, on the day I cannot describe the pain or hysterics. I struggle to cry at the best of times, but on that day I feel like I can't stop or hold it in. It's the only day I lie on the grass, look up at the stars (he was an astronomer; there are special memories of him showing me stuff through the telescope) and just talk into the silence - anything on my mind, what I miss, wanting to hug him, wondering if he can hear me or if I'm just talking into the night like a lunatic, and if I'll ever see him again. Nothing has affected me this much. Not even being abused severely for 4 years. I'd happily take another 4 years of abuse if it meant I could see him just one more time - that sounds silly and dramatic, but that is really how I feel about it. The only thing I could think of that could compare in pain would be if anything happened to my son, god forbid, touch wood. Ok, now that I write this all down it does sound disordered, or at least weird; it's been 11 years and time brings me no peace. Apparently being female, having a mood disorder, high stress, pessimism, lack of social supports etc. makes me more likely to have it... I don't know. I would talk to my psychologist about this, but apparently that has gone out the window and I feel guarded and wouldn't want to share things close to my heart with him or be vulnerable. I feel safer asking strangers on the internet who may or may not be psychiatrists, haha, oh dear.
Thank you for taking the time to read to the end of my epic novella - I hope my tangents were relevant and not too extraneous. Any thoughts about absolutely anything I have said would be greatly appreciated :-)
submitted by the_dark_aspect to AskPsychiatry
