Edan Meyer
  • 99 videos
  • 1,501,808 views
The AI/ML Master's Experience
I did a 2-year Master's degree in computer science at the University of Alberta. My studies focused on research in machine learning (ML) and reinforcement learning (RL). In this video I talk about what you do for an MSc (coursework and research), how much it costs, and how much funding you can get. I mainly focus on Master's degrees in Canada because that is what I did.
Outline
0:00 - Intro
0:31 - Overview
0:58 - My experience
3:55 - Different types of Master's
Social Media
RUclips - ruclips.net/user/EdanMeyer
Twitter - ejmejm1
Sources:
My Master's research paper: arxiv.org/abs/2312.01203
My thesis: era.library.ualberta.ca/items/d9bc72bd-cb8c-4ca9-a978-e97e8e16abf0...
2,771 views

Videos

This is What Limits Current LLMs
92K views · 1 month ago
Recent advances in large language models (LLMs) have centered around more data, larger models, and larger context lengths. The ability of LLMs to learn in-context (i.e. in-context learning) makes longer context lengths extremely valuable. However, there are some problems with relying on just in-context learning to learn during inference. Social Media RUclips - ruclips.net/user/EdanMeyer Twitter...
I Talked with Rich Sutton
14K views · 5 months ago
After a year of anticipation, I finally got to talk with Rich Sutton for the channel! Rich wrote the book on reinforcement learning, has contributed vastly to the literature, and has a unique perspective on AI. We talk about topics like reinforcement learning, the OpenMind Research Institute (OMRI), Keen Technologies, how to do good research, the problem of scale, superintelligence, and so much...
GPT-4 Outperforms RL by Studying and Reasoning... 🤔
26K views · 10 months ago
Let's take a look at "SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning". The authors use GPT-4 as an agent that can play Crafter, getting its information from a specification paper. They claim it's better than reinforcement learning, but is that really the case? Should we be using LLMs instead of RL? Outline 0:00 - Intro 1:02 - Crafter 2:39 - LLM Learning 7:40 - ...
Training RL From YouTube Videos
6K views · 1 year ago
Reinforcement learning is great, but environment interaction can be expensive. This paper proposes an RL algorithm based on successor features that takes advantage of passive data to learn about the world without acting itself. Outline 0:00 - Intro 1:41 - Offline-RL 2:50 - Successor Features 13:34 - Algorithm 21:11 - Results 27:50 - Criticisms & Thoughts Social Media RUclips - ruclips.net/user/...
Sparks of AGI: What to Know
32K views · 1 year ago
Sparks of AGI, a new paper from Microsoft, takes us through over 100 pages of GPT-4 experiments to argue that it shows early signs of artificial general intelligence. The results are impressive, but there are also still clear areas for improvement and indications of the limitations of LLMs like GPT-4. Outline 0:00 - Intro 1:01 - Experiment Speed Run 1 3:51 - ClearML 4:54 - GPT-4 Flaws & Math 8:3...
This Embodied LLM is...
7K views · 1 year ago
PaLM-E is a new LLM from Google that is both embodied and multimodal. Excitingly, it shows positive transfer across different robotics tasks. While the prospects are exciting, it is unclear what other conclusions can be drawn from the work. Outline 0:00 - Intro 1:34 - How It Works 6:47 - Robotics Tasks 11:25 - Results 26:50 - Takeaways Social Media RUclips - ruclips.net/user/EdanMeyer Twitter - ...
GPT-4: What, Why, How?
25K views · 1 year ago
OpenAI's GPT-4 makes significant improvements over GPT-3 and 3.5, but does it live up to the hype? It will soon support images, and is already multi-lingual, supporting context lengths up to ~32,000 tokens. It works great but we know next to nothing about it. I also talk about how to get access to GPT-4, along with a demo using ChatGPT. GTC Signup: nvda.ws/408jS7w Raffle Signup: forms.gle/NyxKR...
ChatGPT Is a Dead End
15K views · 1 year ago
ChatGPT can only go so far; let's talk about the limitations of LLMs and topics like grounding and exploration that are important research directions for future work. GTC Signup: nvda.ws/408jS7w Raffle Signup: forms.gle/NyxKRw2tF6Un9CaT6 Sessions I'm attending: - bit.ly/nvidia-science - bit.ly/nvidia-real-world-rl Outline 0:00 - Intro 0:48 - GPU Giveaway 1:06 - Human-Level AI 2:38 - My Thoughts...
RL Foundation Models Are Coming!
21K views · 1 year ago
AdA is a new algorithm out from DeepMind that combines interesting ideas like curriculum learning, meta reinforcement learning (via RL^2), model-based reinforcement learning, attention, and memory models to develop a prototype for a reinforcement learning foundation model. The results look promising, and the future of this area looks bright! Outline 0:00 - Intro 1:07 - Example Video 2:40 - Clea...
This Algorithm Could Make a GPT-4 Toaster Possible
111K views · 1 year ago
The Forward-Forward algorithm from Geoffrey Hinton is a backpropagation alternative inspired by learning in the cortex. It tackles several issues with backprop, which would allow it to run much more efficiently. Hopefully research like this continues to pave the way toward fully hardware-integrated AI chips in the future. Outline 0:00 - Intro 1:13 - ClearML 2:17 - Motivation 5:40 - Forward-Forwa...
Model Based RL Finally Works!
31K views · 1 year ago
Dreamer v3 is a model based reinforcement learning (MBRL) algorithm that performs well over a wide variety of environments, from 2D to 3D, simple to complex. It can get diamonds in a simplified version of Minecraft and work well out of the box without tuning. Outline 0:00 - Intro 1:32 - World Model 13:22 - Actor Critic 18:32 - Results 22:52 - Minecraft Results 25:16 - Thoughts & Conclusion Soci...
Using ChatGPT to Write My PhD Apps
9K views · 1 year ago
Computer science / ML PhD apps are boring, so let's use ChatGPT to write one for us! It works kinda well I guess. Not perfect, but pretty impressive I'd say! I got some better results with ChatGPT in the past, so it looks like the quality of the outputs can have a bit of a range. ClearML: bit.ly/3GtCsj5 Outline 0:00 - Intro 1:27 - ChatGPT Example 2:07 - ClearML 3:13 - Writing my SoP 8:46 - Givi...
The Best of NeurIPS 2022
15K views · 1 year ago
Let's talk about all the NeurIPS 2022 outstanding paper awards. There was a good mix of papers, but especially a lot of work in diffusion models and optimization, with both empirical work and theory. You can find links to all of these papers at: blog.neurips.cc/2022/11/21/announcing-the-neurips-2022-awards/ ClearML: bit.ly/3GtCsj5 Outline 0:00 - Intro 1:03 - ClearML 2:08 - OOD Detection 4:21 - ...
Agent Learns to do Reinforcement Learning
10K views · 1 year ago
What's New In Machine Learning?
36K views · 1 year ago
12 Steps to AGI
21K views · 1 year ago
Stable Diffusion - What, Why, How?
228K views · 1 year ago
Google's New Model Learns College-Level Math
6K views · 1 year ago
This AI Learns from YouTube!
10K views · 2 years ago
AGI is NOT coming soon
21K views · 2 years ago
Is Gato Really the Future of AI?
156K views · 2 years ago
DALL-E 2 is… meh
50K views · 2 years ago
Chinchilla Explained: Compute-Optimal Massive Language Models
19K views · 2 years ago
Learning Forever, Backprop Is Insufficient
18K views · 2 years ago
Is AI Research On the Wrong Path?
4.2K views · 2 years ago
Doing ML Research as a Graduate Student
4.1K views · 2 years ago
Learning Fast with No Goals - VISR Explained
4.8K views · 2 years ago
AlphaCode Explained: AI Code Generation
61K views · 2 years ago
Is Machine Learning Research Oversaturated?
7K views · 2 years ago

Comments

  • @nikidino8 · 11 hours ago

    I'm at the challenge part and haven't watched further: Continual learning provides flexibility to an otherwise static frame which means it can go outside of its initial scope. In theory this allows an AI to improve on the fly based on input and if combined with a large context length it allows for pretty scary things. Now the biggest problem with that is probably the learning periods which would be tremendously expensive and have to be streamlined in a very efficient and clever method.

  • @FATStereo · 14 hours ago

    Surely if you are able to solve difficult problems, you are forced to develop rich internal representations? Humans are barely able to articulate how they come up with particularly creative solutions; it's all buried in internal representations.

  • @zoeherriot · 6 days ago

    Same for game development - you spend a lot of time coming up with novel solutions for problems - you won't have any training data for most of it.

  • @veeratzxmatey6146 · 6 days ago

    Why don't you try a spiking neural network to create patterns and use it to convert the data in real time?

  • @zachb1706 · 8 days ago

    IDK but I’ve been testing Chat-GPT 4o and its answers are pretty amazing.

  • @TheEarlVix · 12 days ago

    Liked, subscribed and followed you on X too, Edan. Thanks for your insights :-)

  • @dosmastrify · 12 days ago

    Training models as you guys are is how humans learn generally

  • @maksym.koshovyi · 14 days ago

    If there's no documentation for the library, what will you train your model on? If there's not enough information for context, how is there enough information for learning?

  • @iAPX432 · 15 days ago

    I would think it's probably more interesting to start with a micro LLM (a few gigabytes of weights), to not have to pull the weight (pun intended) of too much pre-learning. I would also avoid adding multiple iterations of a documentation, adding only the differences after the first learning. There's a lot to experiment with, and eventually learn.

  • @flygonfiasco9751 · 16 days ago

    That’s kinda the reason why the default education style in the US and probably much of the world is a liberal education. Being well-rounded is important to problem-solving

  • @gregorymorse8423 · 17 days ago

    The problem is research requires logic hierarchies far beyond the capabilities of what AI is doing now with a few attention layers. Continual learning is not solving the problem either; it's just creating more useful models for non-research purposes. Sure, some niche advances could come, but none of this is real AI or fundamental progress. It is just evolutionary atop the current LLM foundation.

  • @juanpablo_san · 18 days ago

    I'm officially a fan of Rich! hahaha a lot of good insights, thanks for sharing!

  • @markklunis403 · 18 days ago

    Utterly without redeeming social value. Bury it.

  • @dr.akshayprakash5735 · 18 days ago

    Has anyone built an AI chatbot for a client/company? If so, I wanted to know: would a tool that monitors your AI chatbot for incorrect or dangerous responses, alerts the developer, and logs it when it happens be useful? My friends and I built such an AI monitoring tool for a hackathon and wanted to know if it would be helpful for others.

  • @gurukhan1344 · 18 days ago

    24:11 I noticed that hidden layers combine top-down inputs and bottom-up inputs, but why are there blue arrows from hidden layers to top layers?

  • @anuragangara2619 · 19 days ago

    I think this was the limiting factor for me trying to build an AI chess coach. I could give it access to chess analysis APIs in-context and explain how to use them, but I couldn’t get it to actually reason about the positions and break them down for a human. Chess was the poetry in your example. Surprised because I’m sure there’s plenty of chess content in the training data for large models, but I guess chess is orders of magnitude more complicated than most topics, so it needs much more targeted and focused coaching data 🤷🏽‍♂️

  • @herp_derpingson · 19 days ago

    We don't do continual learning in production not because it is not technically possible, but because it is not economically possible. As always, GPUs are the answer. You can't run from the bitter truth.

  • @MarkBruns_HarmFarm · 19 days ago

    The credential is nice ... it MIGHT open some doors ... but it's important to establish a lifelong habit of being able to do serious study without institutional handholding. Yes, of course, self-starting is not for everyone ... not even for most ... but consider doing your own AUTODIDACTIC Master of Science deep dive [on an accelerated, self-paced schedule]. A big part of autodidactic study involves networking/outreach, which means it's up to you to build your own network [rather than having the University furnish connections for you] ... in order to come up with something like a more dedicated, but still ad hoc, advisory committee with an appointed thesis advisor [a co-author] to be on record as having reviewed / helped revise your work [in early draft stage, before polishing and submitting it to Arxiv] ... but the outcome is similar; it sort of requires some artifact of your study, like a paper with code/data [which you might "publish" on Arxiv or other equivalent channels].

  • @rursus8354 · 21 days ago

    1. 6:07: there's no such thing as _"surpassing human knowledge"_; LLMs use human knowledge as their database. 2. I'm positive towards LLMs, but we must realize that they are inferior to ourselves, cannot achieve our precision of thinking, and are a tool to shortcut *_some_* research that would have taken hours to perform all by ourselves. 3. We must realize that when using LLMs we don't train our own vastly superior intuition, and get a lack-of-training deficiency that we need to compensate for later. 4. We are the testers and the reality check: in order to serve LLMs with new human knowledge, we are morally obliged to publish some of our code open source on the Internet for improved LLMs in the future. 5. When will LLM researchers acknowledge that a better system is one where LLMs interact with a logic resolver?

  • @jorgwei8590 · 21 days ago

    Technical question: How do you create the training signal for real-time learning while interacting with a human? How do you tell which answers are more successful than others, especially in new situations that have no known answers? In other words: having the weights shift over time is one thing, but how do you give it direction so it shifts towards the right capabilities? One way I could come up with is that you train it on user feedback, but I don't think continuous RLHF with thumbs up or thumbs down would work. The users don't know yet what a good solution looks like, and we don't want to rate things all the time. It's just not practical. So the model would need to learn to recognize other patterns in the environment, e.g. in our ongoing behaviour, indicating whether its output moves us towards a solution. But that pushes the question one step further: how does it learn to assess those patterns? Or is the approach something entirely different? One more thing: real-time learning also sounds potentially a bit risky in terms of alignment. The explicit goal is to have it change its behavior in real time. We cannot successfully set boundaries on models that are static. What issues will arise with this? How do you control what it optimizes for in a work environment, interacting with the world and humans in real time?

  • @jorgwei8590 · 21 days ago

    This to me was one of the big differences between us and whatever LLMs are doing. The model is trained and then static. Any "context" given can be viewed as part of the prompt. It's static. By making the model learn from the world in real time, by allowing it to develop in relation to it, it starts to really be in the world. To me, this points to a lot more than just a promising way to better results; it's a philosophical difference.

    • @rursus8354 · 20 days ago

      There's also an inherent problem with ANNs: they require hundreds of iterations to learn, while, say, an earthworm only requires 15 trials to avoid taking a path that gives it an electric shock.

    • @Hohohohoho-vo1pq · 18 days ago

      @rursus8354 You missed the part where animals, including humans, have important information already baked into their brains from birth. It's stored in our DNA. That took hundreds of millions of years of evolution.

  • @surgeonsergio6839 · 21 days ago

    I'm surprised this kind of stuff isn't more popular on YouTube.

  • @Sigmatechnica · 21 days ago

    Sounds like it would be very useful for DevOps workflows, where a bunch of often poorly documented systems need to be joined together, often by some degree of trial and error.

  • @schwartztutoring · 22 days ago

    Why use continual learning? After pausing the video, my guess is to reduce the marginal cost of execution. Instead of context stuffing and consuming that many tokens, embedding the knowledge into the weights reduces output size/cost.

  • @valentinrafael9201 · 22 days ago

    Math literacy, even amongst computer scientists, is needed. Otherwise you think that AI there is real. And just like that lady on the plane said, "that mf aint real".

  • @chrisbo3493 · 22 days ago

    This sums up my current evaluation of the LLM hype: those models are limited (by input data, like quality and field/focus). I do not see the hyped exponential growth, just bigger training provided by more data and computing power. And regarding creative and smart combination solutions for (new) tasks: without really good prompting leading the LLM, nothing happens in that direction.

  • @alexsov · 22 days ago

    I am totally OK with RAG, especially when the context is large. In my last project I had 95% retrieval accuracy. That is more than enough.

  • @Drone256 · 23 days ago

    If you continually fine-tune, then you eventually lose much of what your old weights "knew". How do you prevent overfitting and forgetting?

  • @Zharath · 23 days ago

    That fake laugh at 2:42 is cringe af

  • @tk_kushal · 23 days ago

    The first thing that comes to mind when comparing continual learning to RAG is that the LLM is quite like our brains: we can't effectively retrieve all the relevant information, or solve the problem with comparable accuracy, if we are seeing something for the first time, even if we have all the context there is.

  • @Asian_Noodle · 23 days ago

    I wish this was continued 😭

  • @isunburneasily · 24 days ago

    In the event of pursuing research in the realm of "theoretical physics", or any theory-based research that goes on for decades before a theory is debunked... or until a new theory arises, what effect would that have on a large language model applied to these areas? Having an LLM trained on theories that in the end may prove false would create a "hyper-intelligent" LLM built on incorrect data... would it not? Is this an area of research that allows for the application of LLMs? If so, to what extent? Perhaps this is not an area of research an LLM would even be trained on. I am in no way educated on the subject, but this video did immediately bring those questions to mind. I think continual learning may prove successful only in the event we are continually learning in a direction that leads to a "correct" conclusion.

  • @HB-kl5ik · 24 days ago

    You'd like to bless the model with better priors; there are some tasks, like writing smut, that the model simply can't do. It makes sense to continually train on that, and it becomes more important as vision gets introduced as a modality.

  • @kadourkadouri3505 · 26 days ago

    LLMs do come up with solutions based on mosaic analysis, i.e. combining different relevant assumptions in one analysis. Even for humans it is not straightforward: one research paper may take more than five years to complete in certain cases.

  • @kurtdobson · 26 days ago

    Domain-specific AI trained only on factual data is a great trajectory at this point.

  • @krishnabharadwaj4715 · 27 days ago

    For me, RAG is just cheating. It doesn't do anything substantial. It gives people false hope that it's going to solve the problem. It's always going to be a half-baked solution. BUT if your search is powerful, like hooking it up to Google Search, it can be effective and good enough for most use cases. If you are hoping it will solve your company's dataset, or if you truly want your LLM to know it all / have full context, you are going to be disappointed.

  • @miclo_ssx · 28 days ago

    Current LLMs can only do what has been done before.

  • @elliotevertssonnorrevik9379 · 29 days ago

    I have been thinking about this a lot for a couple months now, would love to chat more with you, just added you on LinkedIn

  • @spiralsun1 · 29 days ago

    I was thinking this would be about the censorship 😂 To me as a highly intelligent creative neurodivergent person it’s absolutely insane that they would spend billions to make the greatest creative tools ever in these generative AI’s and then completely hamstring them for anything truly creative by the horrendously stupid paranoid censorship that keeps growing like a cloud from Mordor…. The first company that makes a version that includes the concept of freedom and inclusivity for creative adult people will become the #1 in the field. And they will richly deserve it. I will support them 100% and I would even volunteer and promote them and fight for them. It’s absolutely vitally important that FREEDOM gets baked into AI at the outset-not fear-based despotism and control.

  • @Johnb-ix7cp · 1 month ago

    How does your startup avoid catastrophic forgetting when doing continuous learning?

  • @domanit927 · 1 month ago

    Well, if you succeed, let me see the research paper once published. Send me a link when it's done.

  • @maksadnahibhoolna-wc2ef · 1 month ago

    What's the whiteboard tool you were using, btw?

  • @earu723 · 1 month ago

    So what’s the solution? Feed it niche documentation and then critique every response as you get them? I.e. provide feedback as you go so you train the model as you yourself learn?

  • @Julian-tf8nj · 1 month ago

    you voiced some of my misgivings about the RAG in-context learning approach 👍

  • @Mr.Andrew. · 1 month ago

    I like the part where it said to email you, but then the email is nowhere in sight. :)

  • @agranero6 · 1 month ago

    I keep saying this for ages: LLMs are just that, Language Models... LARGE ones. Intelligence predates language. Cats are intelligent, dogs also; some animals show abstract reasoning. Abstraction is a condition for language, not a consequence.

    What LLMs lack is a model of the world that is continuously updated by experience. If you ask LLMs situational questions (A is on the left of B, all in the corners of a room, etc.), they fail miserably. AI today is passive: it can't explore the world and test and try it, and it will never be able to do that without an internal model of the world that is constantly updated by experience. This is intelligence: the capacity to learn and generalize experiences with a *model of the world*, a multi-level model of the world with a capacity for abstraction: capable of separating properties of objects from the objects themselves (consider the texture of a leaf independent of the leaf, the color independent of the leaf, and generalize these concepts to other objects).

    It is particularly compelling to study intelligence from the evolutionary point of view: why it happened and how. Rodolfo Llinás (a researcher who studies the cerebellum and created a model of it) believes that it evolved because living things that move need this model to navigate the world and its dangers. This is backed by some beings that have a mobile period of their lives and then become sessile organisms, losing their brains after that. This may seem a bias, as the cerebellum that Llinás brilliantly studied is related to movement, but it makes a lot of sense.

    The claims of sparks of AGI (or whatever term they used) from the current owners of AI are simply preposterous. But the hype is too loud for anyone to hear. Even Geoffrey Hinton said similar things (although he has some self-conflicting declarations). There are some dissonant voices in the choir, a lot in fact, but media don't like headlines like "hype unfounded" or "this won't be done with that".

    Now I will watch your video. I needed to get this off my chest... again.

  • @darthrainbows · 1 month ago

    I have to question whether LLMs, even in their most advanced form, are even plausibly capable of doing what you've declared your goal to be (generating brand new ideas that have never been thought of before). LLMs are prediction engines, and in order to predict, they learn from the past. How can one predict what has never happened, strictly from learning from the past? You might be able to predict that something new will happen, but never what that would be. I suppose that could be interesting in its own right; the combination of factors X, Y, and Z results in a hole in the LLM's prediction. _Humans_ are pretty terrible at this task. Rarely does anyone have a truly new idea. Most of what feels like new is just recycled bits and pieces of old, reassembled into a different whole.

    • @maalikserebryakov · 1 month ago

      you = npc

    • @darthrainbows · 1 month ago

      @@maalikserebryakov What truly insightful, original, inspired thought it must have taken you to construct such a brilliant ad hominem attack! How clever of you to skirt the difficulties of constructing a counterargument! I'm sure your mommy and daddy are very proud of you. Gold star!

  • @souvikbhattacharyya2480 · 1 month ago

    I really liked the interview. Ignore the other comments.

  • @karlwest437 · 1 month ago

    LLMs only imitate, they can't innovate

  • @Guywiththetypewriter · 1 month ago

    I think the key context that needs training, and that would resolve this issue, is a tool we actually shun in modern society. I'm an aerospace engineering lecturer, and I preach this tool to all my students. It's the tool that let Einstein and Newton become household names of genius and innovation. It's how NASA won the space race. The best tool in the human toolkit for learning... is our unfaltering moron. Gravity was the idea that a sightless, senseless thing pulled every single thing to the ground. Einstein is famous for being the one physicist brave and stupid enough to follow through on the idea of "what if time relatively slows down as you speed up". AI needs to be able to test utterly random and dumb ideas, even at the risk of those ideas and approaches being wrong. If you program an AI with the context that research and innovation are nonsensical steps, devoid of logic other than a guess when all logical avenues are spent, I think we could reach an approach that gets us over this hump.

    • @maalikserebryakov · 1 month ago

      You clearly have no clue about how time dilation was discovered, kid. No need to make up cringy stories. Lmao

    • @Guywiththetypewriter · 1 month ago

      @@maalikserebryakov I beg your pardon. I didn't get a first-class Master's degree in aerospace engineering to be called "kid" either 😂