CausalAI

Videos

Causal Data Science -- Elias Bareinboim (@ 1st Workshop on Interactive Causal Learning)
2.4K views · 2 years ago
Keynote by Professor Elias Bareinboim on "Causal Data Science: A general framework for causal inference and fusion (through computational lenses)," which was part of the First International Workshop on Interactive Causal Learning on June 2nd, 2022.
Tutorial Causal Fairness Analysis (ACM FAccT'21)
2.2K views · 3 years ago
Tutorial presented by Professor Elias Bareinboim entitled "Causal Fairness Analysis", which took place at the ACM FAccT conference, Mar/4, 2021. This is joint work with Junzhe Zhang and Drago Plecko. Slides: fairness.causalai.net/
Towards Causal Reinforcement Learning (Tutorial)
2.6K views · 3 years ago
Tutorial presented by Professor Elias Bareinboim entitled "Towards Causal Reinforcement Learning", which took place at Tel-Aviv, July 22nd, 2019.
Elias Bareinboim -- Causal Data Science Keynote
1.8K views · 3 years ago
Talk by Professor Elias Bareinboim on "Causal Data Science: A general framework for causal inference and fusion," which was part of the Causal Data Science Meeting on November 12th, 2020.
Judea Pearl -- Data versus Science: Contesting the Soul of Data-Science [CIFAR]
1.9K views · 3 years ago
Talk by Professor Judea Pearl on "Data versus Science: Contesting the Soul of Data-Science", which took place at the CIFAR Learning in Machines and Brains Meeting on July 30, 2020. The slides can be found here: causalai.net/cifar-july2020.ppt . Due to some technical issues with Zoom, the speaker couldn't see the slides, and some of the slides were not fully synchronized with the video; we apologize...
"On the Causal Foundations of AI" -- MSR Frontiers of Machine Learning (Elias Bareinboim)
1.9K views · 3 years ago
Talk by Professor Elias Bareinboim on "On the Causal Foundations of Artificial Intelligence (Explainability & Decision-Making)" , which appeared at MSR's Frontiers of Machine Learning, July, 21, 2020. This talk is based in part on the chapter "On Pearl’s Hierarchy and the Foundations of Causal Inference" (E. Bareinboim, J. Correa, D. Ibeling, T. Icard), link: causalai.net/r60.pdf .
Causal Reinforcement Learning -- Part 2/2 (ICML tutorial)
7K views · 3 years ago
Second part of the tutorial presented by Professor Elias Bareinboim on "Causal Reinforcement Learning", which took place at ICML-2020 (online), July 13, 2020. For further details, references, and the slides, see crl.causalai.net .
Causal Reinforcement Learning -- Part 1/2 (ICML tutorial)
17K views · 3 years ago
First part of the tutorial presented by Professor Elias Bareinboim on "Causal Reinforcement Learning", which took place at ICML-2020 (online), July 13, 2020. For the second part of this tutorial, see: ua-cam.com/video/2hGvd_9ho6s/v-deo.html For further details, references, and the slides, see crl.causalai.net .
Elias Bareinboim -- Causal Data Science
8K views · 4 years ago
Talk by Professor Elias Bareinboim on "Causal Data Science: A general framework for data fusion and causal inference", which took place at Columbia University, April, 1, 2019.
Judea Pearl -- The Foundations of Causal Inference [The Book of WHY]
16K views · 4 years ago
WHY-19 keynote speech by Professor Judea Pearl on The Book of Why and the foundations of causal inference, which took place at Stanford University, March 25, 2019. The slides can be found here: why19.causalai.net/papers/why19-pearl.ppt For more information about the WHY-19 symposium, see why19.causalai.net. Credits: Video recording: Carlos Cinelli (UCLA) and Murat Kocaoglu (IBM Research); Edit...

COMMENTS

  • @raminsafizadeh · 5 months ago

    Can barely understand a word! It is borderline rude to be so nonchalant about pronunciation and clarity of speech.

  • @edupignatelli · 1 year ago

    Is there any publication that puts these presentations in formal writing?

  • @ericfreeman8658 · 1 year ago

    53:45 For counterfactual decision-making: "Agents usually act in a reflexive manner without considering the reasons or the causes for behaving in a particular way. Whenever this is the case, they can be exploited without ever realizing it." I think this is just what people do in RL as exploration, e.g., ε-greedy. Is there any difference, or did I miss anything?
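
A minimal sketch of the ε-greedy exploration the comment mentions (illustrative only, not code from the talk): the action choice here is "reflexive" in exactly the quoted sense, since the agent never reasons about why it was about to pick a given arm.

```python
import random

def eps_greedy(q_values, eps=0.1):
    """Reflexive action selection: no reasoning about WHY an arm looks good."""
    if random.random() < eps:
        return random.randrange(len(q_values))              # explore at random
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit

# Example: estimated values for 3 arms
print(eps_greedy([0.2, 0.5, 0.1]))
```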

  • @user-wr4yl7tx3w · 1 year ago

    7:50

  • @PengZhenwu · 1 year ago

    very good course!

  • @mbomba4415 · 1 year ago

    Thanks For Sharing

  • @jimmychen4796 · 1 year ago

    hard to follow, really badly explained

  • @PengZhenwu · 1 year ago

    very interesting course!

  • @Fun-bz7ou · 1 year ago

    What's the difference between X and do(X)?

    • @rugdeeplearn7420 · 6 months ago

      X represents a variable whose state is observed (i.e., you don't decide its value; you get it from data). By contrast, do(X) represents deliberately setting the value of that variable (i.e., you DO an action that corresponds to fixing your variable at x=X).
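
A minimal simulation sketch of this distinction (variable names and numbers are illustrative, not from the video): in a confounded model, conditioning on seeing X=1 and forcing X=1 give different answers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Confounded SCM: Z -> X, Z -> Y, and X -> Y.
z = rng.binomial(1, 0.5, n)                    # hidden common cause
x_obs = rng.binomial(1, 0.2 + 0.6 * z)         # X "listens" to Z
y_obs = rng.binomial(1, 0.1 + 0.3 * x_obs + 0.4 * z)

# Observational quantity P(Y=1 | X=1): condition on seeing X=1.
p_cond = y_obs[x_obs == 1].mean()

# Interventional quantity P(Y=1 | do(X=1)): cut the Z -> X edge
# by forcing X=1 for everyone, leaving the Y mechanism intact.
x_do = np.ones(n, dtype=int)
y_do = rng.binomial(1, 0.1 + 0.3 * x_do + 0.4 * z)
p_do = y_do.mean()

print(f"P(Y=1 | X=1)     ~ {p_cond:.3f}")   # ~0.72, inflated by confounding via Z
print(f"P(Y=1 | do(X=1)) ~ {p_do:.3f}")     # ~0.60, the pure causal effect of X
```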

  • @MrKrtek00 · 1 year ago

    Is it me, or does he have the best porn name ever? Anyway, it is a great talk.

  • @c.s.842 · 1 year ago

    Terrible sound. Is it beyond MIT's capabilities to furnish the speaker with a body microphone? I have missed half of this very interesting lecture. What a shame!

  • @EmperorsNewWardrobe · 2 years ago

    35:06 THE 7 PILLARS OF CAUSAL WISDOM
    47:12 Pillar 1: graphical models for prediction and diagnosis
    57:08 Pillar 2: policy analysis deconfounded
    1:19:15 Pillar 3: the algorithmization of counterfactuals
    1:23:29 Pillar 4: formulating a problem in three languages
    1:36:35 Pillar 5: transfer learning, external validity, and sample selection bias
    1:50:19 Pillar 6: missing data
    1:50:55 Pillar 7: causal discovery

  • @davidrandell2224 · 2 years ago

    AI will never 'know' the cause of gravity. Even though Galilean relative motion gives 50/50 odds that the earth approaches the released object: gravity. Cause of gravity: the earth is expanding at 16 feet per second constant acceleration. Common knowledge since 2002: "The Final Theory: Rethinking Our Scientific Legacy", Mark McCutcheon. Try to keep up.

  • @kamalakbari5609 · 2 years ago

    Thanks for the nice talk, Elias!

  • @ewertondeoliveira1540 · 2 years ago

    What is the intuition behind the "remainder" at 1:13:50?

  • @AjayTalati · 3 years ago

    What does he mean when he says the agent's causal graph G captures the "invariants" of the SCM M of the environment? Any simple example?

    • @spitfirerulz · 2 years ago

      I think it means that information about the causal relationships of the variables in an SCM M (e.g., which variables "listen" to which others) can be adequately described by the graph G. This captures the key properties of the causal relationships that do not vary across circumstances. We would still need M because, for instance, we need to describe whether the functions are linear, complicated, etc.
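
A toy sketch of that reading (hypothetical code, not from the talk): two SCMs with different mechanisms for Y but the same graph G: X → Y. The graph records only the invariant "who listens to whom"; the SCM additionally pins down the functional form and the noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two SCMs over the same graph G: X -> Y. G captures the invariant
# structure (Y listens to X, not vice versa, no confounding); the SCM
# additionally fixes the mechanism f_Y.
def scm_linear(n):
    x = rng.normal(size=n)
    return x, 2.0 * x + rng.normal(size=n)          # f_Y is linear

def scm_nonlinear(n):
    x = rng.normal(size=n)
    return x, np.sin(x) ** 2 + rng.normal(size=n)   # f_Y is nonlinear

for name, scm in [("linear", scm_linear), ("nonlinear", scm_nonlinear)]:
    x, y = scm(100_000)
    print(name, "corr(X, Y) ~", round(float(np.corrcoef(x, y)[0, 1]), 2))
```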

  • @olivrobinson · 3 years ago

    Really clear and awesome material. Thank you for this! I'll be moving on to the next video...

  • @michaeltamillow9722 · 3 years ago

    29:00 - the example doesn't make any sense, since it says people are exercising MORE as they get older. In fact, based on the chart ALL 50 year olds exercise more than ALL 20 year olds. The logic of the eXercise axis is conveniently ignored to prove a point. Not a good example, and hopefully not how you conduct science...

    • @michaeltamillow9722 · 3 years ago

      I should mention that I understand Simpson's paradox, I am simply commenting on the specific, contrived, example that does not work. I am not even fully convinced that Cholesterol (the latent variable) might be correlated with age between the ranges of 10 and 50 if all other factors are held constant.

    • @SterileNeutrino · 1 year ago

      @michaeltamillow9722 Good point. This diagram is actually on page 212 of the "Book of Why"; something has gone wrong with that example. Maybe the 40 and 50 clouds should be shifted to the left? But it's all about projecting a high-dimensional point cloud onto fewer dimensions the wrong way, yielding a meaningless result, here one about the "typical person". (Cholesterol is also probably mostly correlated with sugar uptake IRL, but that's for some other time 🙂) Fun: "Yule-Simpson’s paradox in Galactic Archaeology"
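
A small numeric illustration of the reversal being debated here (the numbers are made up, not the book's): within each age group more exercise goes with lower cholesterol, but because older people both exercise more and start with higher cholesterol, the pooled slope flips sign.

```python
import numpy as np

rng = np.random.default_rng(2)

# Within each age group, cholesterol DROPS with exercise (slope -1),
# but older people exercise more and have a higher baseline.
def age_group(n, base_chol, base_ex):
    exercise = base_ex + rng.normal(size=n)
    chol = base_chol - (exercise - base_ex) + rng.normal(size=n)
    return exercise, chol

ex_y, ch_y = age_group(10_000, base_chol=150, base_ex=1.0)   # 20-year-olds
ex_o, ch_o = age_group(10_000, base_chol=200, base_ex=5.0)   # 50-year-olds

slope = lambda x, y: float(np.polyfit(x, y, 1)[0])
print("slope within young:", round(slope(ex_y, ch_y), 2))    # ~ -1
print("slope within old:  ", round(slope(ex_o, ch_o), 2))    # ~ -1
print("slope pooled:      ", round(slope(np.concatenate([ex_y, ex_o]),
                                         np.concatenate([ch_y, ch_o])), 2))  # positive: sign flips
```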

  • @BenOgorek · 3 years ago

    51:15 head hurting thinking about distinction between watching an agent do() something and watching an agent do something

    • @CausalAI · 3 years ago

      Hey Ben, I think the discussion after this summary slide may provide further elaboration, but let me know... -E

  • @WannabePianistSurya · 3 years ago

    God that question session was painful to watch.

  • @JamesWattMusic · 3 years ago

    Interesting talk. I have a question about the vaccine example around 24:00. Why would you say the vaccine is "good" if it killed more people than the disease? Why would eradicating a disease with a more deadly cure be a good solution? For example, if a disease kills X people every year, should we kill 2X, 3X... 10X, 100X the people at once to eradicate it? It's an ethical problem. Thanks

  • @loljustice31 · 3 years ago

    Very informative, thank you for uploading.

  • @sujith3914 · 3 years ago

    It is unfortunate that the mindset that scaling up is sufficient to achieve the most sophisticated AI is a rather prevalent one, and not one adopted by only a few. I guess there is a bright side to it: it gives people with very few resources a chance to make good contributions as well, because just scaling up is not sufficient.

    • @CausalAI · 3 years ago

      Hi Sujith, my hope with the tutorial is that if the examples and tasks are minimally clear, the understanding that scaling up is not the only issue will follow naturally. In other words, there is no controversial statement here; it's just basic logic. We deliberately designed the minimal, easiest possible examples so that this point could be understood; obviously, things get more involved in larger settings.

  • @silent_monk · 3 years ago

    Thanks for the great talk. Is there a rough timeline for when we can expect the survey paper to be released? Looking forward to it.

    • @CausalAI · 3 years ago

      Hi Rootworn41, we are working on it, I am hoping to have good news soon! Thanks!

  • @kennethlee143 · 3 years ago

    This is an inspirational talk. I wish I could meet Elias in person one day!

    • @sujith3914 · 3 years ago

      I know, right? When he goes off on a tangent, the talk gets even more interesting. I wish there were a platform where he is asked to just speak his mind, without any time limit, just outlining his interests, passion, vision, etc.

  • @Ewerlopes · 3 years ago

    Amazing. I have been trying to digest the causal inference literature for quite a while now. I was not happy with the limitations of "association" methods. I really think that CI will offer us the next level in the development of general AI. Thanks for the talk, Prof. Elias!

  • @shashank7601 · 3 years ago

    I'm not sure if this is the right place to ask this question, but if, hypothetically, you give an arbitrary SCM to an RL agent, will it then be able to perform all layers of the ladder of causation, including counterfactuals? And what would this arbitrary SCM look like (i.e., how is it robust enough to perform counterfactuals)? Is this SCM just hard-coded if-then statements given to the agent?

    • @CausalAI · 3 years ago

      Hi there, that's an excellent and somewhat popular question; thank you for the opportunity to clarify. I hypothesize it is popular since it goes against our strongly held belief that all we need is more data, or that data is enough, which is not the case in causal inference. I'll try to elaborate next. Given a fully specified SCM, all three layers (i.e., any counterfactual) are immediately computable through definitions 2, 5, and 7, as discussed in the PCH chapter (causalai.net/r60.pdf). Call this SCM M. Unfortunately, there is NOTHING about the output of M's evaluation that makes it more or less related to the actual SCM that underlies the environment, say M*. The first main result in the aforementioned chapter is called the "Causal Hierarchy Theorem" (CHT) (page 22), which says that even if we train the SCM M with layer 1 data, it still doesn't say anything about layers 2 or 3. I will leave you to check this statement (hint: the chapter should help). In other words, it doesn't make much sense to ask about the "robustness" of M's predictions, given that they are unrelated to M*. Cheers, Elias
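
A minimal sketch of the first half of that answer (illustrative SCM, not the chapter's definitions verbatim): given a fully specified M, all three layers are mechanically computable; the CHT's point is that none of this says anything about the environment's true M*.

```python
from itertools import product

# Illustrative fully specified SCM M (made up for this sketch):
#   U_x ~ Bern(0.6), U_y ~ Bern(0.3)
#   X := U_x
#   Y := X XOR U_y
P_U = {(ux, uy): (0.6 if ux else 0.4) * (0.3 if uy else 0.7)
       for ux, uy in product([0, 1], repeat=2)}
f_x = lambda ux: ux
f_y = lambda x, uy: x ^ uy

# Layer 1 (association): evaluate M as-is.
l1 = sum(p for (ux, uy), p in P_U.items() if f_y(f_x(ux), uy) == 1)

# Layer 2 (intervention): do(X=0) replaces f_x by the constant 0.
l2 = sum(p for (ux, uy), p in P_U.items() if f_y(0, uy) == 1)

# Layer 3 (counterfactual): P(Y_{X=0}=1 | X=1, Y=1) via
# abduction (update over U), action (do(X=0)), prediction.
evid = {u: p for u, p in P_U.items() if f_x(u[0]) == 1 and f_y(1, u[1]) == 1}
l3 = sum(p for (ux, uy), p in evid.items() if f_y(0, uy) == 1) / sum(evid.values())

print(f"P(Y=1)                  = {l1:.2f}")   # 0.54
print(f"P(Y=1 | do(X=0))        = {l2:.2f}")   # 0.30
print(f"P(Y_X=0 = 1 | X=1, Y=1) = {l3:.2f}")   # 0.00, differs from layer 2
```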

  • @nihaarrshah · 4 years ago

    Hi Professor, are there slides available for this talk? Thank you! - Nihaar (your student)

    • @CausalAI · 3 years ago

      Hi Nihaar, I just saw your msg. See slides here: crl.causalai.net . Thanks, EB

    • @nihaarrshah · 3 years ago

      @CausalAI thanks, prof!

  • @zeeeeeeeeeavs · 4 years ago

    Does anyone know where I can find the mathematical proof that each level of the 3-level hierarchy of causality (association, intervention, counterfactuals) needs information from that level or above?

    • @CausalAI · 3 years ago

      Hi Kalyana, I think you will enjoy this chapter -- causalai.net/r60.pdf .

    • @zeeeeeeeeeavs · 3 years ago

      CausalAI Thank you so much! This is great!

  • @zeeeeeeeeeavs · 4 years ago

    Thank you so much for the talk. Would you share the slides with us?

  • @fairuzshadmanishishir8171 · 4 years ago

    nice speech

  • @Wavams · 4 years ago

    40:39 add ] on first line

  • @Wavams · 4 years ago

    At 38:18: should "2: off-policy learning" read "agent learns from other agents' experiments", or some other word instead of 'actions'? Trying to see how to motivate the difference between samples from do(x) and x between 2 and 3.

  • @fairuzshadmanishishir8171 · 4 years ago

    Good Speech