The relationship between cause and effect underpins much of our in-depth reasoning about the world around us, and we leverage it in a diverse set of applications. We aim to take this causal way of thinking beyond statistical analysis: the robot, as an active agent, conducts interventions and learns the underlying causal graph in the real world. Further, we convert hierarchical reinforcement learning into a planning problem by using human demonstrations of a task to build a library of options, which we can then compose into diverse policies that solve novel tasks.
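The idea of composing a library of options into a plan can be sketched as follows. This is a minimal illustration, not the actual method: the option names, their symbolic preconditions and effects, and the use of breadth-first search are all hypothetical assumptions made for the example.

```python
# Sketch: treating task solving as planning over a library of options.
# All option names, preconditions, and effects here are hypothetical.
from collections import deque

# Hypothetical option library: name -> (preconditions, effects),
# each given as a set of abstract state predicates.
OPTIONS = {
    "pick":  ({"hand_empty"}, {"holding"}),
    "place": ({"holding"}, {"hand_empty", "object_on_table"}),
    "push":  ({"object_on_table"}, {"object_at_goal"}),
}

def plan(start, goal):
    """Breadth-first search over abstract states; returns a sequence
    of option names reaching a state that satisfies the goal."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:          # all goal predicates hold
            return path
        for name, (pre, eff) in OPTIONS.items():
            if pre <= state:       # option is applicable
                nxt = frozenset((state - pre) | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None                    # no composition of options suffices
```

For example, `plan({"hand_empty"}, {"object_at_goal"})` composes the three hypothetical options into `["pick", "place", "push"]`; a novel goal reuses the same library without retraining the individual policies.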