Sunday, June 12, 2016

Inversion of Control [IoC] Vs. Dependency Injection [DI]

Inversion of Control [IoC] and Dependency Injection [DI]. They're related, because DI is a pattern that applies the IoC principle, but they're not the same thing.

Inversion of Control [IoC]

This concept describes application control flow. Whenever the responsibility for the order in which some code executes is handed over to a parent component, the flow of control is inverted, and you are in the presence of an inversion of control.

An event-driven system is an example of IoC. Your methods are triggered by events, but you are not in control of the order of those events or when they fire, meaning that you have lost control of the flow of your methods and delegated it to another component.

It doesn't have to be fancy, here's an example:

public class Plugin {
  public void onAction() {
    System.out.println("Action occurred.");
  }
}

Now imagine you gave this class to some other component:

public void main() {
  Plugin plugin = new Plugin();
  MasterController master = new MasterController(plugin);
}

In this example, you don't know when onAction will execute; it's not under your control. The Plugin class is no longer in charge of its full flow: the control is now in the hands of MasterController.
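To make this concrete, here's a minimal sketch of what such a MasterController might look like. Note that fireAction and the actionSeen flag are assumptions for illustration; the original example only shows the constructor:

```java
import java.util.ArrayList;
import java.util.List;

class Plugin {
  boolean actionSeen = false; // just so we can observe the inverted call

  public void onAction() {
    actionSeen = true;
    System.out.println("Action occurred.");
  }
}

public class MasterController {
  private final List<Plugin> plugins = new ArrayList<>();

  public MasterController(Plugin plugin) {
    plugins.add(plugin);
  }

  // Triggered by the controller's own event source, whenever it decides:
  // the plugin author no longer controls when onAction runs.
  public void fireAction() {
    for (Plugin plugin : plugins) {
      plugin.onAction();
    }
  }
}
```

The plugin only declares what happens on an action; the controller alone decides if and when that action fires.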

As in my example, IoC became prominent mostly as a way to produce a plugin architecture, where functionality could be extended at run-time by dynamically loading new code that plugged into a framework at known extension points. Similarly, frameworks often rely completely on IoC; in fact, a lot of people, myself included, like to distinguish a library from a framework based on this fact. A library has you calling into it, when you want, but a framework calls you, when it wants. This is often known as the Hollywood principle ("don't call us, we'll call you"), and it is the basis of Inversion of Control.

Dependency Injection [DI]

This concept is even simpler to understand. Whenever the things a component depends on are passed to it, instead of having it acquire or create them, you've got DI.

public class Guy {
  private BestFriend bestFriend;

  public Guy(BestFriend bestFriend) {
    this.bestFriend = bestFriend;
  }

  public void makeImportantDecision(String about) {
    if (bestFriend.thinksItsGoodIdea(about)) {
      // ...act on the advice
    }
  }
}
As you can see, Guy depends on BestFriend, but he does not create an instance of BestFriend or fetch one from anywhere; instead, he expects it to be passed in.

It doesn't matter how the dependencies are passed in: it could be through the constructor, through setter methods, or by any other means.
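For instance, here is a setter-injection sketch of the same Guy example. BestFriend is modeled as an interface here, which the original snippet doesn't show; that, and the boolean return on makeImportantDecision, are assumptions for illustration:

```java
// BestFriend as an interface, so any adviser can be injected.
interface BestFriend {
  boolean thinksItsGoodIdea(String about);
}

// Setter injection: the dependency is still passed in from the outside,
// just through a setter instead of the constructor.
public class Guy {
  private BestFriend bestFriend;

  public void setBestFriend(BestFriend bestFriend) {
    this.bestFriend = bestFriend;
  }

  public boolean makeImportantDecision(String about) {
    return bestFriend.thinksItsGoodIdea(about);
  }
}
```

Either way, Guy never decides where his BestFriend comes from; that decision stays outside the class.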

Note that the point is for the class to have all of its direct dependencies passed in to it. So it's fine to have a Factory passed in, and then use the Factory to acquire instances of something else and use those, but it's not OK for the class to use a static factory method, since that would be a dependency that's not passed in and would go against DI.
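The distinction can be sketched like this. All the names here (Repository, Connection, ConnectionFactory) are made up for illustration; they aren't from the original examples:

```java
// Made-up types for the sketch.
class Connection {
  private final String url;
  Connection(String url) { this.url = url; }
  String url() { return url; }
}

interface ConnectionFactory {
  Connection open();
}

// The factory itself is a passed-in dependency, so DI is preserved:
// Repository never reaches out to a static factory on its own.
public class Repository {
  private final ConnectionFactory factory;

  public Repository(ConnectionFactory factory) {
    this.factory = factory;
  }

  public String load() {
    Connection c = factory.open(); // fine: acquired via the injected factory
    // Connection c = Connections.open(); // NOT fine: hidden static dependency
    return "loaded from " + c.url();
  }
}
```

The injected factory is visible in the constructor signature; a static factory call would be an invisible dependency the caller can neither see nor replace.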

Why are they often mistaken for one another?

The confusion between IoC and DI stems from the fact that Dependency Injection is a form of Inversion of Control. Think of how a square is a rectangle, but a rectangle is not a square. It's the same thing here: DI is IoC, but IoC is not necessarily DI.

Let's revisit the Guy example. Does Guy have any say as to who his best friend is? Nope. Guy has no control over which friend he's going to ask for advice: the BestFriend is going to be passed in, and something else gets to decide what concrete instance of it to pass in. Guy has lost his control over what code to call into for the thinksItsGoodIdea method. A parent component is now in charge, which means the control was inverted.
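The parent component's side of this can be sketched as a plain composition root. The concrete friend classes and the Wiring class are made-up names for illustration:

```java
interface BestFriend {
  boolean thinksItsGoodIdea(String about);
}

// Two concrete friends the parent component can choose between.
class CautiousFriend implements BestFriend {
  public boolean thinksItsGoodIdea(String about) { return false; }
}

class EnthusiasticFriend implements BestFriend {
  public boolean thinksItsGoodIdea(String about) { return true; }
}

class Guy {
  private final BestFriend bestFriend;
  Guy(BestFriend bestFriend) { this.bestFriend = bestFriend; }
  boolean makeImportantDecision(String about) {
    return bestFriend.thinksItsGoodIdea(about);
  }
}

// The composition root, not Guy, decides which concrete BestFriend is
// used: control over what code thinksItsGoodIdea calls has moved up.
public class Wiring {
  public static void main(String[] args) {
    Guy guy = new Guy(new CautiousFriend());
    guy.makeImportantDecision("quit my job");
  }
}
```

Swapping CautiousFriend for EnthusiasticFriend changes Guy's behavior without touching Guy at all, which is exactly the inversion.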

In frameworks like Spring, DI is used extensively as a method for implementing a plugin architecture and a form of IoC. You can choose between different components and wire them together as you wish inside Spring. Spring then becomes responsible for a lot of the control flow decisions, allowing you to plug components in and out.

Saturday, June 11, 2016

Artificial Intelligence - Thoughts in Philosophy

Recently I've been reading more and more from prominent scientists and engineers who warn against the dangers of artificial intelligence. It got me wondering...

What is intelligence anyways?

When I dissect this down to its minimal form, I feel like intelligence is simply the level at which one is able to change the physical world into a form that satisfies a certain need, or at least to be able to conceive of a way to do so.

With this definition, we, as humans, definitely have a high level of intelligence, proven by how much we've managed to reshape our world to our needs. Even a paralyzed man, with no means to physically alter the world, can still, if his mind is intact, conceive of ways to do so; Stephen Hawking is a good example.

So it could be said that intelligence is the level at which one can conceive of ways to physically alter the physical constructs of his environment.

Well then, could a computer ever be intelligent?

There's a missing piece of the puzzle here. You see, being able to conceive of ways to change your surroundings implies that you also have motives to do so. In fact, there would be no point in having this capacity: without a need to be fulfilled, one would never exercise such a capacity even if one had it.

You need an objective to decide what change must be made.

Without an objective, you'd at best be randomly conceiving of ways to change things. There are things in our universe which seem to exhibit such properties: the wind, the planet's core, thunder, fire, etc. We tend to regard these as non-intelligent phenomena.

This means my definition is incomplete. Intelligence is the level at which one can conceive of ways to physically alter the physical constructs of his environment in the way he wants.

To want anything, one must have needs. Thus, you cannot be fully intelligent if you don't have needs. Similarly, you cannot be fully intelligent without the ability to conceive of ways to change the world.

Is that all?

Actually, no. I mentioned earlier that you need not have the capacity to alter the world, simply to conceive of ways to do so, but I don't think that's totally correct. To truly qualify as intelligent, you'd need to be observably intelligent. Maybe plants have a hundredfold our capacity to conceive of ways to change the world, and do possess needs of their own, but alas, with their limited ability to act on those conceptions, one would be hard-pressed to ever qualify them as intelligent.

Finally, I say: intelligence is the level at which one can observably alter the physical constructs of one's environment so as to satisfy one's needs.

Should we be worried about intelligent machines?

To create an intelligent machine, one would need to:

  1. Create a machine with needs.
  2. Create a machine that conceives of ways to fulfill needs.
  3. Create a machine that can physically apply preconceived ways to alter the environment.

With that perspective:

  • We should be very careful with #1.

The reason I say that is that if a machine had needs, chances are they would conflict with our own, and that's exactly when you get into a dangerous spot. Really, we should avoid that one completely; it doesn't even provide us with any benefit.

  • We should be very careful combining #2 and #3.

This one is more subtle. Ideally, we'd want to tell the machine, "this is what I need," and it would go off, think of a way to meet that need, and execute on it all by itself. That's when you get into those sci-fi movie tropes of the machine that enslaves all humans because it figured out that was the best way to create world peace. So I'd say we'd want to keep these two separate, so that we can audit all conceived ideas first, before handing them to another machine to execute.


In the end, I think the scientists and engineers who are warning us of the potential dangers of AI are mostly right. If we can and ever do create a machine with full observable intelligence, it would definitely have the potential to put us all at risk. Having said that, I think there's great opportunity for improving our lives if we could build independent machines that each had partial intelligence, and thus it is probably worth keeping research going toward these goals.