Monday, December 26, 2016

CPU vs GPU (Deep Learning / AI)

Why bother comparing CPU vs GPU? Because it explains how parallel processors are revolutionizing the Deep Learning / AI world.

CPU, or Central Processing Unit: an electronic microprocessor designed for sequential processing of tasks/programs, as described by the von Neumann architecture (1945). Von Neumann was working on the Manhattan Project at Los Alamos National Laboratory and wanted to devise a machine that could do lots of calculations, so he designed the CPU architecture. He was, of course, influenced by the great British mathematician Alan Turing.

The reason I give this background on the CPU is so you understand the era in which its architecture was designed.

GPU, or Graphics Processing Unit: many of us don't realize that a GPU exists in almost every modern computer, mobile phone, and display unit. It's the workhorse that does all the graphics-related work, like calculating vectors/polygons and everything else in the graphics pipeline. In short, doing the math required to display video games, movies, and images is the GPU's responsibility.

Initially the CPU did all graphics processing, but the growing size of movies, video games, and images led to the rise of GPUs. The CPU is still the center of a computer; it just outsources graphics-processing work to the GPU. A GPU is capable of many times more calculations per second (billions of them) than a CPU because of its enormous number of cores.

CPU vs GPU

“A CPU consists of a few cores optimized for sequential serial processing while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously” – NVIDIA


CPUs and GPUs have fundamentally different design philosophies; comparing them is comparing apples to oranges. Each excels in its own respective domain. Here are a few basic differences between them:

- Cores: a CPU has multiple cores; a GPU has hundreds or thousands of cores.
- Purpose: a CPU is a general-purpose processor that handles all kinds of activities; a GPU is a special-purpose processor built for polygon-based 3D graphics.
- Code: the serial portions of code run on the CPU; the parallel portions run on the GPU.
- Hardware: a CPU has a higher operating frequency and more registers; a GPU has a lower operating frequency and fewer registers.
- Workloads: a CPU excels at file handling, branching, and serial tasks; a GPU is better at math-heavy work like polygons, vector math, and transformations.
- In one word each: the CPU is sequential; the GPU is parallel.

GPUs are, in general, targeted at gaming.
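The serial-vs-parallel distinction above can be sketched in plain Python. This is only an illustration of the two execution models (the "GPU" here is simulated with a thread pool; no real GPU is involved):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # The same tiny operation applied to every element --
    # exactly the kind of uniform work a GPU spreads over thousands of cores.
    return x * x

data = list(range(8))

# CPU style: one worker walks the list sequentially.
serial = [square(x) for x in data]

# GPU style (simulated): a pool of workers handles elements independently.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(square, data))

print(serial == parallel)  # -> True: same result, different execution model
```

Both produce the same answer; the difference is whether one core does all the work in order or many workers each do a small piece at once.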

Deep learning:

With the rapid advancement of GPU processing power, researchers and businesses have started using GPUs for Deep Learning / AI.

“Deep learning is the fastest-growing field in artificial intelligence, helping computers make sense of infinite amounts of data in the form of images, sound, and text” – NVIDIA

Deep learning is nothing but trying to solve complex problems by creating neural networks that work somewhat like the human brain. A network learns from the operations it performs and tries to evolve and get better as it does more and more of them; basically, it tries to become like a human brain.

Today's advanced deep neural networks use algorithms, big data, and the computational power of the GPU to change this dynamic. Machines can now learn at a speed, accuracy, and scale that drive true artificial intelligence.
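The learning loop described above can be sketched in a few lines of plain Python: a single artificial neuron (a perceptron) that learns the logical AND function by nudging its weights after every mistake. This is a toy illustration, not how production networks are built; real deep networks stack millions of such units and train on GPUs, but the principle is the same.

```python
# A single perceptron learning the AND function.
weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x):
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

for epoch in range(20):            # repeat over the data
    for x, target in samples:
        error = target - predict(x)  # learn from mistakes
        weights[0] += lr * error * x[0]
        weights[1] += lr * error * x[1]
        bias += lr * error

print([predict(x) for x, _ in samples])  # -> [0, 0, 0, 1]
```

After a handful of passes over the data, the neuron's weights settle and it answers the AND function correctly; that "adjust after every mistake" loop is, at heart, what training a deep network does at massive scale.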

Deep learning is used in the research community and in industry to help solve many complex problems, like genetic simulation and molecular biology simulation.

Parallel Processors Framework

The following APIs help programmers manage parallelism and data delivery on massively parallel processors (GPUs):

OpenCL, initiated by Apple (now maintained by the Khronos Group)
DirectCompute by Microsoft
CUDA by NVidia

Using any of the above frameworks, you can write programs to analyze or solve problems like complex math calculations, molecular simulation, genetics, or even atom-level simulations.
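All three frameworks share the same programming model: you write a small kernel function that handles one data element, and the runtime launches it across thousands of GPU threads. The sketch below imitates that model in plain Python with no real GPU; `launch` stands in for a CUDA-style kernel launch, and `idx` plays the role of the thread index.

```python
def launch(kernel, n, *arrays):
    # Stand-in for a GPU kernel launch: a real runtime would run
    # these n iterations on thousands of hardware threads at once.
    for idx in range(n):
        kernel(idx, *arrays)

def add_kernel(idx, a, b, out):
    # Each "thread" handles exactly one element.
    out[idx] = a[idx] + b[idx]

a = [1, 2, 3, 4]
b = [10, 20, 30, 40]
out = [0] * 4
launch(add_kernel, 4, a, b, out)
print(out)  # -> [11, 22, 33, 44]
```

The key design idea is that the kernel never loops over the data itself; each invocation touches one index, which is exactly what lets the GPU run them all in parallel.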


Sunday, December 4, 2016

What is IoT?



What is IoT? The Internet of Things: devices connected to the internet.

In simple words, these are devices that are connected to the internet, interact with each other, and collect and exchange data. A device connected to the internet can be controlled or operated from a remote control room.

History:

Necessity is the mother of invention. The retail industry was struggling to track items and inventory to meet demand and supply. Researchers at MIT found a way to improve Procter & Gamble's business by linking RFID info to the internet; this combination of RFID and the internet led to the era of IoT.

The term IoT was first coined by Peter T. Lewis in a 1985 speech to the FCC, in which he said, "IoT is the integration of people, processes and technology with connectable devices."
Kevin Ashton is one of the key figures in the development of IoT; while working at MIT's Auto-ID lab, he developed a technique for linking RFID information to the internet. Ashton wrote:
"RFID and sensor technology enable computers to observe, identify and understand the world—without the limitations of human-entered data"

Though it was a simple automation solution at the time, it is not simple anymore.


The bottom line of IoT: automation, no human intervention, fewer errors, improved productivity, machine-to-machine communication, and cost savings.

Things that make IoT work: (Sensor | IP address | Internet | Cloud | You)


Actually, it's the convergence of multiple technologies that makes IoT work. But here are the main components of IoT:
Sensor: "a device that detects or measures a physical property and records, indicates, or otherwise responds to it" – Wikipedia. It can be any kind of sensor: heat, pressure, bio, touch, ... In case you didn't know, your (human) body is full of sensors!!
IP address: thanks to IPv6; without it, IoT would be tough to implement. IPv6 uses 128-bit addresses, so there are theoretically 2 ^ 128 of them. With IPv6 we can assign a unique address to each device.
Internet: another major component; through it we transfer data and communicate.
Cloud: via the internet we store the sensor data, then process and analyze it.
You: you, a smartphone, or a machine will have the information about a device. You decide what to do with that data.
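As a quick sanity check of the IPv6 numbers, Python's standard ipaddress module can show both the size of the address space and what a 128-bit address looks like (the address below is from the IPv6 documentation range, used here purely as an example):

```python
import ipaddress

# 128-bit addresses: 2 ** 128 possibilities.
total = 2 ** 128
print(total)  # -> 340282366920938463463374607431768211456

# Every device can get its own globally unique address, e.g.:
addr = ipaddress.IPv6Address("2001:db8::1")
print(addr.exploded)          # -> 2001:0db8:0000:0000:0000:0000:0000:0001
print(len(addr.packed) * 8)   # -> 128 (bits in the address)
```

That is roughly 3.4 x 10^38 addresses, which is why IPv6 removes addressing as an obstacle for IoT.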

Here is a hypothetical scenario of how above components work together:
You have installed a temperature-monitoring sensor at your home. The sensor detects that the air temperature inside your home is above 85 °F. The sensor is connected to the internet, so it sends the data to the cloud. Your smartphone is hooked up to the cloud data, registered to receive alerts. In the cloud you have logic: if the air temperature rises above a certain level, inform the registered devices. So you get an alert notification, and you can respond to it by telling the system to reduce the temperature. The signal then travels in reverse order: to the cloud, then from the cloud to the sensor via the internet, and the sensor-side system reduces the temperature.
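The cloud-side logic in that scenario is just a threshold rule. Here is a minimal sketch in Python; all the names here (handle_reading, the device list) are hypothetical, and it only shows the decision flow, not any real cloud service:

```python
THRESHOLD_F = 85.0  # alert when inside air temperature exceeds this

def handle_reading(temp_f, registered_devices):
    """Cloud-side rule: alert every registered device when it's too hot."""
    if temp_f > THRESHOLD_F:
        return [f"ALERT to {dev}: temperature is {temp_f} F"
                for dev in registered_devices]
    return []  # below threshold: nothing to do

print(handle_reading(83.0, ["my-phone"]))  # -> []
print(handle_reading(88.5, ["my-phone"]))  # -> one alert message
```

A real deployment would add authentication, message queues, and device management around this rule, but the core "if sensor value crosses threshold, notify" logic stays the same.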

IoT is heavily used in industrial automation, health care, power & utilities, and many other fields. You can even monitor your refrigerator.

Create your own IoT device:


With a little effort you can create your own IoT device. I have listed the things you need to create one.

Scripting/Programming language. (ex: Python, C)
You need to know a scripting/programming language. With just basic if-else conditions you can create a script to do on/off functionality. If you need it, here is a link to learn Python.
https://wiki.python.org/moin/BeginnersGuide/Programmers
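The on/off functionality mentioned above really is just an if-else. A toy sketch with no real hardware involved (toggle and device_on are hypothetical stand-ins for whatever pin or relay your board exposes):

```python
device_on = False  # stand-in for a real GPIO pin / relay state

def toggle(command, state):
    # The whole "smart switch": turn on, turn off, or leave as-is.
    if command == "on":
        return True
    elif command == "off":
        return False
    return state  # unknown command: keep current state

device_on = toggle("on", device_on)
print(device_on)   # -> True
device_on = toggle("off", device_on)
print(device_on)   # -> False
```

On a real board, the `return True` / `return False` lines would instead drive a pin high or low; the control logic is this simple.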

Basics of Electronics
You should know the basics of electronics, like voltage, current, resistors, capacitors, and a couple of others. Here is a link if you need to learn or brush up.
https://www.clear.rice.edu/elec201/Book/basic_elec.html

Arduino or Raspberry pi
Both are often misunderstood regarding their usage. Here is the basic difference between them:
Arduino is a microcontroller board / basic computer; you can run just one program at a time. (Costs around $20)
Raspberry Pi is a full-fledged computer with a Linux OS; you can run multiple programs at a time. (Costs around $35)

I suggest you start with Arduino, because it’s simple and easy to learn.

Download Arduino IDE: https://www.arduino.cc/en/Guide/HomePage
Buy a basic Arduino board
Follow: https://www.arduino.cc/en/Tutorial/HomePage






Wednesday, October 26, 2016

Tuesday, October 18, 2016

What is software configuration management, SCM, CM?

                              Software Configuration Management

History
What is SCM
Modules of SCM
Configuration Manager

History:
The US Department of Defense maintains and operates tons of varieties of military equipment. Obviously, that's tedious if you don't have a system to track and maintain the items. So they approached the Software Engineering Institute (at CMU) to develop a better process for tracking, maintaining, and reporting the status of military inventories. SEI and other groups of industry experts developed CMMI; SCM is actually part of CMMI. CMMI contains 22 process areas, all describing how to improve the SDLC from requirements to deployment (the complete SDLC). The CMMI model can be adopted by non-software industries as well.

From here on, I talk specifically about SCM (for software / IT).
Basically, what SEI and the others did was gather all of the existing knowledge and experience of software development (and other industries). They went through the day-to-day activities of configuration management, pointed out the main tasks, and so on. Out of this knowledge, experience, and forethought they developed a process called SCM.

The main idea behind the process is to reuse the existing knowledge of a domain (software or any other), then refine it, improve it, and follow it. (In other words, don't reinvent the wheel.)

Processes are key to the success of any project, because every individual has different ideas, which creates confusion. "Process brings order out of chaos."

What is SCM:
“The purpose of Configuration Management (CM) is to establish and maintain the integrity of work products using configuration identification, configuration control, configuration status accounting, and configuration audits.” – CMMI. I know it's a bit vague; since SCM can be adopted by other industries, they defined it in generic terms.

Simplified, the purpose of SCM is to:
Identify all the documents (design, test plan, ...), source code, and software/hardware related to a project.
Control accidental releases of software into prod/QA, prioritize the bugs, ...
Status: track approved changes, revision history, change logs, change request records, ...
Audit: basically check whether your project actually follows the CM process according to the SCM standard.

SCM is involved in the entire SDLC. At first people think it's just source code control / a repository, but it goes beyond that: it is involved from requirements to deployment. Don't believe me? Here you go:

a          Documents repository (requirements, designs, project plans, test plan, release plan, deployment plan, ...)
b          Source control/repository
c          Build management (branching / merging / baseline / backup the source code)
d          Release management
e          Deployment management

Typical SCM in a project looks like this: a configuration manager, who is responsible for overall CM-related activities, like creating a CM plan, build plan, and release plan, conducting CM audits, and spreading CM process knowledge across the team.

Agile development methodology introduced the terms Continuous Integration / Continuous Deployment; basically these are fine-tuned, lean variations of the SCM process.

Government projects like space, army, and medical fully adhere to SCM, and they can't do fast-paced development like typical IT projects, because of product criticality (life and death). So there is no one-size-fits-all: some will follow lean SCM, and a few will follow typical CMMI SCM.

Modules of SCM:
Typical SCM modules could occupy a book, but I'll try to explain them briefly. The main modules of SCM are:
Document repository: SCM manages all the project-related documents, e.g., the project plan, test plan, ... Here the process tracks versions/stages of docs, like what changed from draft to approved, doc version numbers, etc. Usually docs are stored in a separate doc repository (e.g., SharePoint), not in source control.

CM Plan: a document that describes overall CM activities: who the CM is, where the source code is located (and its path), the docs repository path, audit plans, and build/release activities. The CM plan might reference the build/release plans. Basically, if any outsider looks into this doc, he or she should be able to figure out the project's CM activities.

Source code management: source control systems like TFS, SVN, and Git are used to store the source code. This is where all the project-related code lives; from here, developers check code in and out.

Build systems: a system/box where code is pulled from source control and a build of the project is done every night, or as soon as a user checks in source code.

Change control: typically consists of product stakeholders, the PM, the CM, ... to control change requests to the project.

Release management: the process where you define how to push source code/binaries, and how to bundle and deploy to prod or QA. It also covers release notes and known bugs/issues.

Audits: check whether users are checking in code with proper comments/issue numbers, code backups are taken, the documented process and the followed process actually match, etc.


Configuration (Release) Manager:
He or she is responsible for overall CM-related activities inside a project. It's a full-time or part-time job, depending on the project's scale. I'll briefly point out the CM's roles and responsibilities:
a)    CM activities start after project initiation
b)    Create the CM plan, coordinating with the PM, tech lead, and project stakeholders
c)    Overall in-charge or admin of source control. The CM decides which access each user will have; only a few team members should have admin access to source control
d)    Create the build plan/system. The plan covers the frequency at which builds should happen, a unique ID for each build, whether to push binaries to a specific location, and many other things
e)    Release management: package creation, release notes, ...
f)     Spreading CM process awareness among the team members, via presentations, demos, ...

Summary:
Like it or hate it, you can't ignore SCM. Waterfall, Agile, Spiral, ... they all follow a CM process one way or another.



Wednesday, October 12, 2016

How web works - simplified



The internet is a network of computers. From the outside it looks very complex, but internally it works on a few simple principles. By principles I mean standards/protocols like HTTP, TCP/IP, and HTML, and also the laws of electromagnetism. (In simple terms, the physics of light is very important for the internet, because all of the data travels at close to the speed of light; otherwise the internet would be very slow. The trick is that every web request or response is converted into binary and sent as wave patterns inside optic cables.)

Before I explain the components of the web, I'd like to mention how standards or protocols help in building such complex systems. A standard like HTTP is nothing but a set of rules through which two systems can interact seamlessly, because both systems know the rules/standard/language. For example: when a traffic light is red we stop, and when it's green we go. In the same way, the web browser and web server both know HTTP and speak its language.

So basically, when you click a link in a web browser, it talks to a remote web server, gets back the data you asked for, and displays it in the browser.

The following are the key components of the web; together they make the internet work. Each of them is big enough to fill a book or more, so I will try to explain each one briefly, and I hope it is easy to understand. There are actually many more components, like DNS, the OSI model, firewalls, and others, involved in making the internet work, but to understand how the web works, the items below are good enough.

HTTP, Transport Layer (TCP/IP), Web Server, ISP, Browser, HTML5 (structure),
CSS3 (presentation), JavaScript (interactivity)

Browser: first things first, the browser is the software/window to the internet through which a user browses the web. How can it talk to a remote server, get data like text, pictures, and video, and display it? The answer is HTTP, HTML, ... The browser is a program built on standards/protocols like HTTP, HTML, CSS, and JavaScript. For example: when the browser sees markup like <b>make me bold</b>, it renders/displays it as "make me bold", so it's not rocket science. All the rules of HTTP, HTML, ... for how to display text, make calls to a remote server, etc., are already coded into the software.

HTML, CSS, JavaScript: these are the languages of the web. HTML structures the data, CSS presents the data, and JavaScript handles interaction or behavior. HTML alone is enough to display data, but it would be very dull and boring; to give the web life, CSS and JS came into existence. When Tim Berners-Lee (legend) invented the web, he just thought of linking documents and accessing them over a network (I am simplifying :)). He had no idea that one day the web would explode into what we see today.

HTTP: HyperText Transfer Protocol, an application protocol. The web browser/client sends an HTTP request to the web server, and the web server returns an HTTP response, with content such as HTML. HTTP specifies different ways to request and respond. Here are the HTTP methods: HEAD, GET, POST, PUT, DELETE, TRACE, ... Don't worry if you don't get them; these are verbs or actions clients request from the server. For example, HEAD asks for just the headers of a resource, GET fetches a resource from the web server, and POST sends data to the server to create or update a resource. This is how the server knows how to respond when a request comes in from a client.
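An HTTP request is plain text. The sketch below builds the text a browser would send for a simple GET using nothing but string formatting; written to a TCP connection on port 80, a web server would understand and answer it.

```python
def build_get_request(host, path):
    # A request line, headers, and a blank line -- that's all HTTP/1.1 needs.
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

request = build_get_request("example.com", "/index.html")
print(request)
```

The first line names the method (GET) and the resource; the Host header says which site you want; the blank line marks the end of the headers. The server's response comes back in the same plain-text shape, with a status line like "HTTP/1.1 200 OK" followed by headers and the HTML body.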

TCP/IP: transporting the HTTP bundle to the web server involves lots of behind-the-scenes actors, like the OSI model (seven layers of networking, defining how data travels from your computer to Ethernet to cable to the computer next to you, or on the other side of the world). Within those layers, TCP/IP is the important one for understanding how an HTTP request gets packetized, addressed, transported, and received. This layer is very important because activities like the three-way handshake and the rest of connection establishment all happen in less than a fraction of a second. (Thanks to the speed of light :))
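You can watch TCP at work using Python's standard socket module: the three-way handshake happens inside create_connection, and after that, bytes arrive reliably and in order. This toy echo example runs entirely on your own machine (localhost):

```python
import socket
import threading

# A tiny TCP echo server on a random free localhost port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    conn, _addr = server.accept()
    conn.sendall(conn.recv(1024))  # send back whatever arrived
    conn.close()

threading.Thread(target=echo_once).start()

# The TCP three-way handshake (SYN, SYN-ACK, ACK) happens here.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello over TCP")
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # -> b'hello over TCP'
```

Everything an HTTP exchange needs (connection setup, reliable in-order delivery) is provided by this layer; HTTP just writes its plain-text request into the connection and reads the response back out.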

ISP: an organization that provides internet access, over a network of thousands of miles of fiber-optic cable spread across continents, or via satellite. All your web requests and responses travel through it.

Web Server: as the name says, it serves web requests. The server is just software, not the computer. Like the web browser, the server is a program coded to understand HTTP methods and respond accordingly to HTTP requests; basically, the server responds with documents, images, videos, etc. The web server may be right next door or across an ocean.
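To make the "server is just software" point concrete, Python ships a tiny web server in its standard library. The sketch below serves one HTML page on localhost and fetches it back in the same script; HelloHandler is a made-up name for this demo:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to any GET with a tiny HTML document.
        body = b"<html><body><b>make me bold</b></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    page = resp.read()
server.shutdown()
server.server_close()
print(page)
```

A handful of lines that understand GET and answer with HTML: that really is all a web server fundamentally is; production servers like Apache or nginx add performance, security, and configuration on top of the same idea.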


Summary: the web/internet is a very exciting place, and it's part and parcel of our lives now. It's a topic worth many books, but I just wanted to connect the dots briefly and present a short overview of the web. I might have missed a few points or skipped some details; that's just for the sake of simplicity. Thanks!