Sunday, December 15, 2013

Scientific Computing: An Intro

Scientific computing is the mathematical and computational basis of numerical simulation. It is used to reconstruct or predict phenomena and processes, especially in science and engineering, often on supercomputers.
It is often called the third way of obtaining knowledge, alongside theory and experiment, and it is

transdisciplinary: mathematics + informatics + a field of application.

Objectives may include:
  • Reconstruct and understand known scenarios (natural disasters)
  • Optimize known scenarios (technical processes)
  • Predict unknown scenarios (like the weather)

One might wonder why we would need numerical simulation at all. There are several good reasons:

1. Experiments are sometimes impossible:
      - Predicting the life cycle of galaxies
      - Weather forecasting
      - Predicting stock markets or economic effects

2. Experiments are sometimes unwelcome:
      - Tests of nuclear weapons
      - Stability of buildings
      - Propagation of harmful substances

3. Experiments are sometimes costly:
      - Car crash tests
      - Aerodynamics
      - Analysis & study of proteins

What's interesting is that people often master one particular tool and then use it to solve complex problems in their discipline. Let's look at some of the most popular tools researchers use for this.


Mathematica is computational software, developed by Wolfram Research, used across many scientific and engineering disciplines. It covers much of what MATLAB offers and also extends to 2D and 3D processing and parallel programming.



MATLAB, short for MATrix LABoratory, is a numerical computing environment, sometimes also called a fourth-generation programming language. It allows plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with other languages, including C, C++, Java, and Fortran.
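
For readers without a MATLAB license, here is a rough sketch of the same matrix-and-plotting workflow in Python with NumPy and matplotlib. It is only an analogy to the MATLAB style, not MATLAB code, and the numbers are made up.

```python
# A MATLAB-flavored workflow sketched in Python: solve a linear system,
# then plot a function and save the figure to a file.
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = np.linalg.solve(A, b)            # solve A x = b, like MATLAB's A\b
print("solution:", x)

t = np.linspace(0, 2 * np.pi, 200)   # plotting, MATLAB's bread and butter
plt.plot(t, np.sin(t), label="sin(t)")
plt.legend()
plt.savefig("sine.png")
```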


With all of the technological innovation happening today, this field of computation will only become more helpful as the problems we face grow more complex. It will be great to see which problems get solved with ongoing technological advancement.


Monday, December 9, 2013

Computer Graphics: It can do Miracles!

Ever wondered how we now get graphics on our mobile devices that rival what gaming consoles delivered just a few years back? Apple keeps increasing its display resolution every year while delivering stunning display quality.

In this post I will give a brief intro to computer graphics and its applications. I won't get into too much detail about anything, but I encourage you to read the links provided. So, coming back to how Apple manages to do that: it's all about pixels, and making them shine without drinking a lot of power. There are many ways to do that, and I will leave the different techniques for you to explore if you are interested.
This link sums up pretty much everything neatly, giving you a brief history into the past as well.

Here are some of the image types we see in day-to-day life that are core to computer graphics:

2D graphics
2D images are used in applications that evolved from traditional drawing and printing technologies.

Pixel art
Pixel art is a widespread form of digital art created with raster graphics software, in which images are edited at the pixel level.



Vector graphics
Vector graphics are complementary to raster graphics: instead of a grid of pixels, they encode the shapes and colors that comprise the image, which allows for flexible rendering.
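
To make the raster-versus-vector distinction concrete, here is a small illustrative sketch in Python. The vector_circle dictionary and the rasterize helper are hypothetical names, not a real graphics API; the point is that the vector form stores the shape itself, so it can be redrawn at any resolution.

```python
# Raster: a fixed grid of pixels. Vector: a description of the shape.

raster_circle = [
    [1 if (x - 4) ** 2 + (y - 4) ** 2 <= 9 else 0 for x in range(9)]
    for y in range(9)
]  # exactly one size: 9x9 pixels

vector_circle = {"cx": 4, "cy": 4, "r": 3}  # the shape, resolution-independent

for row in raster_circle:                   # the raster prints at one size only
    print("".join("#" if p else "." for p in row))

def rasterize(c, size):
    """Render the vector description at any requested resolution."""
    scale = size / 9
    return [
        [1 if (x / scale - c["cx"]) ** 2 + (y / scale - c["cy"]) ** 2 <= c["r"] ** 2
         else 0 for x in range(size)]
        for y in range(size)
    ]

big = rasterize(vector_circle, 18)          # same shape, twice the resolution
print(sum(map(sum, big)), "pixels set at the larger size")
```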



3D graphics
3D graphics, compared with 2D, use a three-dimensional representation of data. Despite many differences, they share many of the same underlying algorithms, and a 3D graphic is essentially a rendered view of a 3D model.



Computer animation
Computer animation is the art of creating moving images with computers. To create the illusion of motion, an image is displayed and then quickly replaced by a new image that is similar to the previous one but slightly shifted. This is much like the illusion of movement in television and motion pictures.
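
As a toy demonstration of that frame-replacement idea, this Python snippet redraws a one-character "sprite" in place, each frame slightly shifted from the last. It is a sketch of the principle, not a real rendering loop.

```python
# Animation as rapid frame replacement: draw, wait, redraw slightly shifted.
import time

WIDTH = 30
for frame in range(WIDTH):
    line = "." * frame + "o" + "." * (WIDTH - 1 - frame)
    print(line, end="\r", flush=True)  # overwrite the previous frame in place
    time.sleep(0.05)                   # roughly 20 frames per second
print()
```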



A list of the different styling techniques, with brief intros, can be found at the following link. For a very good example of the graphics techniques used in games and other animations, see this video, which shows how real-time planet rendering and lighting is done in OpenGL.



So why do we need computer graphics?

   The importance lies in its applications. In engineering fields like automotive and aerospace, it gives designers the ability to quickly visualize newly developed shapes. Before the advent of computer graphics, designers built expensive prototypes and time-consuming clay models; now they iterate interactively with the help of computers.

Medical imaging is another application where computer graphics has proven very valuable; examples include 3D X-rays.

Computer graphics has also expanded the boundaries of art and entertainment. Movies such as Jurassic Park and Avatar are good examples that use computer graphics to test the bounds of imagination.
Virtual reality is fast becoming an indispensable tool in education: flight simulators train pilots for extreme conditions, and surgical simulators train novice surgeons without endangering patients. None of this would be possible without computer graphics.

 And as the industry develops, imagination is the only limit on what can be built.

Saturday, November 30, 2013

Networking: How does the Internet work :/

Have you ever, while surfing Facebook, thought about how the Internet actually works? How am I able to speak with people from across the world within seconds? Or how is it that you give it a name and it fetches the page so quickly? We say the computer understands everything in 0s and 1s... but then how does it know where to go when we type www.google.com?

The answer to all these questions is just two words: computer networking. Networking is a huge topic; it includes protocols, DNS, DHCP, topologies, different networking standards, and so on (I'm sure you know most of the ones I mentioned). For me, it was always intriguing to learn how the Internet actually functions, and to take that interest forward I learned about different routing protocols, which is what I'll be talking about in this post.

Routing protocols are among the most vital aspects of networking: when a packet leaves your machine for the Internet, a route has to be selected for it, and making the correct, shortest route available is the job of these routing protocols. Different protocols suit different network topologies and are used by service providers, for example RIP (Routing Information Protocol), OSPF (Open Shortest Path First), and BGP (Border Gateway Protocol). Just remember one thing: the core infrastructure of the Internet runs BGP as its routing protocol. (This core is a select few networks whose routers carry routes from one country to another across the world; as of 2013 there are six Tier 1 providers in the telecommunications industry: Level 3 Communications, CenturyLink, Vodafone, Sprint, AT&T, and Verizon.) Since BGP is a bit more complex than the other routing protocols, I will not be getting into it here.
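
The "shortest path" in protocols like OSPF ultimately comes down to Dijkstra's algorithm run over a graph of link costs. Here is a minimal sketch in Python; the four-router topology and its costs are invented for illustration.

```python
# Dijkstra's algorithm over a weighted graph of links, the core of
# link-state routing protocols such as OSPF. The topology is made up.
import heapq

def dijkstra(graph, source):
    """Return the cheapest known cost from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

links = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(links, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```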

Another mechanism worth knowing is MPLS (Multi-Protocol Label Switching), the most important forwarding mechanism used alongside IP. Service providers are changing their infrastructure to include MPLS, and with MPLS Traffic Engineering capabilities it becomes much more profitable to deploy.

Since nearly every device today needs to connect to the Internet or some other network to communicate, we have to build routing capabilities into devices you would never think of, from mirrors to poles along the road.

One such protocol I came across recently is DFF (Depth-First Forwarding), proposed for low-power devices on networks whose topology changes frequently. You might ask why we need more protocols when the ones mentioned above have served us for years. The answer is that a conventional routing protocol consumes a lot of power and CPU cycles keeping its routes updated; on power-constrained devices we need a protocol designed around that limitation.

The good thing about DFF is that, where conventional routing protocols drop a packet that fails to reach its destination and resend it later, DFF first tries all the depth-first neighbors before dropping the packet and declaring the neighbor unreachable. This way it checks all the possible paths to a particular network instead of just one. Look at the figure below to get a better understanding.

 
DFF Forwarding
Consider a packet being sent from node 1 to node 4. Node 3 sends the packet to node 4, but suppose the ACK from node 4 is lost. In that case node 3 does not drop the packet and retry later; instead it hands the packet back to the node it came from, which in turn tries all of its other neighbors until it finally decides the destination is not reachable. The real mechanism is not as simple as illustrated: duplicate packets, loops, and other issues have to be handled, which I won't get into in this post. But if you are curious, go ahead and read RFC 6971. There is a lot of research going on in this field and in many other protocols, and it feels good to be a part of it.
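
To make the backtracking idea concrete, here is a toy Python sketch of depth-first forwarding. It is not an implementation of RFC 6971 (which also handles duplicate packets, loop detection, and more); it only shows how a failed branch hands the packet back so the other neighbors can be tried.

```python
# Toy depth-first forwarding: if a neighbor cannot deliver (no ACK), back
# up and try the remaining neighbors before giving up on the packet.

def forward(topology, alive, node, dest, visited=None):
    """Return a delivery path to dest, or None if every branch fails."""
    if visited is None:
        visited = set()
    visited.add(node)
    if node == dest:
        return [node]
    for neighbor in topology[node]:
        if neighbor in visited or neighbor not in alive:
            continue  # skip unresponsive or already-tried nodes
        path = forward(topology, alive, neighbor, dest, visited)
        if path is not None:
            return [node] + path  # this branch reached the destination
    return None  # backtrack: hand the packet back to the previous hop

topology = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
alive = {1, 3, 4}  # pretend node 2 stopped acknowledging packets
print(forward(topology, alive, 1, 4))  # [1, 3, 4], found via the other branch
```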


If we want everything shown in the video below, networking will be a crucial part of making it happen. 



Stay tuned for more!

Thursday, November 28, 2013

Artificial Intelligence: The Best is Yet to come...

Wouldn't it be great to have all your work done by robots? It sounds like a scene from a fictional movie, but researchers around the world are working hard to make that dream possible one day, and given the inventions of recent times, I feel we are not very far from it. By now you will have figured out that I'm talking about Artificial Intelligence.

Artificial Intelligence has been defined as the "study and design of intelligent agents", where an intelligent agent perceives its environment and takes actions that maximize its chances of success. In this blog post I want to show you some amazing inventions that make me believe we are not too far from achieving the impossible. I am sure that after reading this, you will feel the same.

 To start off, IBM has built an artificial intelligence program named Watson, which accesses roughly 200 million pages of information and is able to understand natural language and answer questions. The idea was that Watson's encyclopedic knowledge of medical conditions could aid a human expert in diagnosing illness, as well as contributing computer expertise elsewhere. IBM later announced that it could be used for a wide range of call center, technical support, and technical sales applications. Watch this video of Watson beating the reigning champion.


Intelligent Transportation
We all know about Google's driverless car, which has been driving in and around the Bay Area for quite a bit of time now. Another major invention, and a cheaper one, comes from a computer scientist in Israel who modified his Audi A7 by adding a camera and artificial-intelligence software, enabling the car to drive the 65 km highway between Jerusalem and Tel Aviv. The approach differs from Google's in that it uses cheaper methods to achieve its goal, unlike Google's LIDAR, which is expensive to deploy on production vehicles, as cited by Elon Musk.

Emotional Computing
Currently, at a preschool near the University of California, San Diego, a child-sized robot named Rubi plays with children. It listens to them, speaks to them, and understands their facial expressions. Farther down the road, applications will likely know exactly how people are reacting as a conversation progresses, a step well beyond Siri.

Robotics
A race is already under way to build robots that can walk, open doors, climb ladders, and generally replace humans in hazardous situations. In November, DARPA held a $2 million contest to build a robot that could take the place of humans on battlefields. For the home, companies are designing robots more sophisticated than today's vacuum-cleaner robots. Hoaloha Robotics recently said it plans to build robots for elder care, an idea that, if successful, might make it possible for more of the aging population to live independently.


Given all of this happening, I am eagerly waiting to see the next ground-breaking innovation that will blow our minds!

Saturday, November 16, 2013

Computer Science: History simplified...

     Computer science: it sounds like the scientific study of computers, doesn't it? But Edsger Dijkstra famously said: "Computer science is no more about computers than astronomy is about telescopes." Besides, don't scientists study nature, not machines?
     So what is computer science about? In a word: algorithms. Obsessively inventing, testing, debugging, and improving algorithms. The algorithms might be controlling the brain of a robot, encrypting a massive stock trade, simulating an ecosystem, chasing an avatar through a virtual swamp, attacking a drug lord's computer, or searching a network.

     When I started digging into its history, I found it stretches back to the B.C. era, and I personally feel good to be a part of a discipline that has made, and is still making, a huge change in our day-to-day lives. So I thought I would present you with something very concise, from before the 1900s to today. Hope you like it!

Before 1900
It started with the abacus, and later another device called the Antikythera mechanism, found off an island in Greece. Then came Napier's rods (around 1550-1617), invented of course by John Napier, which simplified the task of multiplication. Of course there were many other inventions before the modern punched card, whose design goes back to Herman Hollerith.

1900 - 1939 The Rise of Mathematics
Computer science has always been about calculating with numbers, and doing as many calculations at a time as possible. This was the era when a great deal of mathematics was discovered, and when the very famous Turing machine was conceived.

1940s
The Second World War brought the era of digital computers and many other inventions whose concepts are still in use today, from ENIAC, EDVAC, and EDSAC to the invention of magnetic core memory. Great cipher machines like Enigma and Purple also left their mark on this era.

1950s
This is the era that defined modern computer science and its concepts. The first ever "bug" was found in 1947. The first FORTRAN compiler was developed in 1957. We got Dijkstra's shortest-path algorithm and the Turing Test.

1960s
Computer science was formally defined as a discipline in this era. The first computer science department was established at Purdue University in 1962 (I'm sure most computer science students, like me, had no clue about that). The first Ph.D. from a computer science department was awarded to Richard Wexelblat, at the University of Pennsylvania, in December 1965.
Operating systems saw major advances. BASIC was developed. The computer mouse was invented in 1968. The first microprocessor was designed at Intel in 1969. ARPANET, a precursor to the Internet, was developed.

1970s
Some great inventions that most people know today came from this era. Database theory saw major advances, such as the relational database. Unix was developed, along with the C language. The era also gave us Pascal, the RISC architecture, the theory of NP-complete problems, supercomputers, Usenet, and RSA.

1980s and 1990s
This period saw the birth of Apple Computer, computer viruses, parallel computers, quantum computing, biological computing, and much more. Over time computers kept getting smaller and smaller, helped along by the birth of nanotechnology.

We have been to space and back. Today we can make burgers without killing animals and print guns in 3D. With such ground-breaking research, only time will show us what is coming next. I'm definitely excited to see what's coming our way.


Let me know what you guys think!




Sunday, November 10, 2013

File Sharing: What's new in this ?


Imagine what the world would be like today if you didn't have access to files (music, pictures, movies, etc.). Would you wait for days until they reached you by a man travelling cross-country? Or, today, would you carry all your favorite media along on removable devices (like CDs) wherever you go? Certainly not. 

We are blessed today by technological innovation that has changed our lives by leaps and bounds: we find almost everything available online no matter where we are. But with more and more data moving to cloud services, we are no longer in control of our data. As the concept of the decentralized web gains traction, more and more people are thinking of ways to change this.

The cause for this is obvious: the number of security flaws and privacy disasters made public has spiked recently. In April 2011, Dropbox changed its security terms of service to state that Dropbox has full access to user data. Similarly, Facebook has changed its privacy terms and conditions year after year, going from a private communication platform to one that shares user information with advertisers and business partners, thereby limiting users' control over their data.

Decentralized Applications:

The most popular of these is Diaspora. The project was started in 2010 by four young programmers from NYU's Courant Institute and raised a record $200,000 on Kickstarter. It has been touted as the "Facebook killer" that gives users control of their data security. It achieves this by having each user run their own Diaspora node: essentially their own Facebook server at home (or anywhere else they prefer). The Diaspora nodes interact with each other to form one distributed social network. Furthermore, instead of logging into one single server, users can choose among many servers administered by different entities. This way they decide whom to trust with their data, and no single entity has full access to it. (source)

Similarly, many other applications have been developed recently; one such is buddycloud. It works somewhat like Diaspora, but the project is also working with the W3C, Mozilla, and the XSF to build a foundation so that all products can soon have a new social layer on top. A simple diagram makes this easy to understand:
buddycloud

This way each user can select which websites to share data with. Isn't that cool? You get to choose whom to share your data with.

Decentralized Storage:
Given the security issues around storing data on public servers whose operators have access to all client data, ownCloud is being developed as a replacement for Dropbox. It allows users to run their own cloud and access their files from all their devices.
Likewise, the Locker Project lets users set up their own hosted server: its software is installed on the client's server and provides features similar to what Dropbox does.

It is exciting to see that so many people feel things have to change and are coming up with ideas and projects to make it happen. I'm sure we will see many exciting things in the future that will change the way we access and store our personal data over the public Internet. 

Saturday, November 2, 2013

Data Structures: What to know about them


All of you must have asked yourself this question in college (at least the computer science students studying data structures): "Will I really use all of these in my professional life?" or "Where will I actually use them?" The answer becomes clear as we get more involved in developing projects and applications where computation time (the time required to complete some task) matters more than anything else. You do not want to wait all day for a result before you can proceed. 

Well, this is where expertise in data structures and algorithms comes into the picture. Data structures are an integral part of any computer science problem: a data structure specifies a way of storing and organizing data in computer memory so that it can be used efficiently. Different applications have different requirements. In some, data retrieval needs to be fast compared to storage; in others, data must be kept in sorted order so that we always retrieve the smallest element. Depending on the requirements of the application or problem at hand, we select the data structure accordingly.

Different data structures take different approaches to storing data in memory and retrieving it. Some favor fast insertion over retrieval, like the linked list; for others it's the opposite. Each data structure has time and space complexities associated with its operations, which estimate how much time and memory an operation will need, because you cannot simply deploy your application and only then check how long things take. These estimates are expressed mathematically, using asymptotic notation such as Big-O, rather than measured after the fact.
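
To see why this matters in practice, here is a quick, unscientific Python sketch comparing a membership test on a list (a linear scan, O(n)) with the same test on a hash-based set (O(1) on average). The size and the timing method are arbitrary.

```python
# Same question ("is this value present?"), two data structures, very
# different costs: the list scans every element, the set hashes once.
import time

N = 200_000
as_list = list(range(N))
as_set = set(as_list)

start = time.perf_counter()
_ = (N - 1) in as_list  # worst case: scans the entire list
list_time = time.perf_counter() - start

start = time.perf_counter()
_ = (N - 1) in as_set   # a single hash lookup
set_time = time.perf_counter() - start

print(f"list: {list_time:.6f}s  set: {set_time:.6f}s")
```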


Well, there are loads of data structures to choose from when solving a problem: heaps, hash tables, trees, linked lists, queues, and so on. For the complete list of data structures, see this wiki.
While writing this blog post, I thought I would cover a data structure I came across recently, called a trie, which is widely used in many applications even though most of us are not aware of it.


The trie shows up in many of the applications we use in our daily lives.

It is used in search engines to store word occurrences for particular URLs, and in routers to match an IP address against a routing table. 
It has many advantages over other data structures for searching, insertion, and deletion, all of which can be found here.
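
Here is a minimal trie sketch in Python, just enough to show how inserting words and checking prefixes works. A production trie, say for longest-prefix matching in a router, would need more than this.

```python
# A minimal trie: each node maps a character to a child node, and a flag
# marks where an inserted word ends.

class TrieNode:
    def __init__(self):
        self.children = {}    # char -> TrieNode
        self.is_word = False  # True if a word ends at this node

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def has_prefix(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True

t = Trie()
for w in ["car", "card", "care"]:
    t.insert(w)
print(t.has_prefix("car"))  # True
print(t.has_prefix("cat"))  # False
```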


It is a fairly simple data structure to understand, yet it is used in complex applications and most of us are unaware of it. I would highly recommend watching this video from IIT Delhi; the concepts and the problems are explained with solutions and reasoning. This link also has some good implementation details if you are interested.
Well, be it for technical interviews or problem solving, my best bet would be to at least know what this data structure does.





Friday, October 25, 2013

Hacked...For You!

Hackers... I find these people really cool. They can do all sorts of things that otherwise are not permitted. The definition of hacking says: "Hacking is the practice of modifying the features of a system, in order to accomplish a goal outside of the creator's original purpose." A person who constantly engages in hacking activities is called a "hacker". It shouldn't come as a surprise that there are indeed two types of hackers:
  • White hat hackers (good ones)
  • Black hat hackers (bad ones, also known as crackers)

While white hat hacking is a hobby for some, others provide their services for a fee. Thus, a white hat hacker may work as a consultant or be a permanent employee on a company's payroll. A good many white hat hackers are former black hat hackers.
You must have heard of the hack when the new iOS 7 was released with fingerprint technology: a small group of people hacked the feature within a couple of days, all while working out of a hotel room in Miami.
The FBI is always on the lookout for such cyber-criminals, and the current most-wanted list can be seen here.

To combat attacks from such hackers, multiple organizations run mock attack days and competitions to expose zero-day vulnerabilities. One such event happened recently, when white hat hackers exposed flaws in the US stock market.

To market their products well, companies also organize such zero-day competitions and offer huge cash prizes to people who expose vulnerabilities in their product. One run by Google can be found here; another, among the most renowned competitions (like the Angel Hackathon), is Pwn2Own, offering prizes from $10,000 to $100,000.

I personally feel that security experts are the real game changers for any product. No matter how awesome your product is, if it can be easily bypassed, that can severely damage it.

Stay tuned for some more technical stuff!


Sunday, October 13, 2013

OpenSource: Who invented this noble idea ?

     I wouldn't have come across "open source" had I not been studying computer science, or technology for that matter. My experience with open source began when I was first introduced to the Linux operating system. I was totally flabbergasted, and astonished to learn that so many people were contributing so much to open source software. 

    So, to learn about it, I decided to write something about it. It all started in the 1980s, when Richard Stallman created the Free Software Foundation to support his idea that the source code should be made available for the software you use or pay for. According to Stallman, rejecting proprietary software and promoting free software should be the ultimate goal; he thought this would promote rather than hinder the progression of technology. (And I must agree, he was totally right.)




 
    With this began the era of free software, which grew into what we now call the open source movement. Going by the numbers today, a large share of the software used in industry is open source. The perfect example is Android, used by millions of people today. As a matter of fact, most servers in data centers run some distribution of Linux suited to their application. Being a software engineer myself, I can say we often choose open source products over proprietary ones.
 
    Open source has many advantages that proprietary software does not. Users of open source software are free to modify it to suit their own choices and requirements, and every piece of software developed under such a license has an open community where you can report bugs, suggest changes, and even make the changes yourself. The whole idea behind using open source software is flexibility and strong community support for the product. 

    One might wonder who has the time to do all this for free. Well, the contributors to such software want to give back to this enormous community, either because it helped them at some point or because they believe in the same ideology as Stallman (if not for them, I definitely do). Stallman believed that "proprietary software is wasteful duplication of system programming" effort that could instead go toward advancing the state of the art. 

    The importance and reach of open source software can be felt when we see executives from companies like Microsoft, one of the pioneers of the proprietary software business, saying in 2001 that "open-source is an intellectual property destroyer. I can't imagine something that could be worse than this for the software business and the intellectual-property business," and later establishing an official open source presence on the Internet. The list of all open source software from Microsoft can be found here. (The list is huge!)

    Finally, I would conclude with a quote from one of our professors at San Jose State, Joel West: "While social change may occur as an unintended by-product of technological change, advocates of new technologies often have promoted them as instruments of positive social change." This explains much of the philosophy that keeps the free software movement alive. To learn about the various open source licenses, you might want to visit this link, and to see the different software available, check this out.

Stay tuned! 

Friday, October 11, 2013

Agile Methodology: For nOObies

     Project management is a very important facet of software development, and we have many different approaches for achieving the desired results. Since I was myself a novice at this terminology, I thought I would delve deeper into it and make some sense of it. Hence this blog post.

So why do we need a Software development framework ?
Wouldn't you like your things to be kept in a systematic yet organized way? Likewise in software development: from the start of the SDLC, the idea has been "to pursue software development in a very deliberate, structured and methodical way, requiring each stage of the life cycle, from the inception of the idea to the delivery of the final system, to be carried out sequentially and rigidly".
Comparing it with keeping your own stuff organized and clean at an individual level (blah blah... don't we hear that so much from our seniors?), it does make sense, right? 

What is it?
A software development framework is used to structure, plan, and control the process of developing an information system. This includes pre-defining the specific artifacts and deliverables that a project team creates and completes to develop or maintain an application. Several approaches typically used for software development today are:
  • Waterfall model
  • Agile development
  • Spiral model 
  • Incremental 
  • Prototyping 
  • Rapid Application development (RAD)
Don't you think, if the objective of all of them is the same result, why on earth do we need so many models? (Grrrr...) Don't worry :) you will soon realize why, like I did.
Not all projects have similar requirements, and different frameworks likewise have their own strengths and weaknesses. Let's put it this way: for different project requirements we have different frameworks.

   To give a brief summary of the past: software development in the 1990s was shaped by two major influences. Internally, object-oriented programming replaced procedural programming as the paradigm favored by some experts; externally, the rise of the Internet and the dot-com boom emphasized speed-to-market and company growth as competitive factors. Rapidly changing requirements demanded shorter life cycles, and were often incompatible with traditional methods of software development.
Didn't I assure you that you would soon realize the need for different software development frameworks?

What is Agile ?
Agile is a group of software development methods based on iterative and incremental development. The methodology provides opportunities to assess the direction of a project throughout the development life cycle. This is achieved through regular cadences of work, known as sprints or iterations, at the end of which teams must present a potentially shippable product increment. By focusing on the repetition of abbreviated work cycles as well as the functional product they yield, agile methodology is described as "iterative" and "incremental." In waterfall, development teams only have one chance to get each aspect of a project right. In an agile paradigm, every aspect of development (requirements, design, and so on) is continually revisited throughout the life cycle. When a team stops and re-evaluates the direction of a project every two weeks, there's always time to steer it in another direction.

The results of this “inspect-and-adapt” approach to development greatly reduce both development costs and time to market. Because teams can develop software at the same time they're gathering requirements, the phenomenon known as “analysis paralysis” is less likely to impede a team from making progress. And because a team's work cycle is limited to two weeks, it gives stakeholders recurring opportunities to calibrate releases for success in the real world. Agile development methodology helps companies build the right product. Instead of committing to market a piece of software that hasn't even been written yet, agile empowers teams to continuously re-plan their release to optimize its value throughout development, allowing them to be as competitive as possible in the marketplace. Development using an agile methodology preserves a product's critical market relevance and ensures a team's work doesn't wind up on a shelf, never released.

Well, that's about it for Agile methodology. But you must also have heard a lot about Scrum. 

Just as the SDLC has many different models, Agile has many different methods:
  • Scrum
  • Extreme Programming 
  • Adaptive software development (ASD) 
  • Dynamic system development method (DSDM)

What is Scrum ?
Scrum is the most popular way of introducing agility, thanks to its simplicity and flexibility. Because of this popularity, many organizations claim to be "doing Scrum" but aren't doing anything close to Scrum as actually defined. Scrum emphasizes empirical feedback, team self-management, and striving to build properly tested product increments within short iterations. Doing Scrum as it's actually defined usually comes into conflict with existing habits at established non-Agile organizations.

Scrum has only three roles: Product Owner, Team, and Scrum Master. These are described in detail by the Scrum Training Series. The responsibilities of the traditional project manager role are split up among these three Scrum roles. Scrum has five meetings: Backlog Grooming (aka Backlog Refinement), Sprint Planning, the Daily Scrum (aka the 15-minute stand-up), the Sprint Review Meeting, and the Sprint Retrospective Meeting.

To know more about Scrum, many books and classes are available from a variety of competing sources of varying accuracy and quality. One place to start is the Scrum Training Series, which takes an entertaining approach to covering the most popular way of introducing Agile to teams. You can also download the six-page illustrated Scrum Reference Card.

I hope that after reading my blog you never make a ("what's that, huh?") expression when hearing or talking about Scrum or Agile. 

Stay tuned for more!

Friday, September 20, 2013

LinkedIn and Branding: The new Lingo in Professionalism


With over 175 million professional subscribers and counting, LinkedIn has become the new platform for branding and selling. From individuals seeking jobs to recruiters seeking them, LinkedIn is a one-stop shop for all. (It's like the Walmart of the professional world.)

There was a time when we had to get in touch with recruiters to apply for jobs and seek references from friends and family to reach them; with LinkedIn, things have literally flipped. Now, if recruiters find what they need in you, they will catch hold of you, no matter where you are. 

I personally feel professionalism is all about marketing yourself (no matter what you do). The way you present yourself to the world reflects a lot about your character and your worth, and our social status reflects a lot about our professional status as well. As I mentioned in my previous post, one employer passed over a suitable candidate just because he found a not-so-decent picture of him on Facebook. (This is just one example of how the social world plays an important role in portraying one's professional character.)

With LinkedIn expanding the services it offers, there is a whole lot of functionality that people are not at all aware of. 

So, here are some tips I would like to share with people out there: 


  1. Connect with like-minded people – This way you will evolve not only socially but professionally as well.
  2. Participate in discussions – This way you know what's being talked about in your areas of interest today, and you get to connect with more people. The more people you know, the more credibility and authority you have in your area of expertise, and the more people you can get your profile in front of. 
  3. Know about companies – It is as important for you to know about them as it is for them to know about you. LinkedIn helps companies get closer to their employees, and job postings appear in a company's group, which makes it easier to see openings at a particular company.
  4. Refactor your profile – Use all the tools provided by LinkedIn, such as SlideShare and Blog Link, to make your profile more competitive, and many more provided here.
I feel LinkedIn is just one piece of the puzzle. You'll want to supplement it with a profile on Facebook, in addition to some technical blogs. It is also important that your LinkedIn profile be consistent with your social profiles. 

The best part is that LinkedIn requires little upkeep, but to leverage it to best effect, you will need to invest some initial time in it.

Friday, September 13, 2013

QR CODES: What are those fragmented chessboard structures you see :/ ?


     Imagine being at a party and having one of these business cards... (Wouldn't people be flabbergasted looking at you and wanting one of those? I would definitely be one of them.)
  
My business card


     Quick-response codes, better known as QR codes, are an innovation from Japan. Unlike the traditional barcode design, a strip of vertical bars, the QR code looks like abstract art (almost like a chessboard in fragmented format). So, in case you are wondering what exactly these abstract-art squares are, I have an answer for you…

      QR codes are small symbols you might find on various retail products on supermarket shelves. A code is basically a link, often a web URL, that helps you pull up information about the product on any device with a camera. It simply requires you to install a QR code reader on your smartphone, which decodes the image into readable, understandable information for the buyer. The QR code is more powerful than conventional barcodes because it can store much more data.
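
If you want to play with this yourself, generating a QR code takes only a couple of lines in Python, assuming the third-party qrcode package (pip install qrcode[pil]). The URL below is just a placeholder.

```python
# Encode a URL as a QR code image; any QR reader app can decode it back.
import qrcode

img = qrcode.make("https://example.com/my-business-card")  # placeholder URL
img.save("business_card_qr.png")
```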


But that's not all: QR codes are contributing in a big way to bridging the gap between the consumer and the seller of a product.

      The very nature of this technology, being so easy to operate, giving access to information almost instantly (hence the name Quick Response), and most importantly being free, has opened doors to many benefits for both seller and consumer.

      Companies can save money on advertising by adopting QR codes. There is an interesting feature with which one can track a QR code, so companies can count the number of people actually using the code and change their marketing strategy accordingly.
I personally feel that with such an inexpensive way of tracking, companies can focus more on marketing their products rather than on cutting costs.

      The usage is not limited to product information either; QR codes can serve as a wonderful medium for advertisements, billboards, business cards, and much more.

      Although many of us haven't experienced this wonderful technology yet, for me, being an avid tech-knowledge fan, it's a piece of innovation I really appreciate.

      So for all you people who want to get a little more tech savvy…
My quick advice:
1) Install a QR code reader on your smartphone
2) Polish your photography skills



And you are all set to hit the supermarket, Scanning....