Working with Management to Ensure the Project's Success

One of the most critical aspects of any project is working with management to ensure the project's success. With this come many concerns that must be addressed. The two most important factors we will address in this discussion are expectations and communication.

The first topic we will discuss is expectations. In order to manage the expectations of the managers within Kucera Clothiers, the project manager overseeing the project must first discover and fully understand those expectations. The PM must then make sure to inform the appropriate managers of the risks involved with the project and assure them that those risks are being addressed. One of the best ways to keep management happy is, first, not to promise anything you are not sure you can deliver. The second is to make sure the team exceeds all expectations once it has met every requirement for the project; in short, "under-promise and over-deliver" (Bool, 2005).

Having managed the expectations of the management staff, we must now control the project and deliver a product that all staff members are excited about using. To accomplish this, the project manager should involve management during the design, implementation, and training phases. This gives the managers involved a sense of ownership in the project and the pride that will be needed to fully support it going forward. The PM will also want to encourage upper management to be enthusiastic about the change and what it can provide the organization and lower management's staff. Next, the PM will want to encourage lower management to sell the idea to their staff.

In conclusion, communication is the key to addressing and delivering on the expectations of the management staff. Meeting and exceeding those expectations, and doing so with confidence so as not to invite unnecessary questions, is the key to delivering a new technology into an existing environment.

Bool, H. (2005, December 16). Change Management and Expectations. Retrieved November 7, 2010, from http://ezinearticles.com/?Change-Management-and-Expectations&id=114517

Virus or Worm

I chose to try Spybot's Search & Destroy. After the install it started to run. The first step it performed was a registry backup. This is a precaution it takes, as much of the spyware it removes is deeply embedded in the registry, and any time you edit or remove items from the registry without using the uninstaller that came with the application, you should back up the registry beforehand. The next thing it performed was an update to the application. Next, it is recommended to run the Immunize piece of the application, which helps prevent spyware from hooking into applications such as IE. Lastly, I ran the Search for Problems piece. The results showed three non-default Windows security settings. These are settings I have changed and do not want Spybot to change back, so I have unchecked those boxes.

The rest of the items that show up are tracking cookies. Tracking cookies are used primarily by online advertisers to keep information about the kinds of things you like to click on while browsing the internet. This information is used for behavioral targeting the next time you visit a site carrying ads that company is responsible for. That way the company can place an ad in front of you that you are more likely to be interested in. This is beneficial to both the end user and the ad company: you are not bombarded with ads you have no interest in, and in turn the ad company increases the effectiveness of its advertising. This is what my company does; I am a Sr. Network Windows Admin for Adknowledge. Below is a screen shot of the results.

The next topic we will discuss is the difference between a virus and a worm. A virus is a piece of code that attaches itself to a file and is then replicated to other files, replicating itself across a system and quite often disabling the system or at least significantly impacting its overall performance. A worm is very similar in that it is a piece of code that replicates; however, a worm tries to find vulnerabilities on the network and enter other systems through those vulnerabilities. Once on a system, it searches the network again for a way to spread. Worms can cause both system performance issues and network performance issues.

The last topic we will discuss is my recommendation to HealthFirst Hospital Foundation with regard to patch management. I would highly recommend that most companies with over 25 PCs look into using Microsoft's WSUS. This service makes it very easy to manage all patches to be installed on your network and can be fully automated if need be. It will result in less internet usage, as the server will be the only system downloading the patches. The clients are then pointed to update from this server instead of the Windows Update website. The server side can control the release of patches to groups of PCs. This is a great help for testing patches before rolling them out to all systems within the organization, possibly preventing a company outage due to a patch conflict with one of the other applications that reside on the PCs. Other applications on the network, such as antivirus applications, will need their own server-side management. For example, Symantec's Endpoint Protection has a server-side piece that will manage and keep up to date all clients on the network. The other nice thing about having such a system is that it can also remotely install and even reinstall the client-side app if need be.

References:

Bird, D., & Harwood, M. (2003). Network+ Exam N10-002. Indianapolis: Que Certification.

http://www.computer-lynx.com/a-virus-or-worm.htm. Retrieved December 14, 2008.

http://technet.microsoft.com/en-us/wsus/default.aspx. Retrieved December 14, 2008.

3 Primary Levels of Database Normalization

Below is a copy of the original table, Grade Report:

Student_ID   Student_Name   Major   Course_ID   Course_Title   Instructor_Name   Instructor_Location   Grade
168300458    Williams       IT      ITS420      Database Imp   Codd              B104                  A
168300458    Williams       IT      ITP310      Programming    Parsons           B317                  B
543291073    Baker          Acctg   ITD320      Database Imp   Codd              B104                  C
543291073    Baker          Acctg   Acct201     Fund Acctg     Miller            H310                  B
543291073    Baker          Acctg   Mktg300     Intro Mktg     Bennett           B212                  A

This table is in an un-normalized form because it has repeating groups. For instance, Student_ID 168300458 appears twice and Student_ID 543291073 appears three times, so there is no single field that can be used as a primary key for this table. We will bring this table to 1NF (first normal form) in a moment, but for now let's look at the dependencies among the attributes of this table.

The functional dependencies are shown below:

  • Student_ID -> Student_Name, Major
  • Course_ID -> Course_Title, Instructor_Name, Instructor_Location
  • Student_ID, Course_ID -> Grade
  • Instructor_Name -> Instructor_Location

The dependencies of the attributes can be shown in the following dependence diagram.

To decompose this table into first normal form (1NF) we will need to expand the primary key to include both Student_ID and Course_ID, creating a composite key. With the expanded primary key we now have five distinct records in the table. The table would now look like this:

Grade Report (Student_ID, Student_Name, Major, Course_ID, Course_Title, Instructor_Name, Instructor_Location, Grade)

Next, in order to get the table from 1NF to 2NF we need to remove all partial dependencies. A partial dependency exists when a non-key attribute depends on only part of a composite key (a key made up of more than one field). So, in short, we will split the table so that each non-key attribute depends on the whole key of its table; only the Grades table will retain the composite key.

Students

Student_ID   Student_Name   Major
168300458    Williams       IT
543291073    Baker          Acctg

Courses

Course_ID   Course_Title   Instructor_Name   Instructor_Location
ITS420      Database Imp   Codd              B104
ITP310      Programming    Parsons           B317
ITD320      Database Imp   Codd              B104
Acct201     Fund Acctg     Miller            H310
Mktg300     Intro Mktg     Bennett           B212

Grades

Student_ID   Course_ID   Grade
168300458    ITS420      A
168300458    ITP310      B
543291073    ITD320      C
543291073    Acct201     B
543291073    Mktg300     A

These tables can also be shown as:

Students (Student_ID, Student_Name, Major)

Courses (Course_ID, Course_Title, Instructor_Name, Instructor_Location)

Grades (Student_ID, Course_ID, Grade)

Foreign key: Student_ID to Students table

Foreign key: Course_ID to Courses table

We still have an issue where a user could enter the Instructor_Name or Instructor_Location incorrectly, creating a data mismatch. This is referred to as a data anomaly. For instance, if a user indicated in one record that instructor Codd was in location B104 and in another record that Codd was in location B317, there would be nothing to prevent it. To prevent this kind of anomaly we will pull Instructor_Name and Instructor_Location out and create a fourth table. In the Courses table I will create a new field called Instructor_ID, which will be a foreign key to the Instructors table, because Instructor_Name alone is too vague and could refer to two instructors with the same name. In that fourth table, Instructor_ID will be the primary key. The tables will now look like this and are in 3NF:

Students

Student_ID   Student_Name   Major
168300458    Williams       IT
543291073    Baker          Acctg

Courses

Course_ID   Course_Title   Instructor_ID
ITS420      Database Imp   1001
ITP310      Programming    1002
ITD320      Database Imp   1001
Acct201     Fund Acctg     1003
Mktg300     Intro Mktg     1004

Grades

Student_ID   Course_ID   Grade
168300458    ITS420      A
168300458    ITP310      B
543291073    ITD320      C
543291073    Acct201     B
543291073    Mktg300     A

Instructors

Instructor_ID   Instructor_Name   Instructor_Location
1001            Codd              B104
1002            Parsons           B317
1003            Miller            H310
1004            Bennett           B212

These tables can also be shown as such:

Students (Student_ID, Student_Name, Major)

Courses (Course_ID, Course_Title, Instructor_ID)

Foreign key: Instructor_ID to Instructors table

Grades (Student_ID, Course_ID, Grade)

Foreign key: Student_ID to Students table

Foreign key: Course_ID to Courses table

Instructors (Instructor_ID, Instructor_Name, Instructor_Location)
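To make the final design concrete, here is a minimal sketch of how the 3NF tables above could be created. The table and column names come straight from the schemas listed; the data types and the use of Python's built-in sqlite3 module are assumptions made purely for illustration.

```python
# Sketch: creating the 3NF tables above in SQLite (table/column names from the
# schemas listed; data types and the use of sqlite3 are illustrative assumptions).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

conn.executescript("""
CREATE TABLE Students (
    Student_ID   INTEGER PRIMARY KEY,
    Student_Name TEXT,
    Major        TEXT
);

CREATE TABLE Instructors (
    Instructor_ID       INTEGER PRIMARY KEY,
    Instructor_Name     TEXT,
    Instructor_Location TEXT
);

CREATE TABLE Courses (
    Course_ID     TEXT PRIMARY KEY,
    Course_Title  TEXT,
    Instructor_ID INTEGER REFERENCES Instructors(Instructor_ID)
);

CREATE TABLE Grades (
    Student_ID INTEGER REFERENCES Students(Student_ID),
    Course_ID  TEXT    REFERENCES Courses(Course_ID),
    Grade      TEXT,
    PRIMARY KEY (Student_ID, Course_ID)   -- one grade per student/course pair
);
""")
```

With this layout, each instructor's location is stored exactly once, so the update anomaly described above cannot occur.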

Below is a Chen model ER diagram of the 3NF tables:

References:

Adamski, J., & Finnegan, K. (2008). Microsoft Office Access 2007. Boston: Course Technology.

http://www.sethi.org/classes/cet415/lab_notes/lab_03.html. Retrieved May 1, 2009.

7 Basic Steps to Designing a Relational Database

There are seven basic steps to designing a relational database. There are several other steps that could be included in the database implementation and the database life cycle, such as research, rollout and maintenance. However, I will be focusing strictly on the database design.

 

The seven basic steps are as follows. First, determine the purpose of your system. Next, determine the entities for your system; these entities will become your table names. Next, decide which attributes of those entities are necessary for the purpose of the system; these attributes will become the fields within the tables. Next, determine which attributes/field names will identify each record/entity as unique; this field (or combination of fields) will make up the primary key for each table. After you have determined the primary keys for the tables, you will need to define the relationships between the tables. The best way to do this is to follow the standard steps of table normalization and, at a minimum, get the tables to 3NF; the process of normalization is the sixth step in the database design. The seventh, and final, step in the database design is the actual data population.

 

In short, the key steps are:

  1. Determine the purpose of the system
  2. Determine what entities/tables will be included
  3. Determine what attributes/fields will be included
  4. Identify unique fields (primary keys)
  5. Determine relationships between tables
  6. Refine the design (normalization)
  7. Populate the tables with raw data

 

References:

http://www.databasedev.co.uk/database_design_requirements.html. Retrieved May 1, 2009.

Adamski, J., & Finnegan, K. (2008). Microsoft Office Access 2007. Boston: Course Technology.

 

Database Table Relationships

The first type of relationship we will discuss is the one-to-one relationship. A one-to-one relationship is one in which an entity within one table references one and only one entity within another table. For instance, within most offices you will find users that have laptops. This is typically a one-to-one relationship: one user per laptop. Users do not typically share laptops the way they may share standard desktops. If represented in a relational database, the users would be in one table and the laptops in another. This relationship can be seen as follows:

 

Users (User_ID, First_Name, Last_Name, Phone, Service_Tag)

Foreign key: Service_Tag to Laptops table

Laptops (Service_Tag, Brand, Model, CPU, Memory)

 

The second type of relationship you will find within a relational database is the one-to-many relationship. Whether it is one-to-many or many-to-one is a matter of perspective. With a one-to-many relationship, one entity within one table relates to many entities within another table. My example here is something you will commonly find within a small office of 5-10 users. Typically a small office will have only one networked printer, and many users will use that printer. This relationship can be shown in the following diagram.

 

Users (User_ID, First_Name, Last_Name, Phone, PrinterID)

Foreign key: PrinterID to Printers table

Printers (PrinterID, Brand, Model, Location)

The last relationship I will show is the many-to-many relationship. With a many-to-many relationship, many entities within one table relate to many entities in another table. To model this properly you need what is called an intersection table, junction table or link table; I personally use the term link table. The link table splits the many-to-many relationship into two one-to-many relationships, which greatly simplifies the logic within the database and can help prevent issues later down the road. It also helps enforce referential integrity. Let's take the previous example of the small office with a single printer and many users and expand it to a larger office with many printers. For instance, in an office of 100 users there is likely to be at least one printer for every 20 or 25 users. In addition, each user may print to more than one printer depending on what they want to print; some printers may be color while others print only black and white. This example can be shown as such:

 

Users (User_ID, First_Name, Last_Name, Phone)

PrinterUserLink (User_ID, PrinterID)

Foreign key: User_ID to Users table

Foreign key: PrinterID to Printers table

Printers (PrinterID, Brand, Model, Location)
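As an illustration of the link-table design, here is a minimal sketch using the Users, PrinterUserLink, and Printers schemas above. The data types and the use of Python's sqlite3 module are assumptions; the composite primary key on the link table is one common way to keep each user/printer pairing unique.

```python
# Sketch: the many-to-many Users/Printers design above, using a link table.
# Table and column names come from the schemas listed; types are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.executescript("""
CREATE TABLE Users (
    User_ID    INTEGER PRIMARY KEY,
    First_Name TEXT,
    Last_Name  TEXT,
    Phone      TEXT
);

CREATE TABLE Printers (
    PrinterID INTEGER PRIMARY KEY,
    Brand     TEXT,
    Model     TEXT,
    Location  TEXT
);

CREATE TABLE PrinterUserLink (
    User_ID   INTEGER REFERENCES Users(User_ID),
    PrinterID INTEGER REFERENCES Printers(PrinterID),
    PRIMARY KEY (User_ID, PrinterID)   -- each user/printer pair listed once
);
""")
```

Each row in PrinterUserLink records that one user can print to one printer, so many users can share many printers without duplicating user or printer details.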

 

References:

Adamski, J., & Finnegan, K. (2008). Microsoft Office Access 2007. Boston: Course Technology.

 

Project Management Risks

There are numerous risks that have the potential to cause a project to be looked upon as a failure. The primary categories of these risks according to the PMBOK are Technical, External, Organizational, and Project Management (Project Management Institute, 2008, p. 280). Within each of these primary categories reside many subcategories. Within the Technical risk category we have the subtypes of Requirements, Technology, Complexity and Interfaces, Performance and Reliability, and Quality. Within the External risk category we find the subtypes of Subcontractors, Regulatory, Market, Customer, and Weather. Under the Organizational category we find subtypes such as Project Dependencies, Resources, Funding, and Prioritization. Lastly, within the Project Management category we find the subtypes of Estimating, Planning, Controlling and Communication (Project Management Institute, 2008, p. 280). There are several other categories and subcategories that you may find within other Risk Breakdown Structures (RBS); however, this is the list that the Project Management Institute has deemed noteworthy for general project management risks. Creating an RBS is almost a necessity for any project: it reminds those involved which risk identification areas to think about (Ritter, 2008). Now let's describe these a little more.

One of the primary risk categories is technical risk. These include risks such as the technology itself not being able to perform the way we had expected. Another example would be technology that is required to complete a project, such as a router needed to set up a network, arriving dead on arrival at the new site. Yet another example within this category would be a system not being able to meet all the requirements expected and needed to complete the project.

External factors may, and should, also be viewed as risks. These include factors such as subcontractors not showing up for work or not having the real experience and knowledge needed to fulfill the requirements of the project in a timely manner. Another example, which I personally ran into two weeks ago during a site move in NYC, is that in NYC there are many property codes that need to be signed off by large property managers who move very slowly. Working with them in a timely manner often requires knowing the right people who can get the job done. Another factor that falls into this category, and that we could have faced during this move, is weather conditions. It is often not a good idea to move computer equipment between sites in the rain; luckily, it did not rain until the day after the move.

The next category we will discuss in more depth is that of Organizational risks. An example of an organizational risk that many of us find within projects is needing a deliverable from one project for the current project. If that project is running behind on its timeline, it will affect your project's timeline unless it is properly accounted for. Another example within this category is funding. A manager or CEO may decide, due to the poor financial status of the company, that your project now receives only 70% of the projected funds, and yet you are still required to produce the same quality of results. Most often, accomplishing this requires taking more time.

The last main category that the PMBOK discusses is that of project management itself. This category is a big one; if it is not properly accounted for, the project may be doomed from the start. Within this category we find tasks such as estimating budgets, time, materials, and so on. We also have tasks such as controlling and monitoring the status of the project and its scope. Proper planning of these and other tasks, as well as proper communication with the project team members and stakeholders, are some of the most critical subcategories we find within this category.

These risks need to be managed, first, by accounting for them. Second, after you have accounted for the risks, you need to assess the potential threat they bring to the project by ranking them, first by how likely they are to occur and second by how much impact they will have on the project if they do occur. Then we create contingency plans for those risks that are more likely to occur and more likely to cause a great deal of trouble for the project if they occur.

There are many methods of dealing with risk. The first is to avoid the risk, which typically requires changing the project plan. If we cannot fully avoid the risk, we may wish at minimum to mitigate it so that it is less likely to occur or less likely to fully disrupt the project. Another option is to transfer that risk to an outside organization via consultants. In my opinion this method is not always a good option, as the end result may still be that you are without a completed project, and you no longer have control over those consultants. The last option is not to avoid, mitigate or transfer the risk, but simply to accept it and plan for it should it happen (Project Management Institute, 2008, p. 304).

Project Management Institute. (2008). A Guide To The Project Management Body of Knowledge. Newtown Square: Project Management Institute, Inc.

Ritter, D. (2008, October 11). Risk Breakdown Structure (RBS). Retrieved August 22, 2010, from Project Management: http://certifedpmp.wordpress.com/2008/10/11/risk-breakdown-structure-rbs/

Cost Controlling Measures and Cost Forecasting

Cost Controlling Measures

A project can be made more successful with the help of typical project controlling measures.  According to Crump (2010), these measures need to be incorporated into the planning and execution of the project.  Tracking, reviewing and regulating the progress and performance of the project are some standard processes that are included in the “monitoring and controlling” process group.  These processes will identify necessary changes and initiate those changes in the project plan.

The performance of the project is monitored and assessed frequently during the monitoring and controlling processes, which can help in identifying issues in advance.  The ongoing regular monitoring provides the team and project manager with data concerning the status of the project and identifies areas where further data is required.  Besides monitoring and controlling the work being completed during each project phase, the “monitoring and controlling” process group monitors and controls the entire project.

Each project is required to be checked thoroughly at recurring points all through the lifecycle to make sure that the necessary technical performance takes place on schedule and within the budget that was approved.  Measures that aid in controlling cost, schedule and quality are developed by project managers to gauge the success of a project.  Formal and informal reports are assembled from the results of these measurements.  The project team and senior management use the reports to spot differences between planned performance and actual performance and identify the causes. Actions are then taken to limit the effects of these variations on the project’s resources, schedule or budget.

All projects consist of three key factors: time (schedule), cost, and quality.  Time is set by the start and end dates, cost is bounded by the budget of the project, and quality is prescribed by the client.  According to Dube (1999), project managers use controlling measures to make sure the project stays within these limits, which are known as the "triangle of objectives".

Project schedules can be controlled through the use of effective tools such as Gantt charts which provide a clear picture of the project’s current state.  These charts are easily understood despite the huge amounts of information that they contain.  Gantt charts require updating frequently but are maintained easily as long as major revisions to the schedule are not made or task requirements are not changed.  The use of milestone reporting is another way of controlling a project’s schedule.  A milestone is a key event that leads to achieving an objective of the project.  The life of the project is provided in a sequential list of critical events.  Control measures for project schedules are developed during the planning phase.  These schedules are monitored and updated accordingly during the execution phase of the project.

Concerning controlling measures, many managers consider the associated project costs to be more vital than the schedule, notes Dube (1999).  Senior management requires appropriate cost status reports.  Throughout the planning stage, status reports take the form of developing estimates for project cost, which are added to the initial budget of the project.  When the project begins, the data can be forwarded as part of cost schedule or cost control reports.  The team can develop cost-minimizing reports if it considers crashing or accelerating the project.

Cost scheduling is another controlling measure.  During the planning phase of the project, the planned costs are matched to the schedule of the project through cost reports.  This allows financial officers to construct a financial plan so that appropriate funds can be made available to support the requirements of the project (Dube, 1999).

During the execution phase, cost control compares scheduled expenses to actual expenses using cost control reports.  The report's function is to predict or spot probable cost overruns.  When a cost overrun is expected, a request for additional funds is sent as soon as possible to senior management.  If obtaining the additional funding cannot be done quickly and the cost overrun is greater than the financial tolerance of the project, additional financial obligations should not be made until a comprehensive project cost analysis is completed.  Dube (1999) notes that although this halt in financial commitments may look somewhat harsh, it is the best approach, since it prevents project bankruptcy, which would make completing the project impossible.

Another controlling measure used during planning or execution is cost minimizing.  Cost minimizing is shortening the project’s life with the smallest possible additional expense.  The project team determines if a project should be crashed or accelerated by first considering the cost of extra equipment, manpower and overtime.  Dube (1999) states the cost of getting the work done using the existing schedule will be compared to the “crashing” cost by the project team.

Throughout the project, quality control is also performed.  These controlling measures monitor and record the execution results of quality activities in order to assess performance and recommend required changes (Kerzner, 2009).

Overall, due to the complexity and cost of projects, organizations need a system of control that can efficiently measure the progress of the project.  Selecting effective controlling measures involves contributions from the senior management and project team.  The organization’s goals should be reflected by these measures.  Status reports, which detail the cost, schedule, and project activity’s performance, are pulled together from the result of these measures.  Detailed scheduling and cost control techniques will assist organizations to analyze potential delays or cost overruns during execution to allow for corrections before it is too late.

Forecasting Techniques

In order to plan the budget for a project, many costs are not known in advance.  Forecasting is used in order to estimate what these costs will be.  Forecasts can never be completely accurate, but can provide a good idea of what to expect if the appropriate techniques are used.  Some forecasting techniques are less accurate than others but are used to start off planning.  When more detailed data becomes available, more accurate forecasting techniques can be used.  What follows is a survey of forecasting techniques and their benefits and limitations.  The techniques are divided into four major areas: ad hoc, numerical/statistical, simulation, and earned value analysis.

Ad Hoc Forecasting Techniques

Ad hoc forecasting techniques are simple to implement and available without detailed data.  The limitations of ad hoc techniques are that the estimates are usually only accurate to within plus or minus fifteen to thirty-five percent of the project scope.  They can be biased and subjective, making them suspect and less reliable than techniques that use hard data (Kerzner, 2009).

There are several ad hoc forecasting techniques.  The following give a short description of a few of these techniques:

  1. Rough Order of Magnitude (ROM) – A very rough estimate using techniques such as expert judgment, analogous and three point estimates.
  2. Expert Judgment – Using experience to come up with an estimate.
  3. Rule of Thumb – Personal rules from experience are used to compare with estimates.  If the difference between rule of thumb and estimate is large, then the estimate and the underlying assumptions need to be re-examined (Pandian, 2003).  They provide a sanity check.
  4. Analogous – Using projects of a similar scope and capacity to come up with a rough estimate.
  5. Three point estimates – Making a most likely, an optimistic, and a pessimistic estimate, multiplying the most likely estimate by four, adding all three, and then dividing the total by six to come up with a weighted average.  Using this equation helps to remove bias from the most likely estimate (Pandian, 2003).  A short worked sketch follows this list.
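For illustration, the three-point calculation described in item 5 can be written as a small function; the task figures used below are assumptions, not values from the text.

```python
# Three-point (PERT-style) estimate: (optimistic + 4 * most_likely + pessimistic) / 6
def three_point_estimate(optimistic, most_likely, pessimistic):
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Illustrative figures only: best case 5 days, most likely 8, worst case 14.
print(three_point_estimate(5, 8, 14))  # -> 8.5 days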

Numerical/Statistical Forecasting Techniques

Numerical and statistical forecasting techniques use hard data, usually historical data, for making a forecast.  Although these techniques provide some of the most accurate forecasts, they have problems as well.  These forecasting techniques require detailed information and they can be expensive and time consuming.  Oftentimes relationships between different variables have to be determined (Kerzner, 2009).  If an important relationship is not identified or if the details of the relationship are incorrect, then the forecast will be less accurate.

There are several numerical and statistical forecasting techniques.  The following give a short description of a few of these techniques:

  1. Moving average – Taking the value of a chosen number of previous periods to use as an estimate for the next period (AIU Online, 2010).
  2. Weighted average – The same as moving average except giving greater weight to more recent periods (AIU Online, 2010).
  3. Linear regression (Least Squares) – A mathematical algorithm uses past data to fit a line showing the direction of the trend.  The line can then be carried forward to create a forecast (AIU Online, 2010); a small sketch of this appears after this list.
  4. Exponential Smoothing – Combines the most recent actual figure with the previous period's forecast in order to forecast an upcoming period (AIU Online, 2010).
  5. Regression analysis – Equations that are used to analyze the relationship between a dependent variable and one or more independent variables.  One of the independent variables can be changed to see how it affects the dependent variable.  For example, how does the cost relate to the size of the house if we change from using laminate to wood flooring?  Using only one independent variable is not very realistic because reality is not two-dimensional, so multiple independent variables are desirable.
  6. Time series – Making use of the fact that recent history is a good predictor of the near future (AIU Online, 2010).
  7. Trend analysis – Finding trends in historic data that can be used to make forecasts.
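As a minimal sketch of the linear regression (least squares) and trend ideas in items 3, 6 and 7, the following fits a straight line to past periods and projects it one period ahead. The monthly cost figures are illustrative assumptions.

```python
# Minimal least-squares trend line over equally spaced periods, then a
# one-period-ahead forecast (illustrative monthly cost data).
def linear_forecast(values, periods_ahead=1):
    n = len(values)
    xs = range(n)
    sum_x = sum(xs)
    sum_y = sum(values)
    sum_xy = sum(x * y for x, y in zip(xs, values))
    sum_x2 = sum(x * x for x in xs)
    slope = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
    intercept = (sum_y - slope * sum_x) / n
    return intercept + slope * (n - 1 + periods_ahead)

costs = [10_000, 11_200, 12_100, 13_050]   # assumed past monthly costs
print(round(linear_forecast(costs)))        # trend-based forecast for next month
```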

Simulation Forecasting Techniques

Simulation forecasting techniques allow users to change various conditions and factors about a specific event or situation to arrive at a prediction of future conditions.  These simulation techniques range from simple to complex model systems.  The main limitation of simulations is that if the initial assumptions are incorrect, the errors will be magnified as the simulation is run (Walonick, 1993).

There are several simulation forecasting techniques.  The following give a short description of a few of these techniques:

  1. Monte Carlo Simulation – It is also known as probability simulation.  A simulation is run over and over again until the probability that any particular outcome will occur can be determined (What is Monte Carlo, n.d.).
  2. Mathematical Analog – Using an equation to simulate a project’s costs (Walonick, 1993).

Earned Value Analysis Forecasting Techniques

In Earned Value Analysis, earned value (EV) is a number that is equal to what was budgeted for work that has already been performed.  EV is used in conjunction with other values, such as planned value, actual cost, and the baseline of the project, to determine if the project is on budget or not and if corrective action needs to be taken (Kerzner, 2009).  The main limitation of earned value analysis is that the budget must be close to reality or you will always be off track.

Individual Forecasting Techniques

Different classes of forecasting techniques are used as a project progresses through planning all the way to closing.  Ad hoc methods are used early in a project before detailed data is available.  As a project moves through planning, numerical/statistical and simulation forecasting techniques are used to produce more accurate forecasts as sufficiently detailed data becomes available.  During the execution phase of the project earned value analysis is used to keep a project on track and again during the closing phase to provide performance reviews and variance analysis to improve future projects.

An example from each class of forecasting follows with the details for implementing that technique.  Rough order of magnitude is used to represent ad hoc methods.  For the numerical/statistical class exponential smoothing is detailed.   Monte Carlo simulation is used to represent simulation forecasting techniques.  Finally, how to use earned value analysis is covered.

Rough Order of Magnitude

Rough order of magnitude, or ROM, is just a fancy way of saying "ballpark estimate" (Business Dictionary, 2010).  Project managers might not have hard data to produce an accurate budget during the initiation phase of a project.  Therefore, rough estimates must be taken in order to determine roughly how much money is needed to complete the project.  Estimates can be fairly accurate if the information gathered for the estimate came from previous projects.  ROM is a tool that is used to estimate future application development lifecycle costs based on past projects (IBM, 2006).

A rough order of magnitude can be calculated using a variety of tools and techniques.  The following give a description of a few tools and techniques:

  1. Expert judgment – A well-seasoned project manager will have plenty of experience working on various projects.  This project manager will be able to provide information on cost based on prior similar projects.  This cost estimate will be more reliable than one from someone who has not worked on many projects.
  2. Analogous estimating – If a project to build a mile-long road costs $10,000, chances are a mile-long road will cost about the same in another area.  This method of estimating takes information from past projects such as scope, cost, budget, and duration to calculate how much the current project will cost (PMBOK, 2008).
  3. Three-point estimates – This method of calculating a rough estimate originated with the PERT program (PMBOK, 2008).  It takes the most likely cost, the optimistic cost (best case scenario), and the pessimistic cost (worst case scenario) and calculates the expected cost by adding the optimistic cost, the pessimistic cost, and four times the most likely cost, then dividing the total by six.  This gives project managers a range of estimates to work with.
  4. Vendor bid analysis – All projects will need resources that should be known by the project manager.  The project manager can then get a ballpark estimate of the project based on what the selected vendor will charge for the resources needed.
  5. Project management estimating software – When all else fails, rely on software to get the job done.  Many companies are concerned with cost and have developed software to perform calculations to come up with the most cost-effective solutions.  So when time is short and a cost estimate is needed, find the best cost-estimating software.

A rough order of magnitude estimate is a tool that is used by project managers to get an idea of how much the project will cost.  Prior experience and historical data will play a role in ROM.  The more experience a project manager has and the more data available from similar projects, the better the chances of creating an accurate estimate.  Estimates with greater accuracy using hard data are used as more information becomes available, but ROM provides a starting point for project managers to begin planning.

Exponential Smoothing

Within the category of numerical/statistical forecasting, we find many techniques such as weighted smoothing, linear regression, curve fitting, decomposition and exponential smoothing. All of these methods look at the trends of the past and then use mathematical techniques to attempt to predict the future. In order to perform these mathematical techniques, we commonly assume that whatever caused the trend of the past will still be at play with a similar strength in the future. This is typically a good assumption when forecasting the near future but as the forecasted time increases, it becomes less true (Walonick, 1993).

In order to fully explain the statistical forecasting technique of Exponential Smoothing, we must first discuss the subject of time series. A time series is simply any set of data collected over a given period of time; whenever a person uses the term "trend", they are discussing a time series. For the review of time series charts to be truly effective, and not simply a spiky random chart that is almost unreadable, we must use some method to "smooth" the chart out so that we may see true trends within the data. To complete this task, we often use techniques such as floating averages and Exponential Smoothing. With these techniques, in short, we attempt to fill in the values that could plausibly lie between the data points that were collected. A few of the downsides of techniques other than Exponential Smoothing are that other methods do not always take into account the first and last points in the data, the calculations are tedious and not very easy to change, and a floating average makes it difficult to truly pull out a trend. Exponential Smoothing addresses each of these limitations found in other statistical forecasting methods (Janert, 2006).

Exponential Smoothing can be found in three different forms. The first is referred to as single, the second as double or Holt, and the third is commonly known as triple or Holt-Winters Exponential Smoothing. The first method produces a smoothed approximation of noisy data. The second method also allows a linear trend to be extracted. The third form additionally takes into account regularly recurring changes within the data (common seasonal trends) (Janert, 2006). In other words, if the client regularly uses such data to predict the project's funding needs over a given time and has seasonal data that could contribute to adjusting the predictions, then the third form of Exponential Smoothing may be the technique best suited for the analysis. The basic equations for this method are given below (Engineering Statistics Handbook, 2010):
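For reference, the triple (Holt-Winters) form given in the Engineering Statistics Handbook cited above can be written as follows, where the additional symbols α, β, and γ are the smoothing constants and L is the length of the seasonal cycle:

\[
\begin{aligned}
S_t &= \alpha\,\frac{y_t}{I_{t-L}} + (1-\alpha)\,(S_{t-1} + b_{t-1}) \\
b_t &= \gamma\,(S_t - S_{t-1}) + (1-\gamma)\,b_{t-1} \\
I_t &= \beta\,\frac{y_t}{S_t} + (1-\beta)\,I_{t-L} \\
F_{t+m} &= (S_t + m\,b_t)\,I_{t-L+m}
\end{aligned}
\]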

 

  • y is the observation
  • S is the smoothed observation
  • b is the trend factor
  • I is the seasonal index
  • F is the forecast at m periods ahead
  • t is an index denoting a time period

Exponential Smoothing is also easy to adjust, and as such, it is easy to produce many similar forecasts within a short time frame (Statistical Forecasting, 2006). In conclusion, Exponential Smoothing assists in creating forecasts by combining the actual data with the forecasted data of the prior period, and it can be changed quickly on the fly. This forecasting technique is one that should commonly be considered when attempting to forecast inventory control and material scheduling needs.
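As a minimal sketch of the single form (the double and triple forms add the trend and seasonal equations shown earlier), each smoothed value blends the latest observation with the previous smoothed value. The smoothing constant and the monthly figures below are assumptions for illustration.

```python
# Minimal single exponential smoothing: each forecast blends the latest
# observation with the previous forecast. alpha and the data are assumptions.
def exponential_smoothing(observations, alpha=0.3):
    forecasts = [observations[0]]              # seed with the first observation
    for y in observations[1:]:
        forecasts.append(alpha * y + (1 - alpha) * forecasts[-1])
    return forecasts

monthly_spend = [42, 45, 49, 47, 52, 55]       # illustrative figures
print(exponential_smoothing(monthly_spend)[-1])  # smoothed value used as the next-period forecast
```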

Monte Carlo Simulation

When forecasting and attempting to predict the future needs of the project, assumptions need to be made, for instance about the cost of the project over time or the number of man-hours needed for a task. By utilizing previously collected data and introducing your own experience into the estimate, you may be able to increase the accuracy of that estimate. However, there are other variables that have not been taken into account, including risk. In order to accurately incorporate this risk and produce a good probability that your estimates are as accurate as possible, the project manager must estimate based on a range of possible outcomes. By utilizing this range of possible outcomes, the PM can develop a more complete view of what may happen in the future. When creating a forecast using a range of estimates, the total estimate will also be a range. For instance, if for each task within the project we produce minimum, maximum and most likely estimates of time or budgetary needs, the total for the project will also include a minimum, maximum and most likely estimate of the time and budgetary needs (RiskAMP.com, 2010).

The Monte Carlo method was founded by the government during the creation of atomic weapons. John von Neumann found that if many random samples are compiled together, they create a picture that is not random at all and is quite accurate (Vargas, 2004). The form of risk that the Monte Carlo simulation attempts to make clear is uncertainty within the project. In short, the Monte Carlo simulation forecasting technique is used to let you, the project manager, know how likely a given result is for a project timeline, budget, and so on. The method works like this. First, the project manager estimates the minimum time frame for a task within the project. Then, the PM estimates, based on experience, the maximum time that task could take. Next, he or she decides, based on experience, the most likely time it will take. He or she repeats this process for all tasks within the project and then simply adds up the minimums; this is the minimum time for the project. The PM then does the same for the maximum and the most likely time frames to get the total maximum time and the most likely time it will take to complete the full project. Now we know the project's minimum time for completion and what the maximum time frame for completion could be. Here is where the Monte Carlo simulation really comes into play. The simulation is most often completed by a computer that repeats the estimation, drawing values within these ranges, up to several hundred times. As the number of iterations increases, we become more comfortable that the estimations are accurate and we can state with relative certainty that our estimates are as accurate as possible (RiskAMP.com, 2010).
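A minimal sketch of the process described above, assuming three illustrative tasks with minimum, most likely, and maximum durations, and using Python's random.triangular to draw each iteration's values:

```python
# Sketch of the Monte Carlo approach described above: each task gets a
# minimum, most likely and maximum duration; each iteration draws one value
# per task and sums them; the spread of the totals is examined afterwards.
# Task names and figures are illustrative assumptions.
import random

tasks = {                       # (minimum, most likely, maximum) in days
    "design": (5, 8, 14),
    "build":  (10, 15, 25),
    "test":   (4, 6, 12),
}

def simulate(iterations=1000):
    totals = []
    for _ in range(iterations):
        totals.append(sum(random.triangular(lo, hi, mode)
                          for lo, mode, hi in tasks.values()))
    return sorted(totals)

totals = simulate()
print("10th percentile:", round(totals[len(totals) // 10]))
print("median:         ", round(totals[len(totals) // 2]))
print("90th percentile:", round(totals[9 * len(totals) // 10]))
```

Reading off the percentiles of the simulated totals gives the project manager a sense of how likely any given schedule or budget figure is, rather than a single point estimate.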

Earned Value Analysis

Earned value (EV) analysis is a time series analysis that is used for forecasting project performance (Haughey, 2010).  EV analysis is used to determine whether the project is going according to plan or getting off track.  In addition, EV analysis will give project managers an indication of how much money and time have been spent in comparison to where the project is in terms of progress.  This is particularly important when it comes to preventing the project from running out of either time or money.

In order for the project manager to create an earned value analysis model, he will have to first create a Work Breakdown Structure and assign a budget to each work package to the lowest level (Haughey, 2010).  The project manager must then create a schedule showing how long it will take to complete the project.  As a section is completed, the work is earned and values are placed on the schedule.  When a section of the project is completed, the work completed is compared to the schedule that was created.  At this point, the project manager can determine if the project is on target or deviating from the plan.  Project managers can then make the necessary decisions early enough in the project before it is too late to put the project back on track.

Earned value analysis requires a little more than just comparing a task to the schedule.  There are mathematical equations to help project managers complete an earned value analysis.  Rupen Sharma (2010) states that the following should be used for an earned value analysis:

  1. Estimate at completion (EAC) – determines the total cost of the project after completion based on the current progress of the project.
  2. Estimate to complete (ETC) – determines how much money will be needed to complete the project based on the current progress of the project.
  3. Variance at completion (VAC) – compares the total cost at the end of the project to the project budget.
  4. To-Complete Performance Index (TCPI) based on budget at completion (BAC) – determines the cost performance that must be achieved on the remaining work in order to meet the original budget goal.
  5. TCPI based on EAC – determines the cost performance that must be achieved on the remaining work to meet the revised estimate at completion, based on project performance to date.

Now to put all this into a scenario to make sense of it all, let’s say there is a company developing a project known as Project X.  The planned budget for Project X is $10,000 a month for eight months.  During some analysis, the project manager notices that the project is 30% complete and it has consumed $40,000 after just two months.  This does not automatically mean the project is spending too much since the first few months may require more expenditures than the final months.  The project manager must then perform a few calculations to find the planned value and the earned value by first finding the BAC, actual cost (AC), planned completion, and actual completion so it can be determined if the project is on budget or not.

The BAC will simply be the planned budget of $10,000 per month for 8 months which equals $80,000.  The actual cost so far in the project after two months is $40,000.  The planned completion is the months completed so far divided by the total planned months.  So this will be 2/8 = 0.25 or 25% of the project completed.  The actual project completed so far is 30%.  With all this information, the project manager can then find the planned value, earned value, and cost performance index.

Planned value (after two months) = 25% * $80,000 = $20,000

Earned value (after two months) = 30% * $80,000 = $24,000

Cost performance index (earned value divided by actual cost) = $24,000/$40,000 = 0.6

If the analysis is stopped at this point, the project manager can see that the project will be over budget.  To get precise numbers for just how over budget the project is, the project manager will then have to find the EAC, ETC, and VAC at a minimum.

Estimate at completion (EAC) – $80,000/0.6 = $133,333.  This amount represents what the project will cost at the end.

Estimate to complete (ETC) – $133,333 – $40,000 = $93,333.  Since $40,000 has been spent so far on the project, it will require another $93,333 to complete.

Variance at completion (VAC) – $80,000 – $133,333 = -$53,333.  This is the difference between the planned budget and the forecast cost of the project at completion.
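The Project X figures above can be reproduced with a short calculation, using the standard relationships CPI = EV / AC, EAC = BAC / CPI, ETC = EAC − AC, and VAC = BAC − EAC:

```python
# Re-running the Project X example above (figures taken from the text).
bac = 80_000                     # budget at completion: $10,000 x 8 months
ac = 40_000                      # actual cost after two months
planned_complete = 2 / 8         # 25% of the schedule elapsed
actual_complete = 0.30           # 30% of the work done

pv = planned_complete * bac      # planned value            -> 20,000
ev = actual_complete * bac       # earned value             -> 24,000
cpi = ev / ac                    # cost performance index   -> 0.6
eac = bac / cpi                  # estimate at completion   -> ~133,333
etc = eac - ac                   # estimate to complete     -> ~93,333
vac = bac - eac                  # variance at completion   -> ~-53,333

print(f"PV={pv:,.0f} EV={ev:,.0f} CPI={cpi:.2f} "
      f"EAC={eac:,.0f} ETC={etc:,.0f} VAC={vac:,.0f}")
```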

The earned value analysis can be a great tool for project managers to monitor the progress of a project.  It will require some mathematical calculations, but when used properly earned value analysis can spot potential problems with the project in the early stages.  This gives project managers enough time to react to the situation and make necessary course corrections.

Conclusion

Many organizations do not implement proper cost controls on projects to catch when project costs are deviating from the plan so corrective action can be taken.  Forecasting is also not properly used to avoid common problems with foreseeable costs.  As organizations properly implement cost controls and accurately forecast project costs throughout planning and execution, projects will stay on budget and be able to make corrections when necessary.

References

AIU Online. (2010). IPM631: Unit 4: Forecasting for decision making [Multimedia presentation]. Retrieved August 9, 2010, from AIU Online Virtual Campus. Technical Project Leadership: IPM631- 1003B-01 website.

Business Dictionary. (2010). Rough order of magnitude. Retrieved August 13, 2010, from http://www.businessdictionary.com/definition/rough-order-of-magnitude.html

Crump, K. (2010). IT Project Controlling Measures. Retrieved August 10, 2010, from http://www.pcdocsac.com/blog/?page_id=24

Dube, L. (1999). Establishing project control: Schedule, cost, and quality. Retrieved August 10, 2010, from http://www.allbusiness.com/human-resources/workforce-management/342527-1.html

Engineering Statistics Handbook. (2010). Triple exponential smoothing. Retrieved August 14, 2010, from http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc435.htm

Haughey, D. (2010). What is earned value? Retrieved August 12, 2010, from http://www.projectsmart.co.uk/what-is-earned-value.html

IBM. (2006). Method and application for creating rough order of magnitude (ROM) cost estimates and summarizing the results by application, lifecycle phase, and project months. Retrieved August 13, 2010, from http://priorartdatabase.com/IPCOM/000138907

Janert, P. (2006). Exponential smoothing. Retrieved August 14, 2010, from http://www.toyproblems.org/probs/p02/

Kerzner, H. (2009). Project management: A systems approach to planning, scheduling, and controlling (10th ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Pandian, R. (2003). Software metrics: A guide to planning, analysis, and application. Boca Raton, FL: CRC Press.

Project Management Institute. (2008). A guide to the project management body of knowledge: (PMBOK guide) (4th ed.). Newtown Square, PA: Project Management Institute, Inc.

RiskAMP.com. (2010). What is Monte Carlo simulation? Retrieved August 14, 2010, from http://www.riskamp.com/files/RiskAMP%20-%20Monte%20Carlo%20Simulation.pdf

Sharma, R. (2010). Examples of using earned value analysis for calculating project performance. Retrieved August 12, 2010, from http://www.brighthub.com/office/project-management/articles/57945.aspx

Statistical Forecasting. (2006). Statistical forecasting methods. Retrieved August 14, 2010, from http://www.statisticalforecasting.com/forecasting-methods.php

Vargas, R. (2004). Earned value probabilistic forecasting using Monte Carlo simulation. Retrieved August 14, 2010, from http://www.ricardo-vargas.com/articles/earnedvaluemontecarlo/

Walonick, D. (1993). An overview of forecasting methodology. Retrieved August 14, 2010, from http://www.statpac.com/research-papers/forecasting.htm

What is Monte Carlo simulation? (n.d.) Retrieved August 13, 2010, from http://www.riskamp.com/files/RiskAMP%20-%20Monte%20Carlo%20Simulation.pdf

 

Forecasting Human Resource Needs

Human resource needs are typically the most costly piece of any project. They are especially costly if they are either overestimated, and therefore overcompensated for, or underestimated, forcing the project to miss the desired deadlines. If the latter occurs, companies are often forced either to push out deadlines or to hire costly outside contractors to force the project back on track with the desired timeline. Either way, undesirable results are experienced. Only when resources are properly accounted for, and budgeted for, does the project team find itself in a winning position.

There are many methods available to assist with forecasting these human resource needs. There are two major approaches to forecasting techniques: qualitative and quantitative. Some examples of qualitative approaches to forecasting are estimation techniques, expert opinion, the Delphi method, and averaging. A few of the techniques that could be used with the quantitative approach are trend analysis, regression analysis, simulation techniques, and Markov analysis.

In order to monitor and ensure the fulfillment of the human resource requirements, the organization will want to include items such as performance reviews. Performance reviews measure, compare, and analyze schedule performance such as actual start and finish dates, percent complete, and remaining duration for the work in progress (Project Management Institute, 2008, p. 162). Throughout the project, the performance review is used to track the current status of tasks within the project, including the completion percentage. Based on the scheduled start and end dates and the current completion percentage, the schedule variance (SV) and the schedule performance index (SPI) can be calculated; this can easily be done in a simple Excel spreadsheet. Project managers use Earned Value Management (EVM) techniques to compare the schedule performance (obtained from the performance review discussed above) with the cost of that performance (what we have spent so far and what we will spend). In short, what we are trying to create is a basic "business value for what we spent" model. This is often a report or graph provided to the project sponsors before, during, and after project completion. The Earned Value (EV) can be compared to actual costs and planned costs to determine project performance and predict future performance trends (KIDASA Software, 2010). Earned Value shows how far off we are with regard to the budget and time that should have been spent in comparison to the actual amount of work done to date (Haughey, 2009). Another key term that needs to be discussed with regard to schedule variance (SV) and the schedule performance index (SPI) is variance analysis: the SV and SPI are used to assess how large a variation we have from the schedule baseline (Project Management Institute, 2008).

The Monte Carlo simulation method is a way of simulating ranges of projected needs for project resources, time, and so on. In order to complete a Monte Carlo simulation the project manager will need an application that can repeat the simulation up to several hundred times. In short, all the project manager needs to do is create a baseline, experience-based estimate of the resources that will be needed for each task, providing a low value and a high value for the simulation to work with. These highs and lows for each task within the project are used to produce high and low values for the resources that will be needed for the project as a whole. The simulation is then run several hundred times in order to increase the accuracy of the estimated values (RiskAMP.com, 2010). One application that could be used to perform this is RiskAMP. This of course could be used to project the human resource needs of the project.

When looking at human resource needs, many other variables must be taken into account for international projects with resources located in many dispersed countries. One of the biggest variables that needs to be taken into account is the potential for different work hours. Some countries only work 4 days a week, while others may work 6. Some countries take long afternoon breaks and then come back to work during the cooler evening hours. These discrepancies in work schedules must be accounted for, and the project manager may need to set different timelines within each country in order to complete the work. Another option for the project manager is simply to look at the number of man-hours it will take to complete the task and, within those countries that do not work as many hours, increase the head count in order to maintain the schedule; this still results in the same number of man-hours. Some countries may also be more productive during work hours and therefore require less time to achieve the same tasks. For instance, many other countries at least perceive US employees as people who stand around the water cooler all day and do not produce well. Again, a statistical baseline will need to be created in order to set accurate goals and forecasts within the project.

The To-Complete Cost Performance Indicator, or TCPI, is an index that shows how efficiently the resources working on the project must be used during the remaining time of the project. In order to calculate the TCPI the project manager will need to know the total budget for the project, the earned value (discussed previously), and the actual expenditure of resources: TCPI = (Total Budget – EV) / (Total Budget – AC) (Tutorials Point, 2010). In short, TCPI is used to determine the performance required to complete the project as budgeted. A small sketch follows.
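As a small sketch of the indicators discussed in this section, the following computes SV, SPI, and the TCPI formula quoted above; all of the figures are illustrative assumptions.

```python
# Illustrative schedule/cost indicators for a project; all figures are assumptions.
budget = 100_000   # total budget (BAC)
pv = 40_000        # planned value to date
ev = 35_000        # earned value to date
ac = 45_000        # actual cost to date

sv = ev - pv                           # schedule variance (negative = behind schedule)
spi = ev / pv                          # schedule performance index
tcpi = (budget - ev) / (budget - ac)   # efficiency needed on the remaining work

print(f"SV={sv:,} SPI={spi:.2f} TCPI={tcpi:.2f}")
```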

Haughey, D. (2009). What is Earned Value? Retrieved August 15, 2010, from Project Smart: http://www.projectsmart.co.uk/what-is-earned-value.html

KIDASA Software. (2010). What is Earned Value Management? Retrieved August 15, 2010, from Earned Value Management: http://earnedvaluemanagement.com/

Project Management Institute. (2008). A Guide To The Project Management Body of Knowledge. Newtown Square: Project Management Institute, Inc.

RiskAMP.com. (2010). What is Monte Carlo Simulation? Retrieved August 14, 2010, from RiskAMP.com: http://www.riskamp.com/files/RiskAMP%20-%20Monte%20Carlo%20Simulation.pdf

Tutorials Point. (2010). To Complete Cost Performance Indicator (TCPI). Retrieved August 15, 2010, from Tutorials Point: http://www.tutorialspoint.com/earn_value_management/cost_variance.htm

Six Software Development Frameworks

Introduction and Background

In software development, there are a variety of software development frameworks available for Innovative Systems’ consideration. Each framework follows a specific life cycle intended to help ensure the success of the software development effort.

 

Waterfall Model

The waterfall model is one type of software development framework. It was the first model introduced, and it has been generally followed in software engineering since 1970, states Parekh (2005). In the waterfall model, the entire software development process is split into distinct phases: requirements specification, design, implementation, integration, testing, installation, and maintenance. Development advances in sequence through these phases; just as in a waterfall, the work "flows" downward from one phase to another (TechRepublic, 2006). According to Steele and Swiler (2002), the output of each phase serves as the input for the next phase. In this particular framework, you move to a phase only when the prior phase is complete; between phases there is no overlap or bouncing back and forth, notes Select Business Solutions (2010).

The waterfall model can be classified as a systems development life cycle: it involves a phased sequence of activities, and according to Bista (2010) there are distinct goals in every phase of development. The model starts with the requirements analysis of the system and leads to the system's release and, later, its maintenance.


Dynamic system development method (DSDM)

Dynamic system development method (DSDM) is another software development framework, one that develops software systems dynamically through the use of prototypes. The framework is essentially a variant of the rapid application development (RAD) process. According to Freetutes.com (2010), its aim is to deliver systems on time and within budget while accommodating revisions to the requirements during system development. This is practical for systems that must be developed in a short amount of time and whose goals cannot be fixed at the beginning of the application development. The following phases are part of DSDM: feasibility study, business study, functional model iteration, design and build iteration, and implementation.

 

DSDM can be categorized as a form of the systems development life cycle because it is an in-depth plan of how to build, develop, implement, and close the project. The whole plan outlines how the system is born, raised, and later retired from its purpose.

 

The waterfall model is a non-iterative approach, while DSDM is an incremental and iterative approach. One element of the waterfall model is that a phase cannot start until the previous phase is completed; the model has to follow the phases in sequence (Select Business Solutions, 2010). DSDM is the opposite in this regard: it allows the project to go back to preceding phases in order to improve them and the final product. When the requirements in a project are continually changing, DSDM is quick to respond, whereas the waterfall model is not appropriate (Freetutes.com, 2010). Both of these frameworks, the waterfall model and DSDM, are classified as systems development life cycles.

 

Agile Software Development

With Agile Software Development, end user involvement is critical to the project's success. In many cases, because of the way agile development works, this is not possible; in those cases you will want to appoint senior-level end users who represent the majority of the end users you are trying to satisfy. This way, the smaller projects are prioritized according to what the end users need most. In order to provide the quick results requested of a development team in an agile environment, the team needs decision-making authority; to accomplish this, you will again want to ensure that the team has direct contact with the senior-level end users on a day-to-day basis. Where most project management and software development methods try to stake down all requirements up front with a timeline, Agile approaches this much differently. The agile method of development works on the idea that the end users do not know what they want until they begin to see, and use, the end product. Based on this philosophy, the developers try to get a base model out to the end users, and the end users then evolve the project as they see the need. This provides end users with a product they will actually use, not a product that has extensive functionality but will never be used (Waters, 2007).

 

With Agile, we want to capture the system requirements at a high level only. This philosophy follows from the previous point: with Agile, we want to capture the minimum requirements the end users want and need, provide them with that one part completed very well, and then focus on the next part or activity. Of course, in order to have a good and stable end product, we still need to have an idea of the end target for the project and treat every end user request as a direction leading us toward that end goal. The overall concept of "develop small, incremental releases and iterate" is one of the main things we are trying to accomplish by using the agile methodology. By doing this, we are reducing risk, increasing value, being more flexible, and providing better cost management. If the project runs out of money and management kills the project, the end users are not left with nothing; in fact, they may be left with a very good product that they can use, and the project may be returned to once funds become available again. In other words, with Agile we want to ensure that we complete each feature before we move on to the next enhancement. As such, testing is completed throughout the full life of the project rather than at the tail end once all aspects of the project are complete. Again, communication and collaboration between all stakeholders are essential for this philosophy to work, and work well (Waters, 2007).

 

When comparing Agile to the waterfall approach, you will find that both use use cases and UML diagrams for analysis and design. DSDM is often considered the original agile development method. Scrum is another agile development method, one that focuses on managing tasks and activities within a team rather than among individual developers. XP can also be considered an agile method of development; however, it focuses heavily on the software development process and aims for a high-quality end product (Waters, 2007).

 

The Agile Software Development methodology is a member of the system development life cycle family, as there is a definitive process of planning, analyzing, designing, and implementing each phase of the program as it is developed. Again, the major difference between this framework and other SDLC methods is that the process iterates many times before the project is ruled complete, rather than only once. Although Agile is considered a member of the system development life cycle family of frameworks, it should be clear at this point that many of its concepts could be used with the project life cycle as well. For instance, if providing end users with a video conferencing solution, the project manager may want to apply the 80/20 rule that the Agile method often emphasizes. With the 80/20 rule, we take the philosophy that 80% of the results come from 20% of the effort, so we want to focus on the 20% of the effort that is going to matter. We may want to implement only the pieces of the video conferencing system that matter most to the end users and then grow the project from there as the company adopts the new technology.

 

Rational Unified Process (RUP)

As different projects have different process needs, the factors that determine these needs usually place a process toward either the formal or the agile end of the spectrum. The Rational Unified Process (RUP) is a software engineering process that provides a disciplined approach to software development. Even though this process relies on some measure of formality, it also works well with an agile style, and it has the capability to automate certain large portions of the process. RUP acts as a guide when utilizing the Unified Modeling Language (UML); UML enables the clear communication of architecture, requirements, and designs. As such, RUP can act as a systems life cycle or as a product life cycle, since it can easily handle larger projects delivered either as a system (a mainframe, for example) or as a product (Office 2010, for example). As stated above regarding the different needs of various projects, RUP can be easily configured to match many different projects. Each software life cycle can be broken into cycles that work on new generations of the product, and each cycle is broken into iterations (or phases): Inception (initiation), Elaboration (planning), Construction (execution), and Transition (closing).

 

The Open Unified Process (OpenUP) uses many of the essential characteristics of RUP, but it does not provide guidance on many of the topics that large projects may need to address and that RUP does cover, such as compliance or contractual situations, large teams, and mission-critical applications. OpenUP is a minimally sufficient process in which only the fundamental content is included (Balduino, 2007). Even though OpenUP is an agile process and is considered lightweight in comparison to RUP, it still applies iterative and incremental approaches within a structured life cycle and uses use cases, scenarios, risk management, and an architecture-centric approach to drive development (Balduino, 2007).

The four core principles of OpenUP are:

  1. Collaborate in order to align interests and share understanding.
  2. Balance any competing priorities in order to increase stakeholder value.
  3. Focus on the architecture early in order to minimize risks and to organize development.
  4. Evolve in order to promote activities that obtain feedback from the stakeholders, improving the development and giving the stakeholders value.

 

These activities are more in line with a product life cycle in that they do what they can to promote the project and product by fostering a positive environment.


Extreme programming

Extreme programming is a type of agile process based on the values of simplicity, communication, feedback, and courage. It was first introduced in the 1990s at DaimlerChrysler by Kent Beck in an attempt to find a better way to develop software (Copeland, 2001). It was originally designed to be flexible enough to allow free communication with the customer and to let the coding of the software change, if needed, to accommodate changing requirements. One downside to this is the potential for project creep, which can be disastrous to budgets and schedules. One form of planning is release planning, where the customer states the requirements and the developers estimate the difficulty. Another form is iteration planning, where the developers are given updated direction every two weeks or so, with a tested, usable segment ready at the end of that time so that it may be tested by the customer. Extreme programming has been developed as a lightweight development process in which coding is continuously integrated, from every few hours to approximately once a day, and then tested; if the test fails, only the unworkable portion is removed and fixed (or discarded and redone). The way this process is set up, it has the appearance of being more "flowing" than other methods, for more of a continuous development.
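To make the idea of a "tested, usable segment" concrete, here is a minimal, hypothetical sketch of the test-first habit XP encourages. XP itself does not prescribe a language or test framework, so Python's unittest module and the small feature shown are used purely for illustration.

```python
import unittest

# Illustrative only: the idea sketched here is the XP practice of writing the
# test for a small piece of customer-requested functionality first, then only
# enough code to make it pass, so every short iteration ends with a tested,
# usable increment that the customer can try out.

def apply_discount(price, percent):
    """Tiny hypothetical feature requested by the customer for this iteration."""
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

if __name__ == "__main__":
    unittest.main()
```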

 

Rapid Application Development (RAD)

There are many approaches that the organization may explore when selecting software development frameworks; these methods range from spiral models to waterfall models. The development types may include processes such as adaptive software development, where the process follows an adaptive approach providing more freedom, or, when collaboration is needed, an agile development approach may be warranted. The RAD life cycle model uses an approach in which developers work with an evolving prototype; this life cycle involves user collaboration, which produces systems quickly without sacrificing quality. The Rapid Application Development (RAD) methodology allows developers to build systems strategically and cost-effectively while preserving quality through proven development techniques, and it allows the development cycle to be expedited, saving valuable resources. Rapid application development uses tools such as computer-aided software engineering (CASE), joint requirements planning (JRP), and joint application design (JAD) to conduct prototyping and code generation (Schwalbe, 2007).

The rapid application development process is an example of a predictive life cycle, meaning that the scope of the project can be seen and articulated. This particular life cycle model is part of a framework known as the system development life cycle. The system development life cycle has elements quite similar to project life cycle development, including the assignment of project managers to oversee the entire development process and the adjustment of scope requirements to satisfy time and cost. There are also conditions that may cause project managers to modify traditional project management methods depending on the particular product life cycle. The RAD life cycle model is an example of how elements differ from traditional project life cycles, mainly because a RAD life cycle may start from an existing, functioning product that warrants additional features and upgrades rather than spending an enormous amount of time on requirement specifications.

There are also software development frameworks, such as the Adaptive Software Development model, that deliver the functionality specified by the stakeholders or business group as the needs are discovered, in a more liberal approach. The Adaptive Software Development model follows an adaptive approach because the requirements cannot be expressed in a clear manner (Schwalbe). In comparison with other models, the scope of a project using the Adaptive model differs from a project life cycle: the Adaptive model's scope cannot be clearly defined initially, whereas in a project life cycle the scope is set at the start, giving stakeholders a clearly defined vision of the type of product or process they will receive. The key aspects of this approach are that projects are component based and mission driven, using time-based cycles to meet target dates. These types of projects are usually risk driven rather than scope driven, as would be found within a project life cycle framework.

 

Conclusions

Overall, the success of the project will depend on how closely Innovative Systems abides by a software development framework. Innovative Systems must be aware that certain software development frameworks will work better than others for managing a given software development project. If Innovative Systems executes these frameworks correctly, considerable time and cost savings can be expected. There is no single approach to software development that will work in every instance; real results come from project managers who have extensive experience and skill, know a wide range of techniques, and know when each technique is most appropriate for the environment at hand to achieve the desired results. Typically, the correct solution is a combination of many different techniques.


References

Bista, B. (2010). The Waterfall Model: IT Project Management Solutions. Retrieved June 30, 2010, from http://www.buzzle.com/articles/waterfall-model-it-project-management-solutions.html

Freetutes.com. (2010). Dynamic System Development Method (DSDM). Retrieved June 30, 2010, from http://www.freetutes.com/systemanalysis/sa2-dynamic-system-development-method.html

Parekh, N. (2005). The Waterfall Model Explained. Retrieved June 30, 2010, from http://www.buzzle.com/editorials/1-5-2005-63768.asp

Select Business Solutions. (2010). What is the Waterfall Model? Retrieved June 30, 2010, from http://www.selectbs.com/adt/analysis-and-design/what-is-the-waterfall-model

Steele, J., & Swiler, N. (2002). The Software Development Life Cycle (SDLC). Retrieved June 30, 2010, from http://www.elucidata.com/refs/sdlc.pdf

TechRepublic. (2006). Understanding the Pros and Cons of the Waterfall Model of Software Development. Retrieved June 30, 2010, from http://articles.techrepublic.com.com/5100-10878_11-6118423.html

Waters, K. (2007). 10 key principles of agile software development. Retrieved July 1, 2010, from http://www.agile-software-development.com/2007/02/10-things-you-need-to-know-about-agile.html

Kerzner, H. (2009). Project Management: A Systems Approach to Planning, Scheduling, and Controlling (Tenth ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Project Management Institute. (2008). A Guide to the Project Management Body of Knowledge (PMBOK Guide) (Fourth ed.). Newtown Square, PA: Project Management Institute.

Project Management Institute. (2009). Q & As for the PMBOK Guide Fourth Edition. Newtown Square, PA: Project Management Institute.

Schwalbe, K. (2007). Information Technology Project Management (Fifth ed.). Boston, MA: Thomson Course Technology.

Gido, J., & Clements, J. (2006). Successful Project Management (Third ed.). Mason, OH.

IBM. (1998, rev. 2001). Rational Unified Process: Best Practices for Software Development Teams. Retrieved June 29, 2010, from http://www.ibm.com/developerworks/rational/library/content/03July/1000/1251/1251_bestpractices_TP026B.pdf

Balduino, R. (2007). Introduction to OpenUP (Open Unified Process). Retrieved June 29, 2010, from http://www.eclipse.org/epf/general/OpenUP.pdf

Balduino, R. (2006?). Basic Unified Process: A Process for Small and Agile Projects. Retrieved June 28, 2010, from http://www.eclipse.org/proposals/beacon/Basic%20Unified%20Process.pdf

Wells, D. (2009). Extreme Programming: A Gentle Introduction. Retrieved June 29, 2010, from http://www.extremeprogramming.org/

Wells, D. (2009). The Rules of Extreme Programming. Retrieved June 29, 2010, from http://www.extremeprogramming.org/rules.html

Copeland, L. (2001). Extreme Programming. Retrieved June 30, 2010, from http://www.computerworld.com/s/article/66192/Extreme_Programming?taxonomyId=063