Working with management to ensure the project's success!

One of the most critical aspects of any project is working with management to ensure the project's success. With this come many concerns that must be addressed. The two most important factors we will address in this discussion are expectations and communication.

The first topic we will discuss is expectations. To manage the expectations of the managers within Kucera Clothiers, the project manager overseeing the project must first discover and fully understand those managers' expectations. The PM must then inform the appropriate managers of the risks involved with the project and assure them that those risks are being addressed. One of the best ways to keep management happy is, first, not to promise anything you are not sure you can deliver. The second is to make sure the team exceeds all expectations after it has met all requirements for the project; in short, "under-promise and over-deliver" (Bool, 2005).

Now that we have met and exceeded all expectations and managed the expectations of the management staff, we must control and deliver a product that all staff members are excited about using. To accomplish this, the project manager should first involve management during the design, implementation, and staff-training phases. This gives the managers involved a sense of ownership in the project and the pride that will be needed to fully support it going forward. The PM will also want to encourage upper management to be enthusiastic about the change and what it can provide the organization and lower management's staff. Next, the PM will want to encourage lower management to sell the idea to their staff.

In conclusion, communication is the key to addressing and delivering on the expectations of the management staff. Meeting and exceeding those expectations, and doing so with confidence so as not to invite doubt, is the key to delivering a new technology into an existing environment.

Bool, H. (2005, December 16). Change Management and Expectations. Retrieved November 7, 2010, from http://ezinearticles.com/?Change-Management-and-Expectations&id=114517

Virus or Worm

I chose to try Spybot's Search & Destroy. After the install it started to run. The first step it performed was a registry backup. This is a precaution it takes, as much of the spyware it removes is deeply embedded in the registry, and any time you edit or remove registry items without using the uninstaller that came with an application, you should back up the registry beforehand. The next thing it performed was an update to the application. Next, it is recommended to run the Immunize piece of the application. This helps prevent spyware from hooking into applications such as IE. Lastly, I ran the search-for-problems piece. The results showed 3 non-default Windows security settings. These are settings I have changed and do not want Spybot to change back, so I unchecked those boxes.

The rest of the items that showed up were tracking cookies. Tracking cookies are used primarily by online advertisers to keep information about the kinds of things you like to click on while browsing the internet. This information is used for behavioral targeting when you go to the next site that contains ads that company is responsible for. That way the company can place an ad in front of you that you are more likely to have an interest in. This is beneficial to both the end user and the ad company: you are not bombarded with ads you have no interest in, and in turn the ad company increases the effectiveness of its advertising. This is what my company does; I am a Sr. Network Windows Admin for Adknowledge. Below is a screen shot of the results.

The next topic we will discuss is the difference between a virus and a worm. A virus is a piece of code that attaches itself to a file and then replicates itself to other files on a system, quite often disabling the system or at least severely impacting its overall performance. A worm is very similar in that it is a piece of code that replicates; however, a worm tries to find vulnerabilities on the network and enter other systems through those vulnerabilities. Once on a system, it searches the network again for another way to spread. Worms can cause both system performance issues and network performance issues.

The last topic we will discuss is my recommendation to HealthFirst Hospital Foundation with regard to patch management. I would highly recommend that most companies with over 25 PCs look into using Microsoft's WSUS. This service makes it very easy to manage all patches to be installed on your network, and it can be fully automated if need be. It also results in less internet usage, as the server will be the only system downloading the patches; the clients are then pointed to update from this server instead of the Windows Update website. The server side can control the release of patches to groups of PCs. This is a great help with regard to testing patches before rolling them out to all systems within the organization, possibly preventing a company-wide outage due to a patch conflict with one of the other applications that reside on the PCs. Other applications on the network, such as antivirus applications, will need their own server-side management. For example, Symantec's Endpoint Protection has a server-side piece that will manage and keep up to date all clients on the network. The other nice thing about having such a system is that it can also remotely install, and even reinstall, the client-side app if need be.

References:

Bird, D., & Harwood, M. (2003). Network+ Exam N10-002. Indianapolis: Que Certification.

http://www.computer-lynx.com/a-virus-or-worm.htm, retrieved December 14, 2008.

http://technet.microsoft.com/en-us/wsus/default.aspx, retrieved December 14, 2008.

3 Primary Levels of Database Normalization

Below you will find a copy of the original table: Grade Report

Student_ID Student_Name Major Course_ID Course_Title Instructor_Name Instructor_Location Grade
168300458 Williams IT ITS420 Database Imp Codd B104 A
168300458 Williams IT ITP310 Programming Parsons B317 B
543291073 Baker Acctg ITD320 Database Imp Codd B104 C
543291073 Baker Acctg Acct201 Fund Acctg Miller H310 B
543291073 Baker Acctg Mktg300 Intro Mktg Bennett B212 A

This table is in an un-normalized form because it has repeating groups. For instance, Student_ID 168300458 appears twice and Student_ID 543291073 appears three times, so there is no single field that may be used as a primary key for this table. We will bring this table to 1NF (first normal form) in a moment, but for now let's look at the dependencies among the attributes of this table.

The functional dependencies are shown below:

  • Student_ID -> Student_Name, Major
  • Course_ID -> Course_Title, Instructor_Name, Instructor_Location
  • Student_ID, Course_ID -> Grade
  • Instructor_Name -> Instructor_Location
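The dependencies listed above can be checked mechanically against the sample rows. The following Python sketch is purely illustrative (it is not part of the original discussion, and the column positions are an assumption based on the table layout): it tests whether a candidate functional dependency holds over the data.

```python
# Sample rows from the Grade Report table, one tuple per row.
# Columns: 0=Student_ID, 1=Student_Name, 2=Major, 3=Course_ID,
#          4=Course_Title, 5=Instructor_Name, 6=Instructor_Location, 7=Grade
rows = [
    ("168300458", "Williams", "IT",    "ITS420",  "Database Imp", "Codd",    "B104", "A"),
    ("168300458", "Williams", "IT",    "ITP310",  "Programming",  "Parsons", "B317", "B"),
    ("543291073", "Baker",    "Acctg", "ITD320",  "Database Imp", "Codd",    "B104", "C"),
    ("543291073", "Baker",    "Acctg", "Acct201", "Fund Acctg",   "Miller",  "H310", "B"),
    ("543291073", "Baker",    "Acctg", "Mktg300", "Intro Mktg",   "Bennett", "B212", "A"),
]

def holds(rows, lhs, rhs):
    """True if the columns at positions `lhs` determine those at `rhs`."""
    seen = {}
    for r in rows:
        key = tuple(r[i] for i in lhs)
        val = tuple(r[i] for i in rhs)
        if seen.setdefault(key, val) != val:
            return False  # same left-hand value maps to two different right-hand values
    return True

print(holds(rows, [0], [1, 2]))   # Student_ID -> Student_Name, Major: True
print(holds(rows, [5], [6]))      # Instructor_Name -> Instructor_Location: True
print(holds(rows, [0, 3], [7]))   # Student_ID, Course_ID -> Grade: True
print(holds(rows, [5], [3]))      # Instructor_Name does NOT determine Course_ID: False
```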

The dependencies of the attributes can be shown in the following dependency diagram.

To decompose this table into first normal form (1NF) we will need to expand the primary key to include both Student_ID and Course_ID, creating a composite key. With the expanded primary key we now have 5 distinct records in the table. The table would now look like this:

Grade Report (Student_ID, Student_Name, Major, Course_ID, Course_Title, Instructor_Name, Instructor_Location, Grade)

Next, in order to get the table from 1NF to 2NF, we will need to get rid of all partial dependencies. These dependencies are found whenever you have a composite key (a key made of more than one field) and a non-key field depends on only part of it. So, in short, we will create separate tables: each part of the former composite key becomes the primary key of its own table, and the fields that depend on only that part move with it. Only the Grades table keeps the composite key, since Grade depends on both fields.

Students

Student_ID Student_Name Major
168300458 Williams IT
543291073 Baker Acctg

Courses

Course_ID Course_Title Instructor_Name Instructor_Location
ITS420 Database Imp Codd B104
ITP310 Programming Parsons B317
ITD320 Database Imp Codd B104
Acct201 Fund Acctg Miller H310
Mktg300 Intro Mktg Bennett B212

Grades

Student_ID Course_ID Grade
168300458 ITS420 A
168300458 ITP310 B
543291073 ITD320 C
543291073 Acct201 B
543291073 Mktg300 A

These tables can also be shown as:

Students (Student_ID, Student_Name, Major)

Courses (Course_ID, Course_Title, Instructor_Name, Instructor_Location)

Grades (Student_ID, Course_ID, Grade)

Foreign key: Student_ID to Students table

Foreign key: Course_ID to Courses table
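As a sketch of what this decomposition does to the data (Python is used here purely for illustration; the row data comes from the original Grade Report table), the 2NF split amounts to projecting the flat rows onto each key and removing duplicates:

```python
# The 1NF Grade Report rows, one tuple per row:
# (Student_ID, Student_Name, Major, Course_ID, Course_Title,
#  Instructor_Name, Instructor_Location, Grade)
rows = [
    ("168300458", "Williams", "IT",    "ITS420",  "Database Imp", "Codd",    "B104", "A"),
    ("168300458", "Williams", "IT",    "ITP310",  "Programming",  "Parsons", "B317", "B"),
    ("543291073", "Baker",    "Acctg", "ITD320",  "Database Imp", "Codd",    "B104", "C"),
    ("543291073", "Baker",    "Acctg", "Acct201", "Fund Acctg",   "Miller",  "H310", "B"),
    ("543291073", "Baker",    "Acctg", "Mktg300", "Intro Mktg",   "Bennett", "B212", "A"),
]

# Project each group of attributes onto its own key; sets remove duplicates.
students = sorted({(r[0], r[1], r[2]) for r in rows})
courses  = sorted({(r[3], r[4], r[5], r[6]) for r in rows})
grades   = sorted({(r[0], r[3], r[7]) for r in rows})

print(len(students), len(courses), len(grades))  # 2 5 5
```

Note that Students collapses to just two distinct rows once Student_ID is its primary key, while Grades retains all five student/course pairs.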

We still have an issue where a user could enter the Instructor_Name or Instructor_Location incorrectly and create a data mismatch. This is referred to as a data anomaly. For instance, one record could say that instructor Codd is in location B104 while another record indicates that Codd is in location B317; there is nothing to prevent this from happening. To prevent this kind of anomaly we will pull Instructor_Name and Instructor_Location out and create a fourth table. In the Courses table I will create a new field called Instructor_ID, which will be a foreign key to the Instructors table (Instructor_Name alone is too vague, since two instructors could share the same name). In that fourth table, Instructor_ID will be the primary key. The newly created tables will now look like this and are in 3NF:

Students

Student_ID Student_Name Major
168300458 Williams IT
543291073 Baker Acctg

Courses

Course_ID Course_Title Instructor_ID
ITS420 Database Imp 1001
ITP310 Programming 1002
ITD320 Database Imp 1001
Acct201 Fund Acctg 1003
Mktg300 Intro Mktg 1004

Grades

Student_ID Course_ID Grade
168300458 ITS420 A
168300458 ITP310 B
543291073 ITD320 C
543291073 Acct201 B
543291073 Mktg300 A

Instructors

Instructor_ID Instructor_Name Instructor_Location
1001 Codd B104
1002 Parsons B317
1003 Miller H310
1004 Bennett B212

These tables can also be shown as such:

Students (Student_ID, Student_Name, Major)

Courses (Course_ID, Course_Title, Instructor_ID)

Foreign key: Instructor_ID to Instructors table

Grades (Student_ID, Course_ID, Grade)

Foreign key: Student_ID to Students table

Foreign key: Course_ID to Courses table

Instructors (Instructor_ID, Instructor_Name, Instructor_Location)

Below you will find a Chen's Model ER diagram of the 3NF tables:

References:

Adamski, J., & Finnegan, K. (2008). Microsoft Office Access 2007. Boston: Course Technology.

http://www.sethi.org/classes/cet415/lab_notes/lab_03.html, retrieved May 1, 2009

7 Basic Steps to Designing a Relational Database

There are 7 basic steps to designing a relational database. Several other steps could be included in database implementation and the database life cycle, such as research, rollout, and maintenance. However, I will be focusing strictly on the database design.

 

The seven basic steps are as follows. First, determine the purpose of your system. Next, determine the entities for your system; these entities will become your table names. Next, decide which attributes of those entities are necessary for the purpose of the system; these attributes will become the fields within the tables. Next, determine which attributes/fields will identify each record/entity as unique; this field (or fields) will make up the primary key for each table. After you have determined the primary keys for the tables, define the relationships between the tables. The best way to truly do this is to follow the standard steps of table normalization and, at minimum, get the tables to 3NF. The process of normalization is the sixth step in the database design. The seventh, and final, step in the database design is the actual data population.

 

In short, the key steps are:

  1. Determine the purpose of the system
  2. Determine what entities/tables will be included
  3. Determine what attributes/fields will be included
  4. Identify unique fields (primary keys)
  5. Determine relationships between tables
  6. Refine the design (normalization)
  7. Populate the tables with raw data
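The steps above can be sketched with Python's built-in sqlite3 module. The schema follows the normalization example earlier in this document; only a subset of the sample data is inserted, and this is an illustration of the steps rather than a production design.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Steps 2-4: entities become tables, attributes become fields,
# and the uniquely identifying fields become primary keys.
conn.executescript("""
CREATE TABLE Students (Student_ID TEXT PRIMARY KEY, Student_Name TEXT, Major TEXT);
CREATE TABLE Courses  (Course_ID TEXT PRIMARY KEY, Course_Title TEXT);
-- Steps 5-6: relationships are defined via foreign keys, with the
-- design refined (normalized) so Grades carries only the composite key.
CREATE TABLE Grades   (Student_ID TEXT REFERENCES Students,
                       Course_ID  TEXT REFERENCES Courses,
                       Grade TEXT,
                       PRIMARY KEY (Student_ID, Course_ID));
""")

# Step 7: populate the tables with raw data.
conn.executemany("INSERT INTO Students VALUES (?,?,?)",
                 [("168300458", "Williams", "IT"), ("543291073", "Baker", "Acctg")])
conn.executemany("INSERT INTO Courses VALUES (?,?)",
                 [("ITS420", "Database Imp"), ("ITP310", "Programming")])
conn.executemany("INSERT INTO Grades VALUES (?,?,?)",
                 [("168300458", "ITS420", "A"), ("168300458", "ITP310", "B")])

row = conn.execute("""SELECT s.Student_Name, c.Course_Title, g.Grade
                      FROM Grades g
                      JOIN Students s ON s.Student_ID = g.Student_ID
                      JOIN Courses  c ON c.Course_ID  = g.Course_ID
                      WHERE g.Course_ID = 'ITS420'""").fetchone()
print(row)  # ('Williams', 'Database Imp', 'A')
```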

 

References:

http://www.databasedev.co.uk/database_design_requirements.html, retrieved May 1, 2009

 

Adamski, J., & Finnegan, K. (2008). Microsoft Office Access 2007. Boston: Course Technology.

 

Database Table Relationships

The first type of relationship we will discuss is the one-to-one relationship. A one-to-one relationship is where an entity within one table references one and only one entity within another table. For instance, within most offices you will find users that have laptops. This relationship is typically one-to-one: one user per laptop. Users do not typically share laptops the way they may share standard desktops. In a relational database, the users would be in one table and the laptops in another. This relationship can be seen as follows:

 

Users (User_ID, First_Name, Last_Name, Phone, Service_Tag)

Foreign key: Service_Tag to Laptops table

Laptops (Service_Tag, Brand, Model, CPU, Memory)

 

The second type of relationship you will find within a relational database is the one-to-many relationship. Whether a relationship is one-to-many or many-to-one is all a matter of perspective. With a one-to-many relationship, one entity within one table relates to many entities within another table. My example here is something you will commonly find within a small office of 5-10 users. Typically a small office will have only 1 networked printer, and many users will use that printer. This relationship can be shown in the following diagram.

 

Users (User_ID, First_Name, Last_Name, Phone, PrinterID)

Foreign key: PrinterID to Printers table

Printers (PrinterID, Brand, Model, Location)

The last relationship I will show is the many-to-many relationship. With a many-to-many relationship you will find many entities within one table that relate to many entities in another table. To model this properly you need what is called an intersection table, junction table, or link table; I personally use the term link table. The link table splits the many-to-many relation into two one-to-many relationships. This greatly simplifies the logic within the database, can help prevent issues later down the road, and helps enforce referential integrity. Let's take the previous example of the small office with a single printer and many users, and expand it to a larger office with many printers. For instance, in an office of 100 users there is likely to be at least 1 printer for every 20 or 25 users. In addition, each user may print to more than one printer depending on what they want to print; some printers may be color while others are black and white only. This example can be shown as such:

 

Users (User_ID, First_Name, Last_Name, Phone)

PrinterUserLink (User_ID, PrinterID)

Foreign key: User_ID to Users table

Foreign key: PrinterID to Printers table

Printers (PrinterID, Brand, Model, Location)
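A minimal sketch of this link table using Python's sqlite3 module (the sample users and printers are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

conn.execute("CREATE TABLE Users (User_ID INTEGER PRIMARY KEY, "
             "First_Name TEXT, Last_Name TEXT, Phone TEXT)")
conn.execute("CREATE TABLE Printers (PrinterID INTEGER PRIMARY KEY, "
             "Brand TEXT, Model TEXT, Location TEXT)")
# The link table turns one many-to-many into two one-to-many relationships.
conn.execute("CREATE TABLE PrinterUserLink ("
             "User_ID INTEGER REFERENCES Users(User_ID), "
             "PrinterID INTEGER REFERENCES Printers(PrinterID), "
             "PRIMARY KEY (User_ID, PrinterID))")

conn.executemany("INSERT INTO Users VALUES (?,?,?,?)",
                 [(1, "Ann", "Lee", "x100"), (2, "Bob", "Ray", "x101")])
conn.executemany("INSERT INTO Printers VALUES (?,?,?,?)",
                 [(10, "HP", "4250", "2nd floor"), (11, "Xerox", "C60", "3rd floor")])
# Ann prints to both printers; Bob shares printer 10 with Ann.
conn.executemany("INSERT INTO PrinterUserLink VALUES (?,?)",
                 [(1, 10), (1, 11), (2, 10)])

shared = conn.execute("SELECT COUNT(*) FROM PrinterUserLink "
                      "WHERE PrinterID = 10").fetchone()[0]
print(shared)  # 2 users share printer 10
```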

 

References:

Adamski, J., & Finnegan, K. (2008). Microsoft Office Access 2007. Boston: Course Technology.

 

Project Management Risks

There are numerous risks with the potential to cause a project to be looked upon as a failure. The primary categories of these risks, according to the PMBOK, are Technical, External, Organizational, and Project Management (Project Management Institute, 2008, p. 280). Within each of these primary categories reside many subcategories. Under the Technical risk category we have the subtypes of Requirements; Technology; Complexity and Interfaces; Performance and Reliability; and Quality. Within the External risk category we find the subtypes of Subcontractors, Regulatory, Market, Customer, and Weather. Under the Organizational category we find subtypes such as Project Dependencies, Resources, Funding, and Prioritization. Lastly, within the Project Management category we find the subtypes of Estimating, Planning, Controlling, and Communication (Project Management Institute, 2008, p. 280). There are other categories and subcategories you may find in other Risk Breakdown Structures (RBS); however, this is the list that the Project Management Institute has deemed noteworthy for general project management risks. Creating an RBS is almost a necessity for any project, as it reminds those involved which risk identification areas to think about (Ritter, 2008). Now let's describe these in a little more detail.

One of the primary risk categories is technical risk. Again, these include risks such as the technology itself not performing the way we expected. Another example would be technology required to complete a project, such as a router needed to set up a network, arriving dead on arrival at the new site. Yet another example within this category would be a system not being able to meet all the requirements expected and needed to complete the project.

External factors may, and should, also be viewed as risks. These include factors such as subcontractors not showing up for work, or not having the real experience and knowledge needed to fulfill the requirements of the project in a timely manner. Another example, which I personally ran into two weeks ago during a site move in NYC, is that NYC has many property codes that need to be signed off by large property managers who move very slowly. Working with them in a timely manner often requires knowing the right people who can get the job done. Another factor in this category that we could have faced during the move is weather: it is usually not a good idea to move computer equipment between sites in the rain. Luckily, it did not rain until the day after the move.

The next category we will discuss in more depth is organizational risk. An example of an organizational risk that many of us encounter is needing a deliverable from one project as input to the current project. If that project is running behind its timeline, it will affect your project's timeline unless properly accounted for. Another example in this category is funding. A manager or CEO may decide, due to the poor financial status of the company, that your project now receives only 70% of the projected funds, yet you are still required to produce the same quality results. Most often, accomplishing this will require taking more time.

The last main category that the PMBOK discusses is project management itself. This category is a big one: if it is not properly accounted for, the project may be doomed from the start. Within this category we find tasks such as estimating budgets, time, materials, and so on. We also have tasks such as controlling and monitoring the status and scope of the project. Proper planning of these and other tasks, as well as proper communication with the project team members and stakeholders, are some of the most critical subcategories within this category.

These risks need to be managed by first accounting for them. Second, after you have accounted for the risks, you need to rank them: first by how likely they are to occur, and second by how much impact they will have on the project if they do occur. Then we create contingency plans for those risks that are most likely to occur and most likely to cause a great deal of trouble for the project.
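This ranking is often done by scoring each risk as probability times impact. A small illustrative sketch follows; the risk names and numbers are invented for this example and are not from any actual risk register.

```python
# Each risk: (description, probability of occurring, impact on a 1-5 scale).
risks = [
    ("Router dead on arrival", 0.2, 4),
    ("Subcontractor delay",    0.5, 3),
    ("Funding cut to 70%",     0.1, 5),
    ("Rain on move day",       0.3, 2),
]

# Rank by probability x impact; the top entries get contingency plans first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, prob, impact in ranked:
    print(f"{name}: {prob * impact:.1f}")
```

With these sample numbers, "Subcontractor delay" scores highest (1.5) and would be the first risk to receive a contingency plan.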

There are many methods for dealing with risk. The first is to avoid the risk, which typically requires changing the project plan. If we cannot fully avoid a risk, we may wish at minimum to mitigate it so that it is less likely to occur or less likely to fully disrupt the project. Another option is to transfer the risk to an outside organization via consultants. In my opinion this is not always a good option, as the end result may still be that you are without a completed project, and you no longer have control over those consultants. The last option is not to avoid, mitigate, or transfer, but simply to accept the risk and plan for it in case it happens (Project Management Institute, 2008, p. 304).

Project Management Institute. (2008). A Guide To The Project Management Body of Knowledge. Newtown Square: Project Management Institute, Inc.

Ritter, D. (2008, October 11). Risk Breakdown Structure (RBS). Retrieved August 22, 2010, from Project Management: http://certifedpmp.wordpress.com/2008/10/11/risk-breakdown-structure-rbs/

Cost Controlling Measures and Cost Forecasting

Cost Controlling Measures

A project can be made more successful with the help of typical project controlling measures.  According to Crump (2010), these measures need to be incorporated into the planning and execution of the project.  Tracking, reviewing and regulating the progress and performance of the project are some standard processes that are included in the “monitoring and controlling” process group.  These processes will identify necessary changes and initiate those changes in the project plan.

The performance of the project is monitored and assessed frequently during the monitoring and controlling processes, which can help in identifying issues in advance.  The ongoing regular monitoring provides the team and project manager with data concerning the status of the project and identifies areas where further data is required.  Besides monitoring and controlling the work being completed during each project phase, the “monitoring and controlling” process group monitors and controls the entire project.

Each project is required to be checked thoroughly at recurring points all through the lifecycle to make sure that the necessary technical performance takes place on schedule and within the budget that was approved.  Measures that aid in controlling cost, schedule and quality are developed by project managers to gauge the success of a project.  Formal and informal reports are assembled from the results of these measurements.  The project team and senior management use the reports to spot differences between planned performance and actual performance and identify the causes. Actions are then taken to limit the effects of these variations on the project’s resources, schedule or budget.

All projects consist of three key factors: time (schedule), cost, and quality.  Time is set by the start and end dates, cost is bounded by the project's budget, and quality is prescribed by the client.  According to Dube (1999), project managers use controlling measures to keep the project within these limits, which are known as the "triangle of objectives".

Project schedules can be controlled through the use of effective tools such as Gantt charts which provide a clear picture of the project’s current state.  These charts are easily understood despite the huge amounts of information that they contain.  Gantt charts require updating frequently but are maintained easily as long as major revisions to the schedule are not made or task requirements are not changed.  The use of milestone reporting is another way of controlling a project’s schedule.  A milestone is a key event that leads to achieving an objective of the project.  The life of the project is provided in a sequential list of critical events.  Control measures for project schedules are developed during the planning phase.  These schedules are monitored and updated accordingly during the execution phase of the project.

Concerning controlling measures, many managers consider the associated project costs to be more vital than the schedule, notes Dube (1999).  Senior management requires appropriate cost status reports.  Throughout the planning stage, these reports take the form of estimates for project cost, which are added to the initial budget of the project.  When the project begins, the data can be forwarded as part of cost schedule or cost control reports.  The team can develop cost-minimizing reports if it considers crashing or accelerating the project.

Cost scheduling is another controlling measure.  During the planning phase of the project, the planned costs are matched to the schedule of the project through cost reports.  Financial officers are permitted to construct a financial plan so that appropriate funds can be made available to support the requirements of the project (Dube, 1999).

During the execution phase, cost control compares scheduled expenses to actual expenses using cost control reports.  The report’s function is to predict or spot probable cost overruns.  When a cost overrun is expected, a request for additional funds is sent as soon as possible to senior management.  If obtaining the additional funding cannot be done quickly and the cost overrun is greater than the financial tolerance of the project, additional financial obligations should not be made until a comprehensive project cost analysis is completed.  Dube (1999) mentions that even if this shortage of financial commitment may look somewhat harsh, it is the best way since project bankruptcy is prevented which would make completing the project impossible.

Another controlling measure used during planning or execution is cost minimizing.  Cost minimizing is shortening the project's life with the smallest possible additional expense.  The project team determines whether a project should be crashed or accelerated by first considering the cost of extra equipment, manpower, and overtime.  Dube (1999) states that the project team then compares the cost of getting the work done on the existing schedule with the "crashing" cost.

Throughout the project, quality control is also performed.  These controlling measures monitor and record the execution results of quality activities in assessing performance and recommending required changes (Kerzner, 2009).

Overall, due to the complexity and cost of projects, organizations need a system of control that can efficiently measure the progress of the project.  Selecting effective controlling measures involves contributions from the senior management and project team.  The organization’s goals should be reflected by these measures.  Status reports, which detail the cost, schedule, and project activity’s performance, are pulled together from the result of these measures.  Detailed scheduling and cost control techniques will assist organizations to analyze potential delays or cost overruns during execution to allow for corrections before it is too late.

Forecasting Techniques

In order to plan the budget for a project, many costs are not known in advance.  Forecasting is used in order to estimate what these costs will be.  Forecasts can never be completely accurate, but can provide a good idea of what to expect if the appropriate techniques are used.  Some forecasting techniques are less accurate than others but are used to start off planning.  When more detailed data becomes available, more accurate forecasting techniques can be used.  What follows is a survey of forecasting techniques and their benefits and limitations.  The techniques are divided into four major areas: ad hoc, numerical/statistical, simulation, and earned value analysis.

Ad Hoc Forecasting Techniques

Ad hoc forecasting techniques are simple to implement and available without detailed data.  Their limitation is that the estimates are usually accurate only to within plus or minus fifteen to thirty-five percent of the project scope.  They can be biased and subjective, making them suspect and less reliable than techniques that use hard data (Kerzner, 2009).

There are several ad hoc forecasting techniques.  The following gives a short description of a few of these techniques:

  1. Rough Order of Magnitude (ROM) – A very rough estimate using techniques such as expert judgment, analogous and three point estimates.
  2. Expert Judgment – Using experience to come up with an estimate.
  3. Rule of Thumb – Personal rules from experience are used to compare with estimates.  If the difference between rule of thumb and estimate is large, then the estimate and the underlying assumptions need to be re-examined (Pandian, 2003).  They provide a sanity check.
  4. Analogous – Using projects of a similar scope and capacity to come up with a rough estimate.
  5. Three point estimates – Making a most likely, optimistic, and pessimistic estimate, then taking a weighted average: multiply the most likely estimate by four, add the optimistic and pessimistic estimates, and divide the total by six.  Using this equation helps to remove bias from the most likely estimate (Pandian, 2003).
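The three-point calculation in item 5 is the familiar PERT weighted average. As a quick sketch (the sample durations are invented):

```python
def three_point(optimistic, most_likely, pessimistic):
    # Weight the most likely estimate by 4, then average over 6.
    return (optimistic + 4 * most_likely + pessimistic) / 6

# e.g. a task estimated at 10 days best case, 14 likely, 30 worst case:
print(three_point(10, 14, 30))  # 16.0
```

The pessimistic tail pulls the average above the most likely value, which is exactly the bias correction the technique is meant to provide.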

Numerical/Statistical Forecasting Techniques

Numerical and statistical forecasting techniques use hard data, usually historical data, for making a forecast.  Although these techniques provide some of the most accurate forecasts, they have problems as well.  These forecasting techniques require detailed information and they can be expensive and time consuming.  Oftentimes relationships between different variables have to be determined (Kerzner, 2009).  If an important relationship is not identified or if the details of the relationship are incorrect, then the forecast will be less accurate.

There are several numerical and statistical forecasting techniques.  The following gives a short description of a few of these techniques:

  1. Moving average – Taking the value of a chosen number of previous periods to use as an estimate for the next period (AIU Online, 2010).
  2. Weighted average – The same as moving average except giving greater weight to more recent periods (AIU Online, 2010).
  3. Linear regression (Least Squares) – A mathematical algorithm fits a line to past data showing its direction.  The line can then be carried forward to create a forecast (AIU Online, 2010).
  4. Exponential Smoothing – Combines the most recent actual figure with the previous period's forecast in order to forecast an upcoming period (AIU Online, 2010).
  5. Regression analysis – Equations that are used to analyze the relationship between a dependent variable and one or more independent variables.  One of the independent variables can be changed to see how it affects the dependent variable.  For example, how does the cost relate to the size of the house if we change from using laminate to wood flooring?  Using only one independent variable is not very realistic because reality is not two-dimensional, so multiple independent variables are desirable.
  6. Time series – Making use of the fact that recent history is a good predictor of the near future (AIU Online, 2010).
  7. Trend analysis – Finding trends in historic data that can be used to make forecasts.

Simulation Forecasting Techniques

Simulation forecasting techniques allow users to change various conditions and factors about a specific event or situation to arrive at the prediction of future conditions.  These simulation techniques include using simple to complex model systems.  The main limitation of simulations is if the initial assumptions are incorrect, then they will be magnified as the simulation is run (Walonick, 1993).

There are several simulation forecasting techniques.  The following gives a short description of a few of these techniques:

  1. Monte Carlo simulation – Also known as probability simulation.  A simulation is run over and over again until the probability that any particular outcome will occur can be determined (What is Monte Carlo, n.d.).
  2. Mathematical Analog – Using an equation to simulate a project’s costs (Walonick, 1993).
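A mathematical analog in Walonick's sense is simply an equation standing in for the project's cost behavior. The sketch below is hypothetical: the flooring scenario, the fixed cost, and the per-square-foot rates are all illustrative assumptions, echoing the flooring example from the regression discussion above.

```python
def flooring_cost(square_feet, rate_per_sqft, fixed_cost=500.0):
    """Simulate total cost from an assumed linear cost equation:
    total = fixed base cost + rate * size."""
    return fixed_cost + rate_per_sqft * square_feet

# Changing one input shows its effect on the simulated cost:
laminate = flooring_cost(1200, rate_per_sqft=3.0)   # 500 + 3600  = 4100.0
hardwood = flooring_cost(1200, rate_per_sqft=8.0)   # 500 + 9600  = 10100.0
print(laminate, hardwood)
```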

Earned Value Analysis Forecasting Techniques

In earned value analysis, earned value (EV) is a number equal to what was budgeted for the work that has already been performed.  EV is used in conjunction with other values, such as planned value, actual cost, and the project baseline, to determine whether the project is on budget and whether corrective action needs to be taken (Kerzner, 2009).  The main limitation of earned value analysis is that the budget baseline must be close to reality; otherwise the analysis will always show the project off track.

Individual Forecasting Techniques

Different classes of forecasting techniques are used as a project progresses from planning all the way to closing.  Ad hoc methods are used early in a project, before detailed data is available.  As a project moves through planning, numerical/statistical and simulation forecasting techniques produce more accurate forecasts as sufficiently detailed data becomes available.  During the execution phase, earned value analysis is used to keep a project on track, and it is used again during the closing phase to provide performance reviews and variance analysis that improve future projects.

An example from each class of forecasting follows with the details for implementing that technique.  Rough order of magnitude is used to represent ad hoc methods.  For the numerical/statistical class exponential smoothing is detailed.   Monte Carlo simulation is used to represent simulation forecasting techniques.  Finally, how to use earned value analysis is covered.

Rough Order of Magnitude

Rough order of magnitude, or ROM, is just a fancy way of saying “ballpark estimate” (Business Dictionary, 2010).  Project managers might not have hard data to produce an accurate budget during the initiation phase of a project.  Therefore, rough estimates must be taken in order to determine roughly how much money is needed to complete the project.  Estimates can be fairly accurate if the information gathered for the estimate came from previous projects.  ROM is a tool used to estimate future application development lifecycle costs based on past projects (IBM, 2006).

A rough order of magnitude can be calculated using a variety of tools and techniques.  The following give a description of a few tools and techniques:

  1. Expert judgment – A well-seasoned project manager will have plenty of experience working on various projects and can provide cost information based on prior similar projects.  This cost estimate will be more reliable than one from someone who has not worked on many projects.
  2. Analogous estimating – If a project to build a mile-long road costs $10,000, chances are a mile-long road will cost about the same in another area.  This method of estimating takes information from past projects, such as scope, cost, budget, and duration, to calculate how much the current project will cost (PMBOK, 2008).
  3. Three-point estimates – This method of calculating a rough estimate originated with the PERT program (PMBOK, 2008).  It takes the most likely cost, the optimistic cost (best-case scenario), and the pessimistic cost (worst-case scenario) and calculates the expected cost as (optimistic + 4 × most likely + pessimistic) ÷ 6.  This gives project managers both a single expected cost and a range of estimates to work with.
  4. Vendor bid analysis – All projects will need resources that should be known by the project manager.  The project manager can then get a ballpark estimate of the project based on what the selected vendor will charge for the resources needed.
  5. Project management estimating software – Many companies are concerned with cost and have developed software to perform calculations and arrive at the most cost-effective solutions.  So when time is short and a cost estimate is needed, find the best cost-estimating software.
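The three-point technique above reduces to a one-line formula. The sketch below applies the standard PERT weighting; the dollar figures are hypothetical.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Expected cost: E = (O + 4M + P) / 6, weighting the most likely case."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical task: best case $8,000, most likely $10,000, worst case $18,000.
print(pert_estimate(8000, 10000, 18000))   # (8000 + 40000 + 18000) / 6 = 11000.0
```

Notice that the expected cost ($11,000) lands above the most likely value because the pessimistic tail is wider than the optimistic one, which is the behavior the weighting is designed to capture.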

A rough order of magnitude estimate is a tool that is used by project managers to get an idea of how much the project will cost.  Prior experience and historical data will play a role in ROM.  The more experience a project manager has and the more data available from similar projects, the better the chances of creating an accurate estimate.  Estimates with greater accuracy using hard data are used as more information becomes available, but ROM provides a starting point for project managers to begin planning.

Exponential Smoothing

Within the category of numerical/statistical forecasting, we find many techniques such as weighted smoothing, linear regression, curve fitting, decomposition and exponential smoothing. All of these methods look at the trends of the past and then use mathematical techniques to attempt to predict the future. In order to perform these mathematical techniques, we commonly assume that whatever caused the trend of the past will still be at play with a similar strength in the future. This is typically a good assumption when forecasting the near future but as the forecasted time increases, it becomes less true (Walonick, 1993).

In order to fully explain the statistical forecasting technique of exponential smoothing, we must first discuss time series.  A time series is simply any set of data collected over a span of time; whenever a person uses the term “trend,” they are discussing a time series.  A raw time series chart is often a spiky, nearly unreadable scatter, so to see the true trends within the data we must “smooth” the chart in some way.  To complete this task we often use techniques such as floating averages and exponential smoothing; in short, these techniques attempt to fill in the values that could lie between the data points that were collected.  Techniques other than exponential smoothing have a few downsides: they do not always take into account the first and last points in the data, the calculation is tedious and not very easy to change, and floating averages make it difficult to truly extract a trend.  Exponential smoothing addresses each of these limitations (Janert, 2006).

Exponential smoothing can be found in three different forms.  The first is referred to as single; the second as double, or Holt; and the third is commonly known as triple, or Holt-Winters, exponential smoothing.  The first method finds an approximation for the underlying values among the random data.  The second method also allows a linear trend to be extracted.  The third form additionally takes into account regularly recurring changes within the data, such as common seasonal trends (Janert, 2006).  In other words, if the client regularly uses such data to predict the project’s funding needs over time and has seasonal data that could adjust the predictions, then the third form of exponential smoothing may be the best technique for the analysis.  The basic equations for this method are given by (Engineering Statistics Handbook, 2010):

S_t = α(y_t / I_(t−L)) + (1 − α)(S_(t−1) + b_(t−1))   [overall smoothing]
b_t = γ(S_t − S_(t−1)) + (1 − γ)b_(t−1)   [trend smoothing]
I_t = β(y_t / S_t) + (1 − β)I_(t−L)   [seasonal smoothing]
F_(t+m) = (S_t + m·b_t)I_(t−L+m)   [forecast]

where L is the length of a season and α, β, and γ are smoothing constants between 0 and 1.

  • y is the observation
  • S is the smoothed observation
  • b is the trend factor
  • I is the seasonal index
  • F is the forecast at m periods ahead
  • t is an index denoting a time period

Exponential smoothing is also easy to adjust, and as such it is easy to produce many similar forecasts within a short time frame (Statistical Forecasting, 2006).  In conclusion, exponential smoothing creates forecasts by combining the actual data with the forecasted data of the prior period, and it can be changed quickly on the fly.  This forecasting technique should be a common candidate when attempting to forecast inventory control and material scheduling.
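The single (simplest) form can be sketched directly from its definition: each smoothed value combines the most recent actual with the previous smoothed value. The smoothing constant alpha and the demand figures below are illustrative assumptions.

```python
def exponential_smoothing(observations, alpha=0.5):
    """Return the smoothed series; the final value serves as the forecast
    for the next period."""
    smoothed = [observations[0]]          # seed with the first observation
    for y in observations[1:]:
        # new value = alpha * actual + (1 - alpha) * previous smoothed value
        smoothed.append(alpha * y + (1 - alpha) * smoothed[-1])
    return smoothed

demand = [100, 120, 110, 130]             # hypothetical spiky data
print(exponential_smoothing(demand))      # [100, 110.0, 110.0, 120.0]
```

Raising alpha toward 1 makes the forecast track recent observations more closely; lowering it smooths more aggressively, which is why the technique is easy to adjust on the fly.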

Monte Carlo Simulation

When forecasting and attempting to predict the future needs of the project, assumptions need to be made, such as the cost of the project over time or the number of man-hours needed for a task.  By utilizing previously collected data and introducing your own experience into the estimate, you may be able to increase the accuracy of that estimate.  However, there are other variables that have not been taken into account, including risk.  In order to accurately incorporate this risk and obtain a good probability that your estimates are as accurate as possible, the project manager must estimate based on a range of possible outcomes.  By utilizing this range, the PM can develop a more complete view of what may happen in the future.  When a forecast is built from a range of estimates, the total estimate will also be a range.  For instance, if for each task within the project we produce minimum, maximum, and most likely estimates of time or budgetary needs, the totals for the project will likewise include minimum, maximum, and most likely estimates (RiskAMP.com, 2010).

The Monte Carlo method was developed for the government during the creation of atomic weapons.  John von Neumann found that if many random samples are compiled together, they create a picture that is not random at all and is quite accurate (Vargas, 2004).  The form of risk that Monte Carlo simulation attempts to make clear is uncertainty within the project.  In short, the Monte Carlo forecasting technique tells the project manager how likely a given result is for a project’s timeline, budget, and so on.  The method works like this.  First, the project manager estimates the minimum time frame for a task within the project.  Then, based on experience, the PM estimates the maximum time the task could take, and finally the most likely time it will take.  He or she repeats this process for all tasks within the project, then adds up the minimums to get the minimum time for the project, and does the same for the maximum and most likely time frames.  Now we know the minimum, maximum, and most likely completion times for the full project.  Here is where the Monte Carlo simulation really comes into play: a computer repeats the estimating process, sampling at random within each task’s range, several hundred times or more.  As the number of iterations increases, we become more confident that the resulting distribution of outcomes is accurate, and we can state our estimates with relative certainty (RiskAMP.com, 2010).
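The process described above can be sketched with the standard library alone. This is a minimal illustration under assumptions: three hypothetical tasks with (minimum, most likely, maximum) durations in days, and a triangular distribution as the sampling rule for each task's range.

```python
import random

tasks = [(3, 5, 10), (2, 4, 6), (5, 8, 14)]   # (min, likely, max) days per task

def simulate_project(tasks, iterations=10000, seed=42):
    """Run the project many times, sampling each task's duration at random
    within its range, and return the list of total durations."""
    random.seed(seed)                          # reproducible runs
    totals = []
    for _ in range(iterations):
        # triangular(low, high, mode) samples within the range,
        # peaking at the most likely value
        totals.append(sum(random.triangular(lo, hi, likely)
                          for lo, likely, hi in tasks))
    return totals

totals = sorted(simulate_project(tasks))
print(totals[len(totals) // 2])                # median project duration
print(totals[int(0.9 * len(totals))])          # 90% of runs finish by this time
```

The useful output is not a single number but the distribution: reading off, say, the 90th percentile lets the PM quote a duration with a stated level of confidence rather than a bare guess.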

Earned Value Analysis

Earned value (EV) analysis is a time series analysis that is used for forecasting project performance (Haughey, 2010).  EV analysis is used to determine whether the project is going according to plan or getting off track.  In addition, EV analysis gives project managers an indication of how much money and time have been spent in comparison to where the project is in terms of progress.  This is particularly important when it comes to preventing the project from running out of time or money.

In order to create an earned value analysis model, the project manager must first create a work breakdown structure and assign a budget to each work package down to the lowest level (Haughey, 2010).  The project manager must then create a schedule showing how long it will take to complete the project.  As each section of work is completed, its value is earned and recorded against the schedule, and the work completed is compared to the schedule that was created.  At this point, the project manager can determine whether the project is on target or deviating from the plan, and can make the necessary decisions early enough to put the project back on track before it is too late.

Earned value analysis requires a little more than just comparing a task to the schedule.  There are mathematical equations to help project managers complete an earned value analysis.  Rupen Sharma (2010) states that the following should be used for an earned value analysis:

  1. Estimate at completion (EAC) – determines the total cost of the project after completion based on the current progress of the project.
  2. Estimate to complete (ETC) – determines how much money will be needed to complete the project based on the current progress of the project.
  3. Variance at completion (VAC) – compares the total cost at the end of the project to the project budget.
  4. To-complete performance index (TCPI) based on budget at completion (BAC) – determines the cost performance that must be achieved on the remaining work in order to meet the original budget.
  5. TCPI based on EAC – determines the cost performance that must be achieved on the remaining work in order to meet the revised estimate at completion.

Now to put all this into a scenario to make sense of it all, let’s say there is a company developing a project known as Project X.  The planned budget for Project X is $10,000 a month for eight months.  During some analysis, the project manager notices that the project is 30% complete and it has consumed $40,000 after just two months.  This does not automatically mean the project is spending too much since the first few months may require more expenditures than the final months.  The project manager must then perform a few calculations to find the planned value and the earned value by first finding the BAC, actual cost (AC), planned completion, and actual completion so it can be determined if the project is on budget or not.

The BAC will simply be the planned budget of $10,000 per month for 8 months which equals $80,000.  The actual cost so far in the project after two months is $40,000.  The planned completion is the months completed so far divided by the total planned months.  So this will be 2/8 = 0.25 or 25% of the project completed.  The actual project completed so far is 30%.  With all this information, the project manager can then find the planned value, earned value, and cost performance index.

Planned value (after two months) = 25% * $80,000 = $20,000

Earned value (after two months) = 30% * $80,000 = $24,000

Cost performance index (CPI) = earned value / actual cost = $24,000/$40,000 = 0.6

If the analysis is stopped at this point, the project manager can already see that the project will be over budget, since the CPI is below 1.  To get precise numbers on just how far over budget the project is, the project manager will then have to find the EAC, ETC, and VAC at a minimum.

Estimate at completion (EAC) – $80,000/0.6 = $133,333.  This amount represents what the project will cost at the end.

Estimate to complete (ETC) – $133,333 – $40,000 = $93,333.  Since $40,000 has been spent so far on the project, it will require another $93,333 to complete.

Variance at completion (VAC) – $80,000 – $133,333 = -$53,333.  This is the difference between the planned cost of the project and its projected cost at completion; the negative value indicates an overrun.
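The Project X figures above fit in a few lines of code. This sketch computes CPI as earned value divided by actual cost (EV / AC) and derives EAC, ETC, and VAC from it.

```python
BAC = 80000.0          # budget at completion: $10,000/month for 8 months
AC  = 40000.0          # actual cost after two months
PV  = 0.25 * BAC       # planned value: 2 of 8 months complete -> $20,000
EV  = 0.30 * BAC       # earned value: 30% of the work done    -> $24,000

CPI = EV / AC          # 24000 / 40000 = 0.6 (under 1 means over budget)
EAC = BAC / CPI        # projected total cost, about $133,333
ETC = EAC - AC         # money still needed, about $93,333
VAC = BAC - EAC        # negative means a projected overrun, about -$53,333

print(CPI, round(EAC), round(ETC), round(VAC))
```

Encoding the formulas this way makes the chain of dependencies explicit: every forecast figure flows from the single efficiency ratio CPI, which is why a realistic budget baseline matters so much.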

The earned value analysis can be a great tool for project managers to monitor the progress of a project.  It will require some mathematical calculations, but when used properly earned value analysis can spot potential problems with the project in the early stages.  This gives project managers enough time to react to the situation and make necessary course corrections.

Conclusion

Many organizations do not implement proper cost controls on projects to catch when project costs are deviating from the plan so corrective action can be taken.  Forecasting is also not properly used to avoid common problems with foreseeable costs.  As organizations properly implement cost controls and accurately forecast project costs throughout planning and execution, projects will stay on budget and be able to make corrections when necessary.

References

AIU Online. (2010). IPM631: Unit 4: Forecasting for decision making [Multimedia presentation]. Retrieved August 9, 2010, from AIU Online Virtual Campus. Technical Project Leadership: IPM631- 1003B-01 website.

Business Dictionary. (2010). Rough order of magnitude. Retrieved August 13, 2010, from http://www.businessdictionary.com/definition/rough-order-of-magnitude.html

Crump, K. (2010). IT Project Controlling Measures. Retrieved August 10, 2010, from http://www.pcdocsac.com/blog/?page_id=24

Dube, L. (1999). Establishing project control: Schedule, cost, and quality. Retrieved August 10, 2010, from http://www.allbusiness.com/human-resources/workforce-management/342527-1.html

Engineering Statistics Handbook. (2010). Triple exponential smoothing. Retrieved August 14, 2010, from http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc435.htm

Haughey, D. (2010). What is earned value? Retrieved August 12, 2010, from http://www.projectsmart.co.uk/what-is-earned-value.html

IBM. (2006). Method and application for creating rough order of magnitude (ROM) cost estimates and summarizing the results by application, lifecycle phase, and project months. Retrieved August 13, 2010, from http://priorartdatabase.com/IPCOM/000138907

Janert, P. (2006). Exponential smoothing. Retrieved August 14, 2010, from http://www.toyproblems.org/probs/p02/

Kerzner, H. (2009). Project management: A systems approach to planning, scheduling, and controlling (10th ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Pandian, R. (2003). Software metrics: A guide to planning, analysis, and application. Boca Raton, FL: CRC Press.

Project Management Institute. (2008). A guide to the project management body of knowledge: (PMBOK guide) (4th ed.). Newtown Square, PA: Project Management Institute, Inc.

RiskAMP.com. (2010). What is Monte Carlo simulation? Retrieved August 14, 2010, from http://www.riskamp.com/files/RiskAMP%20-%20Monte%20Carlo%20Simulation.pdf

Sharma, R. (2010). Examples of using earned value analysis for calculating project performance. Retrieved August 12, 2010, from http://www.brighthub.com/office/project-management/articles/57945.aspx

Statistical Forecasting. (2006). Statistical forecasting methods. Retrieved August 14, 2010, from http://www.statisticalforecasting.com/forecasting-methods.php

Vargas, R. (2004). Earned value probabilistic forecasting using Monte Carlo simulation. Retrieved August 14, 2010, from http://www.ricardo-vargas.com/articles/earnedvaluemontecarlo/

Walonick, D. (1993). An overview of forecasting methodology. Retrieved August 14, 2010, from http://www.statpac.com/research-papers/forecasting.htm

What is Monte Carlo simulation? (n.d.). Retrieved August 13, 2010, from http://www.riskamp.com/files/RiskAMP%20-%20Monte%20Carlo%20Simulation.pdf