Archive for December, 2007
Posted by cadsmith on December 27, 2007
Reviewed “Digital Communications Test and Measurement: High-Speed Physical Layer Characterization”, edited by Derickson and Muller, 2007, 910pp. The actual procedures for these tests can be described in test plans and results, or in common documentation modules that both can reference. It may be possible to customize the analysis software configuration and reporting for particular applications. Actual instrument model and serial numbers would need to be listed, since there may be differences in calibration or probes. Being able to repeat the results is useful for debugging or regression tests.
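As a sketch of the record-keeping this implies, each result could carry the setup needed to repeat it; the field names below are assumptions for illustration, not a format from the book:

```python
from dataclasses import dataclass, field

@dataclass
class TestRecord:
    """Ties a test result to the exact setup needed to repeat it;
    fields are illustrative assumptions, not a standard from the book."""
    test_plan_id: str
    doc_modules: list                 # shared documentation modules referenced
    instruments: dict                 # instrument model -> serial number
    probe_ids: list                   # probes, since they affect calibration
    result: str

rec = TestRecord(
    test_plan_id="PHY-EYE-001",
    doc_modules=["common/eye-diagram-procedure"],
    instruments={"DCA-86100C": "SN1234"},
    probe_ids=["P7313-SN42"],
    result="pass",
)
```

A regression run can then be checked against the same instrument serials before comparing results.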
Posted by cadsmith on December 22, 2007
Review of Preface, 2007. Since the rush was on to turn in reports prior to the holiday break, it seemed interesting to publish some results on Facebook. (The site currently requires login to read notes.) It is a form of living document. This caused some reorganization on other sites to get the tags lined up. Since a dozen sites are mashed together, the usual indexing tools are not applicable. Manually editing a list is good for a snapshot, but it goes out of date as soon as any content changes. Settled on using a tag cloud. This does not yet include every proper name or keyword, so a search box was added to cover the portal sites. As online publishing progresses, more of the content editing, style and indexing will probably be augmented. With respect to testing, specifications are based on proven formats and standards as well as R&D, so some of this may be useful for organizations and open-source communities. Another step would then be to organize the notes and reviews into a Facebook digest.
Posted by cadsmith on December 21, 2007
In Super Crunchers, Ayres, 2007, 260pp, number crunching has advanced enough in size, speed and scale to surpass intuition for real-world decision-making in many cases. This has beneficial effects, since the accuracy of data mining, analysis and predictions increases when the machine is more capable than humans of determining causal weights in equations. Kryder’s law states that the storage capacity of drives doubles every two years. Large stores of medical records aggregate data for real-time epidemiology. Risk assessment and management are implemented through information integration. Randomized testing gives access to new data beyond regression, e.g. alternating between a couple of ads, with the more popular one shown more often as time goes on. People are rewarded for certain types of behavior, e.g. the most profitable customers of a firm, or discouraged from other types, as in law enforcement. Intuition complements statistics, since people are good at hypothesizing and excluding potential factors; determining which types of data or factors are missing often requires intuition. Both approaches can also be combined, e.g. systems knowing when certain types of data are best handled by humans, or experts basing decisions on the results of statistical tools. This may also have undesirable effects. “Data huggers” are concerned about privacy. “Smart dust” or other forms of nanotechnology will be used for ubiquitous surveillance. Sellers have the advantage of systems that second-guess buyers to maximize profits. Front-line workers lose discretionary choices. Super crunching will shift expertise from traditional practices to predictability. Author site. The author emphasizes that these systems can also conduct experiments through randomized tests, where selections are made by chance from a set of options. The effects can be measured, or additional steps can be applied to the sample population and compared against a matched distribution to determine the results.
For example, a web page might use a different icon style or placement in each use to determine which yields the highest likelihood of a desired outcome. This might challenge the axioms of usability experts. There can be more variables than a person would usually deal with and many observations since the data analysis is automated and instantaneous. Human creativity still has a role to play in the choice of which options to provide.
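A minimal sketch of that kind of randomized test, using a simple explore/exploit rule (an assumed scheme for illustration; the book does not prescribe this exact method):

```python
import random

class AdTest:
    """Randomized test between ad variants: occasionally pick one at
    random (explore), otherwise show the current best performer
    (exploit), so the more popular variant is shown more over time."""
    def __init__(self, variants=("A", "B")):
        self.shows = {v: 0 for v in variants}
        self.clicks = {v: 0 for v in variants}

    def rate(self, v):
        # observed click-through rate for a variant
        return self.clicks[v] / self.shows[v] if self.shows[v] else 0.0

    def choose(self, explore=0.2):
        # explore at random until every variant has data, and a fraction
        # of the time thereafter; otherwise exploit the current leader
        if random.random() < explore or not all(self.shows.values()):
            return random.choice(list(self.shows))
        return max(self.shows, key=self.rate)

    def record(self, v, clicked):
        self.shows[v] += 1
        self.clicks[v] += int(clicked)
```

Feeding each page view through `choose()` and logging the outcome with `record()` automatically shifts traffic toward the variant with the higher observed rate.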
Posted by cadsmith on December 19, 2007
The Myths of Innovation, Berkun, 2007, 192pp, claims that learning to innovate involves dispelling the notion that there is a single approach that yields results. Research is compared to popular stories to locate myths. The author attempts to understand the conditions and constraints at the time of the innovation in order to derive how it occurred, though the way that people innovated is often not a matter of record, and it did not happen in a straight line. Good ideas seem plentiful, but there are secondary factors which prevent the best ones from taking hold. A lot of work precedes an epiphany, since defining a problem is not achieved in a single pass. Brainstorming deals with facts, ideas and solutions. Acceptance may be delayed by decades or more. Artifacts such as the wonders of the modern world are still being analyzed for clues. The individual methods of innovation vary, since each person had to get past the agreed-upon conventions of the time, or the inertia of the status quo, which is resistant to change or risk. Ideas from various sources are mixed, and it usually takes some collaboration for innovative results. Though it is unpredictable, there are measures of goodness, and acceleration is continuous. Human nature seems to differ from idealistic techno-evolutionism, since ideas are often rejected on feelings rather than merits. Innovation is not always necessary, since some traditions are not easily replaced. Managing it also has several challenges which are discussed. This book has an innovative bibliography which is ranked by significance as well as by order of appearance. Author site. Video for a large-company audience (emphasis may be different for other types of organizations, e.g. startups or crowdsourcing). With respect to testing, one of the factors identified for acceptance of innovation is trialability. For example, tea bags started as small samples of tea otherwise sold in large tins and eventually became a product in their own right.
Measurements are taken to determine whether the originating problem is solved, whether new problems are created, and for whom.
Posted by cadsmith on December 13, 2007
Catastrophe Disentanglement: Getting Software Projects Back on Track, Bennatan, 2006, 288pp, has courseware about rescuing software projects from failure using a ten-step process that takes two weeks to get back on track. Each chapter also has sections on what can go wrong (and what to do about it) and exercises. Alarms can be set to warn of project derailment so it can be halted, corrected and resumed. These use factors such as slipping schedule, budget, and software quality. An evaluation team is selected, the goals and software team are refined, the plan is revised based on risk analysis, an early-warning system is instituted, and the eventual results are reviewed. A successful case study is given for a software project for a wireless telephony control and maintenance center. Amazon.com has a table of contents. Author site. Discusses metrics used in development data collection, including defects, testing, integration and product performance. Problems are classified as minor, serious or critical. The problem-list alarm uses criteria such as growth of the problem list, rate of additions, length of the serious-problem list, objective estimates of the time needed to correct critical and serious problems, and whether the list itself is maintained correctly. The example of an early-warning system (EWS) refers to software life cycle management (SLIM). A common problem cited in post-project reviews was the lack of an independent test organization.
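A minimal sketch of how a problem-list alarm of that kind might be coded; the thresholds and field names are assumptions for illustration, not the book's figures:

```python
def problem_list_alarm(history, serious_limit=20, growth_limit=0.1):
    """Check a problem-list history for alarm conditions.

    `history` is a chronological list of snapshots, each a dict of
    counts per severity, e.g. {"minor": 30, "serious": 5, "critical": 1}.
    Returns a list of triggered alarm messages (empty if none).
    """
    latest = history[-1]
    alarms = []
    # criterion: the serious-problem list is too long
    if latest["serious"] > serious_limit:
        alarms.append("serious problem list too long")
    # criterion: the overall list is growing too fast since last snapshot
    if len(history) >= 2:
        prev_total = sum(history[-2].values())
        growth = (sum(latest.values()) - prev_total) / max(prev_total, 1)
        if growth > growth_limit:
            alarms.append("problem list growing too fast")
    return alarms
```

Run against each new snapshot, a non-empty return would be the signal to halt and correct the project before resuming.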
Posted by cadsmith on December 12, 2007
Software Requirement Patterns, Withall, 2007, 384pp, shows that requirements define the functions and capabilities that the system provides. These may be high-level or detailed. This book applies general types of them to commercial business software. Requirements may be created in a traditional manner, or in an agile, extreme or incremental process. Most requirements don’t fit a neat pattern; the patterns in this book may apply to as few as 15% of the requirements in a typical system, or up to about 65% for a self-contained business system. The analysis and format are discussed. A catalog of 8 types is itemized: fundamental, information, data entity, user function, performance, flexibility, access control and commercial, each with from 2 to 6 subtypes. Each pattern has a paragraph of content definitions, a field template, examples, any extra requirements, and particulars and considerations for development and testing. For example, an inter-system interface requirement pattern would have contents such as interface name, interface ID, the system at each end, interface purpose, interface owner, the standard defining the interface, and the technology to be used for the interface. Extra requirements would be individual types of interaction, throughput, scalability, extensibility, resilience and availability, traffic verification and recording, upgrading, security, documentation and third-party interface development. A method for generating new patterns is outlined. Also see the table of contents on Amazon.com. Author site. Wiki page. The site has two chapters for download.
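As an illustration of the field-template idea, the inter-system interface contents above could be captured in a record like this (field names paraphrase the book's list; the defaults and example values are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class InterSystemInterfaceRequirement:
    """One requirement written against the inter-system interface
    pattern's content list; field names paraphrase the items in the
    text and are not the book's literal template."""
    requirement_id: str
    interface_name: str
    end_systems: tuple                # the system at each end
    purpose: str
    owner: str
    defining_standard: str = ""
    technology: str = ""
    extra_requirements: list = field(default_factory=list)

req = InterSystemInterfaceRequirement(
    requirement_id="REQ-IF-001",
    interface_name="Order Feed",
    end_systems=("Order Entry", "Fulfillment"),
    purpose="Transfer confirmed orders for shipping",
    owner="Fulfillment team",
    defining_standard="in-house XML schema",
    technology="HTTPS",
    extra_requirements=["throughput: 100 orders/minute peak"],
)
```

Filling the template field by field is what makes the pattern checkable during development and testing.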
Posted by cadsmith on December 8, 2007
“Real-Time UML: Advances in the UML for Real-Time Systems”, Third Edition, Bruce Powel Douglass, 2004, 752pp, is about UML 2.0, which adds a real-time profile to capture scheduling and performance constraints. UML is used as a standard for system design descriptions. In this detailed textbook, each chapter has exercises and reference titles. It introduces the Rapid Object-Oriented Process for Embedded Systems (ROPES). Other contents include model-based projects and artifacts, structural and dynamic aspects of UML 2.0 and object-oriented software, real-time system requirements analysis, object definition, and architectural design. Mechanistic design patterns add design-level objects to solve some common problems, e.g. pointer-dereferencing issues handled by a pointer object. An example of how to represent a complex system in UML is shown for the Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance Architecture Framework (C4ISR-AF) used in defense products.
There are several testing types described, including unit, integration and validation. Use cases can be identified during several phases of the software lifecycle: system level during requirements analysis or validation testing, subsystem level during systems engineering or integration testing, expansion during architectural design, and refinement during object analysis or mechanistic design.
Posted by cadsmith on December 5, 2007
“Use Case Modeling”, Bittner and Spence, 2002, 368pp, shows how to describe what a system will do in an understandable fashion. Use cases are defined in addition to detailed, but less clear, requirements and features. They include procedures, actors and relationships and are often illustrated, e.g. by UML diagrams. Basic components include actors (anything that interacts with the system but is outside its control), use case interactions to achieve a goal, and descriptions of the event flow, exception and error conditions, and optional behaviors. A vision is created by finding out which stakeholders are affected by the system outcome, selecting representatives and involving them in the project, then issuing a problem statement and a description of the features, requirements and constraints. Detail is applied to finding actors and cases, possibly in a workshop. The use case is reviewed and used in each phase of a software life cycle, including requirements, development and testing. There are techniques that a team can use for these and to agree on system scope issues concerning customer priority, architectural significance, and operational capability. Areas of instability can be covered by supplementary specifications. Good use-case authors combine skill at synthesis, systematic approaches to problem-solving, a level of domain knowledge, some understanding of software development, and writing. Tips, traps and techniques are shown for definition, writing and review, such as storyboards of screenshots. The Rational Unified Process is discussed under iterative development. Cases can be evaluated for completeness, coverage and traceability. Wiki page. There are tools for this available, e.g. free UML editor, requirements specification, or open-source verification. Also see UModel video.
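The basic components above can be held in a simple record; a minimal sketch, with illustrative field names and an invented example rather than any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """Minimal use-case record with the basic components described
    above: actors, a goal, the main event flow, and alternate flows
    for exceptions, errors and optional behavior."""
    name: str
    actors: list                       # external entities outside system control
    goal: str
    basic_flow: list                   # ordered steps of the main event flow
    alternate_flows: dict = field(default_factory=dict)

uc = UseCase(
    name="Withdraw Cash",
    actors=["Customer", "Bank Network"],
    goal="Customer obtains cash from an account",
    basic_flow=["insert card", "enter PIN", "select amount", "dispense cash"],
    alternate_flows={"invalid PIN": ["prompt retry", "retain card after 3 tries"]},
)
```

Even this much structure makes the completeness, coverage and traceability checks mentioned above mechanical: every actor, flow step and exception is an enumerable item.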
Posted by cadsmith on December 1, 2007
Reviewing Rapid Web Applications with TurboGears: Using Python to Create Ajax-Powered Sites by Ramm and others, 2006, 504pp. This TurboGears 1.0 tutorial covers the key components of the open-source package along with hints, resources and instructions for installation and application development. The tool is a model-view-controller (MVC) framework for procedural interfaces between users and databases. It combines Python, AJAX, JSON and SQL into library components that can be more easily maintained and reused. CherryPy and MochiKit provide event control, Kid is for XHTML view templating, and SQLObject does database modeling. It also has libraries for scheduling, configuration, logging and pagination. The TurboGears Toolbox manages code components using applications such as a browser, graphical database designer, debugger, and internationalization converter. It is compatible with Windows, Linux, and Mac OS X. Coding demonstrations include a simple bookmarking site and a more complex project management utility which sets cookies and outputs RSS and Atom feeds. An appendix describes how SQLAlchemy implements SQL for legacy and high-performance databases.
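The MVC division the framework enforces can be sketched in plain Python (a stdlib illustration of the separation, not actual TurboGears code; the names are hypothetical):

```python
class BookmarkModel:
    """Model: owns the data, as SQLObject does in TurboGears."""
    def __init__(self):
        self._rows = []

    def add(self, url, title):
        self._rows.append({"url": url, "title": title})

    def all(self):
        return list(self._rows)

def render_list(rows):
    """View: formats model data for output, as a Kid template would."""
    items = "".join(
        "<li><a href='{url}'>{title}</a></li>".format(**r) for r in rows
    )
    return "<ul>{}</ul>".format(items)

class BookmarkController:
    """Controller: mediates between requests and the model,
    as CherryPy-exposed controller methods do."""
    def __init__(self, model):
        self.model = model

    def index(self):
        return render_list(self.model.all())
```

Keeping the three roles in separate components is what lets each layer (templates, schema, request handling) be maintained and reused independently.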
Book site links to other sources.
It refers to the nose unit test package.
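nose discovers plain functions whose names start with `test` and treats bare `assert` failures as test failures, so tests stay lightweight; a sketch with a hypothetical bookmark helper (not code from the book):

```python
# nose collects functions matching test* from modules named test*,
# so no TestCase class or runner boilerplate is needed.

def add_bookmark(store, url, tags):
    """Hypothetical application helper under test."""
    store.setdefault(url, set()).update(tags)
    return store

def test_add_bookmark():
    store = add_bookmark({}, "http://example.com", {"python", "web"})
    assert "http://example.com" in store
    assert store["http://example.com"] == {"python", "web"}

def test_add_is_cumulative():
    store = add_bookmark({"http://example.com": {"python"}},
                         "http://example.com", {"web"})
    assert store["http://example.com"] == {"python", "web"}
```

Running `nosetests` against the file containing these functions would collect and execute both.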