
The critical role of systems thinking in software development

Anticipating complexity and unpredictability in your daily work.

September 1, 2016

Software applications exist to serve practical human needs, but they inevitably accumulate undefined and defective behaviors as well.

Because software flaws are often left undiscovered until some specific failure forces them to the surface, every software project ships with some degree of unquantified risk. This is true even when software is built by highly skilled developers, and is an essential characteristic of any complex system.

When you really think about it, a software system is little more than a formal mathematical model with incomplete, informally specified inputs, outputs, and side effects, run blindly by machines at incomprehensibly high speeds. And because of that, it’s no surprise that our field is a bit of a mess.

This chaotic environment becomes more comprehensible when you think of software not as rules rigidly defined in code, but as a living system with complex emergent behavior. Where programmers and people using an application see a ‘bug’, a systems theorist would see just another lever to pull that produces some sort of observable outcome. In order to develop a better mental model for the systems we build, we’ll need to learn how to think that way, too.

But instead of getting bogged down in theory, let’s work through a quick example of what complex emergent system behavior looks like in a typical web application.

A brief story on how small flaws can compound to create big problems

Suppose that you are maintaining a knowledge base application… a big collection of customer support articles with a content management system for employees to use. It’s nothing fancy, but it does its job well.

The knowledge base website is low-traffic, but it’s important to the company you work for. To quickly identify failures, it has a basic exception monitoring system set up which sends an email to the development team every time a system error happens.

This monitoring tool only took a few minutes to set up, and for years you haven’t even had to think about its presence except when alert emails are delivered—and that doesn’t happen often.

But one day, you arrive at work and find yourself in the middle of a minor emergency. Your inbox is stuffed with over 1300 email alerts that were kicked up by the exception reporting system in the wee hours of the morning. With timestamps all within minutes of each other, it is pretty clear that this problem was caused by some sort of bot. You dig in to find out what went wrong.

The email alerts reveal that the bot's behavior resembled that of a web crawler: it was attempting to visit every page on your site by incrementing an id field. However, it had built the request URLs in a weird way, constructing a route that no human would ever think to use.

When the bot hit this route, the server should have responded with a 404 error to let it know that the page it requested didn’t exist. This probably would have convinced the bot to go away, but even if it hadn’t, it would at least prevent unhandled exceptions from being raised.

For nearly any invalid URL imaginable, this is the response the server would have provided. But the exact route that the bot was hitting just so happened to run some code that, due to a flawed implementation, raised an exception rather than failing gracefully.
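The article doesn't show the application's code, but a minimal sketch (in Python with Flask, purely for illustration; the route and names are hypothetical) shows the difference between failing gracefully with a 404 and letting a malformed id escape as an unhandled exception:

    # Hypothetical sketch: respond 404 to invalid or unknown ids instead of
    # letting a malformed request raise an unhandled exception (a 500).
    from flask import Flask, abort

    app = Flask(__name__)
    ARTICLES = {1: "How to reset your password", 2: "Billing FAQ"}

    @app.route("/articles/<raw_id>")
    def show_article(raw_id):
        try:
            article_id = int(raw_id)  # a crawler may send a malformed id
        except ValueError:
            abort(404)                # invalid route: a 404, not a crash
        article = ARTICLES.get(article_id)
        if article is None:
            abort(404)                # unknown id: also a 404
        return article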

The immediate solution to this problem is straightforward: fix the defective code so that an exception is no longer raised, add a test that probably should have been there in the first place, and temporarily disable the exception reporter emails until you can find a new tool that will not flood your inbox in the event of a recurring failure.
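For that last chore, one common approach (a sketch of the general idea, not any particular tool's API) is to fingerprint exceptions and suppress repeats within a time window, so a recurring failure produces one alert rather than a thousand:

    # Hypothetical sketch: deduplicate exception alerts within a time window.
    import time

    ALERT_WINDOW_SECONDS = 3600
    _last_sent = {}  # fingerprint -> timestamp of the last email sent

    def should_notify(exception_class, location, now=None):
        """Return True only the first time a failure is seen per window."""
        now = time.time() if now is None else now
        fingerprint = (exception_class, location)
        last = _last_sent.get(fingerprint)
        if last is not None and now - last < ALERT_WINDOW_SECONDS:
            return False  # same failure reported recently; stay quiet
        _last_sent[fingerprint] = now
        return True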

If this were really your project, you'd probably take care of those chores right away (treating it as the emergency it is), but then might be left wondering about the deeper implications of the failure, and how it might relate to other problems with your system that have not been discovered yet.

Failure is almost never obvious until you’re looking in the rearview mirror

If the scenario from the story above seemed oddly specific, it's because I dealt with it myself a few years ago. The context was slightly different, but the core problems were the same.

In retrospect, it’s easy to see the software flaw described above for what it was: an exposed lever that an anonymous visitor could pull to deliver an unlimited amount of exception report emails.

But at the time, the issue was baffling to me. In order for this problem to occur, a bot needed to trigger a very specific corner case by passing an invalid URL that no one would ever think to try in testing. The code handling this request would have handled this failure case properly when it was originally written, but at some point the query object we were using was wrapped in a decorator object. That object introduced a slight behavior change, which indirectly led to exceptions being raised.

The behavior change would not be obvious on a quick review of the code; you’d need to read the (sparse) documentation of a third party library that was in theory meant to provide a fully backwards-compatible API. Some extra tests could have potentially caught this issue, but the need for such tests was only obvious in hindsight.
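To make that failure mode concrete, here is a minimal sketch (hypothetical names, not the actual library involved) of how a wrapper that looks backwards-compatible can quietly change what happens on the path nobody tests:

    # Hypothetical sketch: a wrapper changes the missing-record path.
    class Query:
        def __init__(self, records):
            self._records = records

        def find(self, record_id):
            return self._records.get(record_id)  # None when the id is missing

    class LoggedQuery:
        """Decorator that adds logging around an existing Query object."""
        def __init__(self, query):
            self._query = query

        def find(self, record_id):
            result = self._query.find(record_id)
            # Assumes every lookup succeeds: None here raises AttributeError.
            print("found record: " + result.upper())
            return result

    query = LoggedQuery(Query({"faq-1": "billing faq"}))
    query.find("faq-1")  # works as before
    query.find("bogus")  # now raises instead of returning None

The happy path behaves identically, so a quick review or manual test passes; only the missing-id path, the one a stray bot eventually finds, now raises.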

A real person using the website would never encounter this error. A developer manually testing the site would never encounter this error. Only a bot, doing bad things by pure coincidence, managed to trigger it. In doing that, it triggered a flood of emails, which in turn put a shared resource at risk.

I’m embarrassed to admit that the real scenario was a bit worse, too. The email delivery mechanism I was using for sending out exception reports was the same mechanism used for sending out emails to customers. Had the bot not just “given up” eventually, it would have likely caused a service interruption on that side of things as well.

This is the swamp upon which our castles are built. Hindsight is 20/20, but I'm sure you can come up with a similarly painful story from your own work if you look back far enough in your career.

Accepting software development as an inherently hazardous line of work

I wish that I could give a more confident answer for how you can avoid these sorts of problems in your own work, but the truth is that I am still figuring out all of that myself.

One thing I’d like to see (both in my own work and in the general practices of software developers) is a broadened awareness of where the real boundaries are in the typical software application, and what can go wrong at the outer reaches of a system.

Code reviews are now a fairly common practice, and that is a good thing, but we need to go far beyond the code to effectively reason about the systems we build.

In particular, it’d help if we always kept a close eye on whatever shared resources are in use within a system: storage mechanisms, processing capacity, work queues, databases, external services, libraries, user interfaces, etc. These tools form a “hidden dependency web” below the level of our application code that can propagate side effects and failures between seemingly unrelated parts of a system, and so they deserve extra attention in reviews.

It’s also important to read and write about experiences with failures (and near-misses) so that we gain a shared sense of the risks involved in our work and how to mitigate them.

Many system-level problems are obvious in hindsight but invisible at the time that they're introduced, especially when a particular failure requires many things to go wrong all at once for the negative effects to happen.

Finally, we are not the only field to deal with developing and operating complex systems, so we should also be looking at what we can learn from other disciplines.

Richard Cook's excellent overview of How Complex Systems Fail is one example of ideas that originated in the medical field and apply equally well to software development, and I strongly recommend reading it as a source of inspiration.

One last thought…

When software literally shipped on ships—destined to run on a particular set of known devices and solve a well-defined, static purpose—the programmer’s role was easier to define. Now that everything is connected to everything else, and the entire economy depends on the things we build, we have more work to do if we want to make our systems both safe and successful in the modern world.

Although it overwhelms me as much as anyone else, I’m up to the challenge of writing code for the year we’re living in. For now, that means dealing with extreme complexity and a lack of predictability at the boundary lines of our systems. The example I gave in this article is at the shallow end of that spectrum, but even it is not obvious without some careful practice.

If you haven’t already, I hope you’ll join me in going beyond raw coding skills, and begin studying and practicing systems thinking in your daily work.

Editor's note: Gregory Brown's book about the non-code aspects of software development, called "Programming Beyond Practices," will be published soon by O'Reilly. Follow its progress here.

Gregory Brown has run the independently published Practicing Ruby journal since 2010, and is the original author of the popular Prawn PDF generation library. In his consulting projects, Gregory has worked with key stakeholders in companies of all sizes to identify core business problems that can be solved with as little code as possible. Gregory's relentless focus on the 90% of programming work that isn't just writing code is what led him to begin working on Programming Beyond Practices.

Other articles

Teaching Critical Thinking in Psychology: A Handbook of Best Practices
Dana S. Dunn, Jane S. Halonen | Wiley-Blackwell | October 2008 | 320 pages | English

Teaching Critical Thinking in Psychology features current scholarship on effectively teaching critical thinking skills at all levels of psychology. It:

  • Offers novel, nontraditional approaches to teaching critical thinking, including strategies, tactics, diversity issues, service learning, and the use of case studies
  • Provides new course delivery formats by which faculty can create online course materials to foster critical thinking within a diverse student audience
  • Places specific emphasis on how to both teach and assess critical thinking in the classroom, as well as issues of wider program assessment
  • Discusses ways to use critical thinking in courses ranging from introductory level to upper level, including statistics and research methods courses, cognitive psychology, and capstone offerings


Reading and Critical Thinking - Level 1 (Early Readers)
Beginning Reading Comprehension Software
Product Description



Reading Comprehension Software / Critical Thinking Skills Software - Level 1. Elementary readers will enjoy the fun and challenge while improving early reading skills, reading comprehension, and critical thinking skills. This critical thinking skills software program is a good fit for young students as well as adults who are learning English and the nuances of the English language.

Basic reading skills are practiced with twenty critical thinking topics accessed from easy-to-use clickable menus. Each screen is scored, so students always know whether they are correct. Scores are automatically calculated and stored.

Reading comprehension and critical thinking skills are taught through the following topics:

  • Groups
  • Real or Not Real
  • Fact or Opinion
  • Define the Examples
  • What are the details?
  • Compare and Contrast
  • Parts of the Whole
  • Choose the Word
  • Organize the Steps
  • Compare Words
  • Same or Opposite
  • What will happen?
  • Complete the Sentence
  • Word Meanings
  • What is the main idea?
  • What is missing?
  • Abstract or Concrete
  • Logic
  • Parts of the Story
  • True or False

Each section of this reading comprehension and critical thinking skills software begins easy and gets progressively more difficult. Students will develop basic reading skills, reading comprehension, and problem-solving skills. Reading and thinking prepares students for testing. Critical thinking skills are necessary for effective reading.

Although this reading comprehension software is appropriate for young elementary readers, it is also very appropriate for older students who need to strengthen their reading skills. Colleges are using this reading comprehension software CD for students who are learning to speak English as a second language.

This software motivates the student to improve early reading skills and develop critical thinking skills while providing teachers with measurable results. This first program in the "Reading and Critical Thinking - Level 1" series is $49. This product is available for Windows XP and Vista, or Macintosh OS X.


Critical thinking for software engineers (Intelligent Security)

I am sometimes asked whether doing a PhD was worth it, given that I left academia and research to become a full-time software developer. My answer is an unequivocal “yes”, despite the fact that my thesis is about as relevant to what I do now as a book on the sex lives of giraffes.

By far the most important skill I learnt during that time was not any particular technical knowledge, but rather a general approach to critical thinking: how to evaluate evidence and make rational choices. In a profession such as software engineering, where we are constantly bombarded with new technologies, products and architectural styles, it is absolutely essential to be able to step back and evaluate the pros and cons in order to make sensible technology choices. In this post I'll try and summarise the approach I take to making these decisions.

Critical thinking has a long history, with modern Western critique having its roots in the Enlightenment. It is hard to summarise this long tradition of thought, but the basic theme is one of moving away from accepting arguments on authority or dogma, and instead placing emphasis on reasoning and evidence. Three forms of reasoning are traditionally distinguished:

  • Deduction applies general rules to known facts to derive new facts that follow. In other words, given a rule IF a THEN b and a known fact a, we can derive b. This is the primary form of reasoning in logic and mathematics.
  • Induction attempts to derive general rules from observations and known facts. That is, given observations that b seems to always follow a, infer the rule IF a THEN b. This is the form of reasoning most closely associated with science.
  • Abduction tries to explain observations by reference to general rules and known facts. That is, given an observation of b and knowledge of a general rule IF a THEN b, we can posit that a may also be true. This kind of reasoning is associated with diagnostics and explanation.

Of these three forms, only deduction is usually sound. That is, given true initial facts and sound rules of inference, the derived facts are guaranteed to also be true. The same is not true of induction or abduction: there may be other rules that provide a better explanation of observations, and there may be many possible causes that could explain an observation.
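A toy sketch in Python (with a made-up rule, just to make the contrast concrete) shows why only the deductive step carries a guarantee:

    # Toy sketch: one rule, IF deploy_on_friday THEN outage.
    rules = {"deploy_on_friday": "outage"}  # general rules: cause -> effect

    def deduce(fact):
        """Deduction: from fact a and rule IF a THEN b, derive b (sound)."""
        return rules.get(fact)

    def induce(observations):
        """Induction: propose IF a THEN b after seeing b repeatedly follow a."""
        proposed = set(observations)
        return proposed.pop() if len(proposed) == 1 else None

    def abduce(effect):
        """Abduction: list causes that *could* explain an observed effect."""
        return [a for a, b in rules.items() if b == effect]

    print(deduce("deploy_on_friday"))                    # 'outage' (guaranteed)
    print(induce([("deploy_on_friday", "outage")] * 3))  # a proposed rule, not proof
    print(abduce("outage"))                              # candidate causes only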

Unfortunately, much of the reasoning we must do as software engineers is not deductive. Given a number of similar problems and the success or failure of their solutions, we might induce new general design patterns or architectures. Given a number of apparently successful projects that all used a particular product, a vendor may like us to reach the (abductive) conclusion that the product was at least partially responsible for their success. So how do we evaluate these kinds of reasoning if we cannot hope to directly prove them? The answer is to try to gather as much evidence as possible both for and against and to weigh up the pros and cons:

An evidence evaluation cycle
  • Adopt a skeptical approach and try to find flaws or mistakes in the reasoning. If you've ever presented a paper at an academic conference, you will be well aware of this technique! While it may initially appear mean-spirited to pick holes in other people's work, it serves an absolutely critical purpose. While we may not be able to prove positively that an idea is correct, we can disprove it by finding counterexamples or other flaws. Once we have tried our (collective) hardest to disprove an idea and failed, then we begin to have confidence in its validity.
  • Try to find as much evidence for or against an idea as possible, from as wide a number of sources as possible. If we cannot directly prove or disprove an idea, then it will come down to a balance of evidence, and the more we have the better.
  • Evaluate the source of evidence and any bias that might be present. For instance, a vendor clearly has an incentive to promote successful uses of their product while downplaying unsuccessful ones. Likewise, a consultancy company has an incentive to promote methodologies and architectures that might drive more use of their services, such as those that are complicated or new (and therefore need most advice).
  • What assumptions are being made? Do those assumptions hold for the cases you are considering? For example, an architecture proposed by Google may need to handle hundreds of millions of users and very high load rates. To deal with these high loads they may be willing to accept much higher up-front development costs than may be necessary for a much smaller workload. Do you really need to deploy that big data cluster when all your data would fit into RAM on a single machine?
  • Is the proposed solution at an appropriate level of abstraction or generality? While a general solution may seem appealing, if you only need to solve a one-off special case then maybe there are simpler alternatives.
  • Discuss the idea with colleagues and friends from as wide a pool as possible. It is very hard for a single individual to shake off their own prejudices and come to a completely dispassionate evaluation of an idea. Only through a process of informed debate can an idea be fully explored.

In the spirit of following my own advice, I would love to hear your thoughts on this article. What have I missed or overlooked? Am I right to emphasise critical thinking for software engineering, or do you think technical skills are more important? I hope this article has got you thinking about how you evaluate the ideas and techniques you encounter every day in your software engineering careers.

Which languages are used for safety-critical software? (Stack Overflow)

Ada and SPARK (an Ada dialect with some hooks for static verification) are used in aerospace circles for building high-reliability software such as avionics systems. There is something of an ecosystem of code verification tooling for these languages, although this technology also exists for more mainstream languages as well.

Erlang was designed from the ground up for writing high-reliability telecommunications code. It is designed to facilitate separation of concerns for error recovery (i.e. the subsystem generating the error is different from the subsystem that handles the error). It can also be subjected to formal proofs, although this capability hasn't really moved far out of research circles.

Functional languages such as Haskell can be subjected to formal proofs by automated systems due to the declarative nature of the language, which allows code with side effects to be contained in monadic functions. For a formal correctness proof, the rest of the code can be assumed to do nothing but what is specified.

However, these languages are garbage collected, and the garbage collection is transparent to the code, so it cannot be reasoned about in this manner. Garbage collected languages are not normally predictable enough for hard realtime applications, although there is a body of ongoing research in time-bounded incremental garbage collectors.

Eiffel and its descendants have built-in support for a technique called Design by Contract, which provides a robust runtime mechanism for incorporating pre- and post-checks for invariants. While Eiffel never really caught on, this approach matches how high-reliability software tends to be developed: by writing checks and handlers for failure modes up front, before actually writing the functionality.
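Eiffel expresses these contracts natively in the language; as a rough sketch of the idea in Python (an illustration of the technique, not Eiffel syntax), pre- and postconditions become explicit runtime checks at function boundaries:

    # Rough sketch of Design by Contract outside Eiffel: the contract is
    # stated as explicit pre- and postcondition checks, not buried in logic.
    def withdraw(balance, amount):
        # Preconditions: what the caller must guarantee.
        assert amount > 0, "precondition violated: amount must be positive"
        assert amount <= balance, "precondition violated: insufficient funds"
        new_balance = balance - amount
        # Postcondition: the invariant the function must uphold.
        assert new_balance >= 0, "postcondition violated: negative balance"
        return new_balance

Writing the checks first forces the failure modes to be enumerated before the functionality exists, which is exactly the habit the answer describes.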

Although C and C++ were not specifically designed for this type of application, they are widely used for embedded and safety-critical software for several reasons. The main properties of note are control over memory management (which allows you to avoid having to garbage collect, for example), simple, well-debugged core run-time libraries, and mature tool support. A lot of the embedded development tool chains in use today were first developed in the 1980s and 1990s when this was current technology, and they come from the Unix culture that was prevalent at that time, so these tools remain popular for this sort of work.

While manual memory management code must be carefully checked to avoid errors, it allows a degree of control over application response times that is not available with languages that depend on garbage collection. The core run-time libraries of C and C++ are relatively simple, mature, and well understood, so they are amongst the most stable platforms available. Most if not all of the static analysis tools used for Ada also support C and C++, and there are many other such tools available for C. There are also several widely used C/C++-based tool chains; most tool chains used for Ada also come in versions that support C and/or C++.

Formal methods such as Axiomatic Semantics, Z Notation, or Communicating Sequential Processes allow program logic to be mathematically verified, and are often used in the design of safety-critical software where the application is simple enough to apply them (typically embedded control systems). For example, one of my former lecturers did a formal correctness proof of a signaling system for the German railway network.

The main shortcoming of formal methods is their tendency to grow exponentially in complexity with respect to the underlying system being proved. This means that there is significant risk of errors in the proof, so they are practically limited to fairly simple applications. Formal methods are quite widely used for verifying hardware correctness, as hardware bugs are very expensive to fix, particularly in mass-market products. Since the Pentium FDIV bug, formal methods have gained quite a lot of attention, and have been used to verify the correctness of the FPU on all Intel processors since the Pentium Pro.

Many other languages have been used to develop highly reliable software. A lot of research has been done on the subject. One can reasonably argue that methodology is more important than the platform, although there are principles like simplicity and the selection and control of dependencies that might preclude the use of certain platforms.

As various others have noted, certain O/S platforms have features to promote reliability and predictable behaviour, such as watchdog timers and guaranteed interrupt response times. Simplicity is also a strong driver of reliability, and many RT systems are deliberately kept very simple and compact. QNX (the only such O/S that I am familiar with, having once worked with a concrete batching system based on it) is very small and will fit on a single floppy. For similar reasons, the people who make OpenBSD, which is known for its robust security and thorough code auditing, also go out of their way to keep the system simple.

EDIT: This posting has some links to good articles about safety-critical software, in particular here and here. Props to S.Lott and Adam Davis for the source. The story of the Therac-25 is a bit of a classic in this field.

Careful. When I worked in aerospace, Ada was used just because it was aerospace, not because of any specific Ada feature. You can write highly reliable software in any language. Aerospace software is reliable because of their rather extreme spec/coding processes, not because Ada is magic pixie dust. – Ken Feb 20 '09 at 0:12

One of Ada's strengths is that it actively supports the mindset and methodologies required to develop safety-critical software. Of course you could program safety-critical software in any programming language (heck, even in BASIC or assembly), but Ada was specifically designed and developed for this purpose, and the SPARK extension even more so. – none Jun 9 '09 at 23:48

"It is a functional language, which means that code has no side effects". That term is generally used to mean that a programming language has first-class lexical closures. In fact, Erlang relies heavily upon side-effects for all IO including message passing. – Jon Harrop May 8 '12 at 15:32

Firstly, safety-critical software adheres to the same principles that you would see in the classic mechanical and electrical engineering fields: redundancy, fault tolerance, and fail-safety.

As an aside, and as the previous poster alluded to (and was for some reason down-voted), the single most important factor in being able to achieve this is for your team to have a rock solid understanding of everything that is going on. It goes without saying that good software design on your part is key. But it also implies a language that is accessible, mature, well supported, for which there is a lot of communal knowledge and experienced developers available.

Many posters have already commented that the OS is a key element in this respect, which is very true, mostly because it must be deterministic (see QNX or VxWorks). This rules out most interpreted languages that do things behind the scenes for you.

ADA is a possibility but there is less tools and support out there, and more importantly the stellar people aren't as readily available.

C++ is a possibility, but only if you strictly enforce a subset. In this respect it is the devil's tool, promising to make our life easier but often doing too much.

C is ideal. It is very mature and fast, has a diverse set of tools and support and many experienced developers out there, is cross-platform and extremely flexible, and can work close to the hardware.

I've developed in everything from Smalltalk to Ruby and appreciate and enjoy everything that higher-level languages have to offer. But when I'm doing critical systems development I bite the bullet and stick with C. In my experience (defence and many Class II and III medical devices), less is more.

answered Oct 30 '08 at 21:07

"ADA is a possibility but there is less tools and support out there, and more importantly the stellar people aren't as readily available." Ada (never ADA - it isn't an acronym) does require some of those extra tools, as the language offers what the tools offer by default. Ada would be en excellent choice and is very easy to learn. Google "Ada McCormick train modeling". He has since updates his results, I believe, showing that Ada is still a very easy to use language. – YermoungDer May 28 '09 at 8:45

I'd pick Haskell if safety mattered over everything else. I propose Haskell because it has very rigid static type checking and it promotes a style of programming where you build parts in such a manner that they are very easy to test.

But then, I wouldn't care about the language much. You can get much greater safety, without compromising as much, by having your project in good overall condition and working without deadlines. Overall, as in having all the basic project management in place. I'd perhaps concentrate on extensive testing to ensure everything works as it ought to, with tests that cover all the corner cases and more.

answered Oct 28 '08 at 14:07

The language and OS is important, but so is the design. Try to keep it bare-bones, drop-dead simple.

I would start by having the bare minimum of state information (run-time data), to minimize the chance of it getting inconsistent. Then, if you want to have redundancy for the purpose of fault-tolerance, make sure you have foolproof ways to recover from the data getting inconsistent. Redundancy without a way to recover from inconsistency is just asking for trouble.

Always have a fallback for when requested actions don't complete in a reasonable time. As they say in air traffic control, an unacknowledged clearance is no clearance.

Don't be afraid of polling methods. They are simple and reliable, even if they may waste a few cycles. Shy away from processing that relies solely on events or notifications, because they can be easily dropped or duplicated or misordered. As an adjunct to polling, they are fine.
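As a sketch of that advice (all names hypothetical), a polling loop with an explicit deadline turns silence into a definite fallback path rather than an indefinite wait on an event that may have been dropped:

    # Sketch: poll for completion with a timeout fallback, rather than
    # trusting that a completion event arrives exactly once.
    import time

    def wait_for(condition, timeout_s, poll_interval_s=0.1):
        """Poll `condition` until it holds or the deadline passes."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if condition():
                return True
            time.sleep(poll_interval_s)
        return False  # fallback: treat silence as failure, never as success

    # Hypothetical usage: if the valve doesn't confirm in time, fail safe.
    # if not wait_for(valve_is_closed, timeout_s=2.0):
    #     enter_safe_state()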

A friend of mine on the Apollo project once remarked that he knew they were getting serious when they decided to rely on polling, rather than events, even though the computer was horrendously slow.

P.S. I just read through the C++ Air Vehicle standards. They are OK, but they seem to assume that there will be lots of classes, data, pointers, and dynamic memory allocation. That is exactly what there should be no more of than absolutely necessary. There should be a data structure czar with a big scythe.

answered Nov 28 '08 at 14:16

The OS is more important than the language. Use a real-time kernel such as VxWorks or QNX. We looked at both for controlling industrial robots and decided to go with VxWorks. We use C for the actual robot control.

For truly critical software, such as aircraft autoland systems, you want multiple processors running independently to cross check results.

answered Oct 28 '08 at 14:19

Real-time environments usually have "safety-critical" requirements. For that sort of thing, you could look at VxWorks, a popular real-time operating system. It's currently in use in many diverse arenas such as Boeing aircraft, BMW iDrive internals, RAID controllers, and various spacecraft.

Development for the VxWorks platform can be done with several tools, among them Eclipse, Workbench, SCORE, and others. C, C++, Ada, and Fortran (yes, Fortran) are supported, as well as some others.

answered Oct 28 '08 at 14:27

Since you don't give a platform, I would have to say C/C++. On most real-time platforms, you're relatively limited in options anyway.

The drawback of C's tendency to let you shoot yourself in the foot is offset by the number of tools available to validate the code, and by the stability and direct mapping of the code to the hardware capabilities of the platform. Also, for anything critical, you will be unable to rely on third-party software which has not been extensively reviewed; this includes most libraries, even many of those provided by hardware vendors.

Since everything will be your responsibility, you want a stable compiler, predictable behavior and a close mapping to the hardware.

answered Oct 28 '08 at 13:53

Actually, there are several languages that offer "better" (for safety) constructs than pure C/C++ but still compile into native code. I believe Eiffel compiles into native code, and I would use that over C/C++ in a safety-critical system. – Thomas Owens Oct 28 '08 at 13:55

Eiffel's advantages there would be offset in my mind by the advantages of the wider base of experienced C engineers and more mature compilers. – Cade Roux Oct 28 '08 at 13:58

Here are a few updates on some tools I've been playing with lately that I had not yet seen discussed, and which are fairly good.

The LLVM Compiler Infrastructure; here is a short blurb from their main page (front-ends exist for C and C++, and front-ends for Java, Scheme, and other languages are in development):

A compiler infrastructure - LLVM is also a collection of source code that implements the language and compilation strategy. The primary components of the LLVM infrastructure are a GCC-based C & C++ front-end, a link-time optimization framework with a growing set of global and interprocedural analyses and transformations, static back-ends for the X86, X86-64, PowerPC 32/64, ARM, Thumb, IA-64, Alpha, SPARC, MIPS and CellSPU architectures, a back-end which emits portable C code, and a Just-In-Time compiler for X86, X86-64, PowerPC 32/64 processors, and an emitter for MSIL.

VCC is a tool that proves correctness of annotated concurrent C programs or finds problems in them. VCC extends C with design-by-contract features, like pre- and postconditions as well as type invariants. Annotated programs are translated to logical formulas using the Boogie tool, which passes them to the automated SMT solver Z3 to check their validity.

Both of these tools, LLVM and VCC, are designed to support multiple languages and architectures, and I do think that there is a rise in coding by contract and other formal verification practices.

WPF (not the MS framework :) is a good place to start if you're trying to evaluate some of the recent research and tools in the program validation space.

WG23, however, is the primary resource for fairly current and specific critical-systems development language details. They cover everything from Ada, C, C++, Java, and C# to scripting languages, and have at the very least a decent set of references and guidance pointing to updated information on language-specific flaws and vulnerabilities.

A language that imposes careful patterns may help, but you can impose careful patterns using any language, even assembler. Every assumption about every value needs code that tests the assumption. For example, always test the divisor for zero before dividing.
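As a sketch of that discipline (using the divisor example above), each assumption becomes an explicit check with a defined failure path:

    # Sketch: every assumption about a value is tested before it is used.
    def safe_divide(numerator, divisor):
        # Assumption: divisor is non-zero; test it instead of trusting it.
        if divisor == 0:
            raise ValueError("assumption violated: divisor must be non-zero")
        return numerator / divisor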

The more you can trust reusable components, the easier the task, but reusable components are seldom certified for critical use and will not get you through regulatory safety processes. You should use a tiny OS kernel and then build tiny modules that are unit tested with random input. A language like Eiffel might help, but there is no silver bullet.

answered Oct 28 '08 at 16:01

I agree, but I'm upvoting because of the random-data-testing statement. – Mike Dunlavey Nov 28 '08 at 16:19

There are a lot of good references at http://www.dwheeler.com ("high-assurance software").

For automotive stuff, see the MISRA C standard: C, but you can't use more than two levels of pointers, and some other stuff like that.

adahome.com has good info on Ada. I liked this C++ to Ada tutorial: http://adahome.com/Ammo/cpp2ada.html

For hard real-time, Tom Hawkins has done some interesting Haskell stuff. See: ImProve (language incorporates an SMT solver to check verification conditions) and Atom (EDSL for hard realtime concurrent programming without using actual threads or tasks).

Any software product can pass the DO-178B certification process using any tool, but the question is how difficult it would be. If the compiler isn't certified, you may need to demonstrate that your code is traceable at the assembly level, so it is helpful if your compiler is certified. We used C on our projects, but had to verify at the assembly level and use a code standard that included turning off the optimizer, limited stack usage, limited interrupt usage, transparent certifiable libraries, and so on. Ada isn't pixie dust, but it makes the PSAC plan look more achievable.

As applications get larger, assembly code becomes a less viable choice. The ARM processor just invites C++, but if you ask companies like Keil whether their tool is certified, they will return with a "huh?" And don't forget that verification tools also need to be certified. Try verifying a LabVIEW test program.

answered Oct 11 '12 at 21:39

I don't know what language I'd use, but I do know what language I wouldn't:

NOTE ON JAVA SUPPORT. THE SOFTWARE PRODUCT MAY CONTAIN SUPPORT FOR PROGRAMS WRITTEN IN JAVA. JAVA TECHNOLOGY IS NOT FAULT TOLERANT AND IS NOT DESIGNED, MANUFACTURED, OR INTENDED FOR USE OR RESALE AS ON-LINE CONTROL EQUIPMENT IN HAZARDOUS ENVIRONMENTS REQUIRING FAIL-SAFE PERFORMANCE, SUCH AS IN THE OPERATION OF NUCLEAR FACILITIES, AIRCRAFT NAVIGATION OR COMMUNICATION SYSTEMS, AIR TRAFFIC CONTROL, DIRECT LIFE SUPPORT MACHINES, OR WEAPONS SYSTEMS, IN WHICH THE FAILURE OF JAVA TECHNOLOGY COULD LEAD DIRECTLY TO DEATH, PERSONAL INJURY, OR SEVERE PHYSICAL OR ENVIRONMENTAL DAMAGE.

answered Oct 28 '08 at 13:53

Safety-Critical Java (SCJ) is based on a subset of the RTSJ. The goal is to have a framework suitable for the development and analysis of safety-critical programs for safety-critical certification (DO-178B Level A and other safety-critical standards).

SCJ, for example, removes the heap, which is still present in RTSJ. It also defines three compliance levels to which both applications and VM implementations may conform; the compliance levels are defined to ease certification of variously complex applications.

Java is a nightmare language for so many reasons. It was designed by an idiot who misunderstood the Pascal and Oberon projects of Prof. Wirth.

Ada was a language designed by a large committee, and the resulting sprawl reminds me so much of PL/1, which was wonderful but so complicated to write a compiler for that nobody picked it up.

Modula-2 is probably the simplest language ever devised; instead of C, I have used Modula-2 with a code size of half the lines (and therefore it runs twice as fast). C is just one step above assembler, and just by breathing too hard you can create a nasty bug.

Pascal and BASIC are very reliable languages. In fact, the Visual Basic 6 compiler/toolkit is probably the best thing MS ever produced, and people still use it 15 years after it was abandoned by MS. Don't get me started on the abomination that is .NET, the most horrendously complicated steaming pile of crap to come out of MS, which wanted to create a proprietary system that nobody could ever clone. Too bad nobody wants to clone it! They succeeded too well in making something obscure.

Eiffel is intrinsically reliable because it uses tiny sub-processes with their own stacks and heaps that get collected when the sub-task ends, so you don't fragment memory. But good luck understanding Eiffel; it was the work of a madman. The same goes for Miranda, and for so many of the academic languages which were designed by math freaks instead of people who are used to accomplishing something practical.

I would say that Python is one of the best languages: easy to read, fast, and simple. It is not particularly safe, but for scripting it beats the pants off Bash shell scripts or, heaven forbid, the atrocious write-only language called Perl.

answered Nov 25 '14 at 8:13