  1. NASA Planetary Lake Lander Project at Google Tech Talk

    Wed 2013-06-05 16:02

    Nathalie Cabrol gave a Google Tech Talk about our Planetary Lake Lander project on Tuesday.

    Titan is the only body in the solar system other than Earth that's known to have lakes on its surface--only in Titan's case it's not water but liquid methane and ethane at super-cold temperatures! NASA is studying the possibility of landing a floating probe in one of these lakes, and our project uses a similar probe on Earth to learn how we would operate one on Titan.

    Nathalie, our principal investigator, is a distinguished planetary scientist, but also an amazing ball of energy who lights up everyone she gets near. This was a fun talk.

    I'm leading the robotics part of the project. In past years we put the probe together, and now we're developing the adaptive science component that will allow the probe's onboard software to understand what its science sensors are measuring and react appropriately, like increasing the sampling rate during a storm. This is the most exciting phase of the project and hopefully we'll be able to build some great new tools to help the science team get better data.
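
    To make that concrete, here's a minimal sketch of the kind of adaptive sampling rule we have in mind. This is not the actual PLL code--the sensor names and thresholds are made up for illustration:

    NORMAL_PERIOD_S = 600  # sample every 10 minutes in quiet conditions
    STORM_PERIOD_S = 60    # sample every minute when a storm is detected

    def choose_sampling_period(wind_speed_m_s, rain_rate_mm_h):
        """Increase the sampling rate when weather sensors suggest a storm."""
        storm = wind_speed_m_s > 15.0 or rain_rate_mm_h > 5.0
        return STORM_PERIOD_S if storm else NORMAL_PERIOD_S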

  2. Joshua Tree Bloom in Mojave National Preserve

    Mon 2013-04-22 07:48

    Sabrina and I recently visited Mojave National Preserve to celebrate our wedding anniversary. It was a quick weekend getaway and the timing was right to see this year's record bloom of Joshua trees. I thought I would share a few pictures. Click any photo to see a larger version.

    |filename|images/2013_04_14_mojave/sabrina_joshua_bloom_600.jpg

    Sabrina enjoying a Joshua tree bloom

    |filename|images/2013_04_14_mojave/lava_flow_aerial_600.jpg

    Cima Volcanic Field cinder cones and lava flow from the air

    In February 2011, I was really lucky and happened to fly right over the Cima Volcanic Field on a commercial flight from Chicago to L.A. It's a set of about 40 cinder cones and associated lava flows up to 7.6 million years old, part of the Basin and Range geological province in the American Southwest.

    I took some photos from the plane, and when I got home I used Google Earth to figure out what I had been looking at. On this trip, I got to visit some of the spots in my photo, and it was fun to make the connection.

    |filename|images/2013_04_14_mojave/lava_flow_600.jpg

    Cima Volcanic Field, Joshua tree and creosote bushes in foreground

    |filename|images/2013_04_14_mojave/lava_flow_link_600.jpg

    Linking the aerial and ground photos of the volcanic field

    The ground-level photo was taken from a wash that runs along the edge of the lava flow. In the aerial photo, I marked the point I took it from with a camera icon and a yellow shape showing the approximate field of view.

    The escarpment (cliff) running across the ground photo is the edge of the lava flow, still looking steep and fresh after millions of years. You can see two cinder cones marked A and B that appear in both photos (the shape of cinder cone A is hard to make out in the aerial photo due to the shadow of a passing cloud).

    |filename|images/2013_04_14_mojave/beavertail_cactus_beetles_600.jpg

    Beavertail cactus bloom full of beetles

    We saw several beavertail cactus blooms that were swarming with little beetles. Maybe mating swarms of blister beetles?

    |filename|images/2013_04_14_mojave/solar_power_600.jpg

    Ivanpah Solar Project solar thermal energy tower and heliostats

    Between our hotel in Primm, Nevada, and the National Preserve, we drove by some really crazy-looking architecture in the desert that looked kind of like air traffic control towers without an airport. Eventually I realized it must be a solar power plant. Turns out it was the Ivanpah Solar Project, the largest solar thermal project under construction in the world.

    In the photo you can actually see the beams of light from the heliostats illuminating dust in the air and converging at a focus like the Death Star's planet-killing raygun. Interestingly, they were mostly not focused on the thermal energy tower, which didn't seem to be operational yet. Anyway, pretty cool!

    If you like these photos, you can also view my trip album.

  3. Ad Hoc Wireless Networking for Science in the Field

    Mon 2013-04-08 14:55

    Kurt recently posted some ideas for improving TrailScribe. In this post I want to discuss his first main idea: using an ad hoc wireless network to transmit data between TrailScribe units in the field.

    I think this is a great idea, so let's dig in and think about how you could do that and how you would decide whether it's practical for a given field deployment.

    In current practice, a typical base camp in a remote area off the cell network might have a BGAN satellite antenna (quick setup, ~ $10/MB Internet connection for web browsing and email) and a single WiFi access point that provides a data network within a few hundred feet of camp. They'll also be using walkie-talkies to stay in touch during the day.

    |filename|images/2013_04_08_bcamp_600.jpg

    Typical base camp network

    Suppose we'd like to extend the data network to cover the whole field operation so people can get updates on their TrailScribes. Partly they want to share data with each other and partly they want to get updates from the outside world through the satellite connection (for example, check the latest weather forecast during the day).

    Ideally, any changes we propose should piggyback on the time and money our users have already invested in their equipment, and not create more work for them in the field.

    Given that context, let's explore a few natural approaches to extending the data network outside camp.

    WiFi Ad Hoc Networking

    The base camp network uses WiFi and our users are familiar with it. Why not use it for the extended network? This is a pretty good approach.

    Companies like Tropos provide off-the-shelf WiFi ad hoc networking gear. If you get multiple WiFi repeaters, you can spread them out in whatever pattern you like, and clients at any point in the network can send data to each other. The repeaters detect link quality and configure multi-hop routes automatically.

    |filename|images/2013_04_08_wifi_600.jpg

    User 1 can send data to user 2 with multiple hops

    There are two main challenges here. The first is that even off-the-shelf ad hoc networking equipment requires a reasonably experienced network admin in the field, and we find that physical setup (transceiver + battery + tripod mount) and debugging network problems can take a lot of time.

    The second is coverage. Because WiFi equipment operates in unlicensed spectrum, the U.S. FCC and similar agencies elsewhere put fairly strict limits on how much power you can use, to avoid interference with other users. The maximum range is pretty short. If you need to cover a lot of ground you need a lot of repeaters, and the problem gets worse if there's interesting terrain in the way because repeaters need line of sight to talk.

    Another thing to think about is the last-mile issue. Modern tablets have built-in WiFi, but with a low-power transceiver and low-gain antenna that reduce their range. To reduce the number of fixed repeaters you need, users may end up carrying a backpackable repeater with a bit more range.

    VHF Hacking

    This is an area I know less about, and I would love to hear more from experts.

    VHF radios operate at lower frequencies than WiFi and typically use parts of the spectrum where the team needs a license but is then allowed to use more power. VHF comm links sometimes work in near-line-of-sight conditions, where one radio is out of sight of the other (slightly over the horizon). What this boils down to is that VHF usually provides less bandwidth, but you can cover more ground with a single repeater.

    Many field teams use VHF walkie-talkies, and they may already be planning to cover the operational area with VHF repeaters to carry voice traffic. Can we piggyback small amounts of data on this network? What problems do we need to think about?

    First, we need to check that data transmission is not prohibited on the frequency the team is using. That will depend on the conditions in the license and I'm not sure how common a problem it is.

    Next, we need to figure out how to connect a tablet to the VHF network in the field. In an ideal world, we could pair the tablet and the user's walkie-talkie with a wireless connection (like WiFi or Bluetooth). That would allow users to use both the tablet and the walkie-talkie with no encumbering wires and no extra radio equipment. I'm not currently aware of any walkie-talkies that can do that, but that could change any day now. (Tell Mark it would be a good weekend project.)

    A compromise approach I think might work ok is putting together a small backpackable unit that includes an IOIO board with a Bluetooth dongle to pair wirelessly with the tablet, and a TNC that connects it to the audio port of a dedicated VHF transceiver. The user would still need to carry a separate VHF handheld, but there would be no need for wires running to the tablet.

    |filename|images/2013_04_08_vhf_600.jpg

    Small backpackable unit for bridging a tablet to a VHF network

    Beyond the pairing problem, data going over the VHF network is going to be pretty slow and may need to share the channel with voice traffic, so we're probably in the realm of passing short messages rather than viewing media-rich web pages, and our applications need to be designed accordingly.
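
    To give a sense of what "designed accordingly" means, here's a hypothetical sketch of packing a position update into a compact binary message. The field layout is an assumption for illustration, not an existing protocol:

    import struct
    import time

    # Hypothetical compact format for a low-bandwidth shared channel:
    # 1-byte sender id, 4-byte unix timestamp, 4-byte floats for lat/lon.
    POSITION_FORMAT = "!BIff"  # network byte order, 13 bytes total

    def pack_position(sender_id, lat, lon):
        return struct.pack(POSITION_FORMAT, sender_id, int(time.time()), lat, lon)

    def unpack_position(message):
        return struct.unpack(POSITION_FORMAT, message)  # (id, time, lat, lon)

    At 13 bytes per update, even a slow packet link has room for frequent position reports, though in practice you'd want to batch messages and yield the channel to voice traffic.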

    How I Learned To Stop Worrying and Love Dedicated Hardware

    Coming from a computer science background, I find the idea of dedicated comms hardware (like voice-only radios) really painful. It seems blindingly obvious that we should have a general-purpose computer attached to the radio, it should be able to send arbitrary bits down the pipe, and applications like voice comms should be managed by software you can change.

    If you're trying to do novel research by writing brand-new apps, figuring out general-purpose data comms is important. But if you just want to get the job done, dedicated hardware still wins pretty often at the moment.

    Here's one example: If all you really need is position tracking, you can use a dedicated unit like the Garmin Rino. Rinos are combined GPS / walkie-talkies that exchange position updates so you can see where your teammates are. We've had great luck with these on the Planetary Lake Lander project, and we've started looking into ways to better integrate them with other data, like using a base station to log position updates.

    Rinos have all kinds of problems--we can't change how they work, they can't send position updates over a repeater, and their tiny map screens leave a lot to be desired. But they can replace a GPS and a walkie-talkie, they're cheap, small, and rugged, and they don't need expert admins. It will take a lot of work before we can match that kind of ease of use with a tablet-based solution.

  4. Digital Field Assistant: The Context Around TrailScribe

    Sun 2013-04-07 18:00

    Yesterday Kurt posted some ideas for improving TrailScribe. I really like the ideas and I may talk more about them in future posts.

    They also reminded me of similar themes from my "Digital Field Assistant" concept that TrailScribe grew out of, and I realized I never shared my write-ups on that from back in 2011.

    So to complement the discussion... here's an old presentation that basically frames the core (TrailScribe) mobile device as part of a broader "field data system" that includes helmet cams, specialized sensors like Bluetooth weather gauges, VHF handheld radios, and an information sharing server in base camp.

    Click the gear and "Open speaker notes" to get the full content.

    You can think of the TrailScribe concept video as a down-sized version of this initial vision, focusing on the low-hanging fruit: What can you do if all you can afford is some tablet computers and cheap accessories like Bluetooth headsets and ruggedized cases?

  5. Improving Python Development in Emacs with YASnippet

    Sun 2013-03-31 10:31

    Kurt Schwehr just showed me how he uses the YASnippet template system to improve his productivity in Emacs. This seems like a really great tool.

    Here's an example. With YASnippet installed, if you're editing a Python file in Emacs and you type the word class then hit [TAB], it will expand to the skeleton of a Python class definition:

    class ClassName(object):
        """
        """
    
        def __init__(self, ):
            """
            """
    

    That's not all. After expansion, your cursor is left sitting on ClassName so you can immediately edit the class name, and each time you hit [TAB] after that, it advances the cursor to the next logical edit point, skipping over the boilerplate you wouldn't want to change. Some snippets also call out to other Emacs functions to do things like auto-fill the current timestamp.
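
    For example, here's a hypothetical hdr snippet (we'll cover where snippet files live below) that combines ${N:...} tab stops with an embedded Emacs Lisp call that fills in the current date:

    # -*- mode: snippet -*-
    # name: header comment
    # key: hdr
    # --
    # ${1:Module}: ${2:one-line description}
    # Created: `(format-time-string "%Y-%m-%d")`
    $0

    Typing hdr then [TAB] expands the boilerplate, leaves your cursor on the Module field, and fills in today's date automatically; $0 marks where the cursor ends up after the last [TAB].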

    YASnippet is not just for Python. You can define mode-specific snippets for any mode. YASnippet ships with a few pre-defined snippets for several modes, including the class snippet for Python mode that we just used.

    If you're interested in using YASnippet, here's the basic process for setting it up:

    • Install YASnippet
    • Try out its pre-defined snippets in your favorite Emacs mode
    • Look around the web for additional snippet collections specific to your needs
    • Write your own snippets

    Let's go through the specifics of setting up YASnippet for Python/Django development. First grab the library.

    cd ~/.emacs.d/plugins
    git clone https://github.com/capitaomorte/yasnippet.git
    

    Then add this to your ~/.emacs init file.

    (add-to-list 'load-path
                  "~/.emacs.d/plugins/yasnippet")
    (require 'yasnippet)
    (setq yas-snippet-dirs
          '("~/.emacs.d/plugins/yasnippet/snippets"))
    (yas-global-mode 1)
    

    (Note: You may want to compare this with the latest version of the YASnippet README in case the init file setup directions have changed. I had trouble at first due to version skew with old init file examples.)

    At this point you can fire up Emacs and try out the class tab-completion example above. Now let's add an additional snippet collection for Python/Django development.

    cd ~/lib/elisp
    git clone https://github.com/gabrielelanaro/emacs-for-python.git
    

    The emacs-for-python package is an enormous kitchen sink of every possible Python/Emacs tool you could think of. I've barely scratched the surface of what's in there. But for now we're just interested in its snippet collection. To tell YASnippet how to find the new collection, edit your init file:

    (setq yas-snippet-dirs
          '("~/lib/elisp/emacs-for-python/snippets/django"
            "~/.emacs.d/plugins/yasnippet/snippets"))
    

    This gives us a bunch of snippets for Django, like model in Python mode and block in HTML mode (for editing Django templates).

    Finally, let's set up a trivial snippet of our own just to see how it's done. We'll keep our own personal snippet collection in the suggested location ~/.emacs.d/snippets. Edit your init file:

    (setq yas-snippet-dirs
          '("~/.emacs.d/snippets"
            "~/lib/elisp/emacs-for-python/snippets/django"
            "~/.emacs.d/plugins/yasnippet/snippets"))
    

    Notice our personal snippets area is first in the list. Order is significant--this way you can override a built-in snippet by defining a personal snippet with the same name.

    Our trivial example snippet will expand foo into foobar in Python mode. To set it up, create the file ~/.emacs.d/snippets/python-mode/foo with these contents:

    # -*- mode: snippet -*-
    # name: foo
    # key: foo
    # --
    foobar
    

    To test it out, restart Emacs, edit a Python file, type foo and hit [TAB]. You should be good to go. If you'd like to make more interesting snippets, you can model them on examples from one of the existing snippet collections. Have fun!

  6. Best Talks at PyCon US 2013

    Sun 2013-03-24 16:00

    I didn't make it to PyCon this year, but I'm really enjoying the talk videos posted at PyVideo. If you're interested in Python, these are well worth watching. The production quality is good and they are under 30 minutes each. Here are some of my favorites.

    [Alex Martelli: "Good enough" is good enough!] For me, watching this talk is like going to a revival meeting, getting fired up all over again, then having this uncomfortable realization that you have a lot to live up to.

    [Daniel Lindsley: How (Not) To Build An OSS Community.] This is another talk I find humbling. Our research group at NASA builds a lot of open source software that works well for the needs of our clients, but we have never had a lot of success engaging a wider user base or outside developers. Daniel does a great job of laying out what we need to do, but are not doing.

    [Hynek Schlawack: Solid Python Application Deployments For Everybody.] Watch if you deploy web apps. Hugely entertaining. You may agree or disagree but you will never be left wondering what Hynek thinks! His speaker slides have the links.

  7. NASA xGDS Analysis Notebook Built on IPython Notebook

    Fri 2013-03-15 17:09

    The NASA xGDS Analysis Notebook is a new tool I'm developing that I'm really excited about. It's built on the IPython Notebook [1], an interactive MATLAB-like shell that runs in a web browser and lets you do all kinds of numerical analysis and data visualization with just a few lines of Python code. I'm extending it with hooks to our live database and simple customized plotting functions so our science teams can get right into analyzing their data.

    The way the notebook works is pretty simple. It's split up into cells. You enter a command into a cell and it immediately displays the result below, then gives you a new cell.

    Here's an example using NASA RESOLVE rover data. I quickly wrote a plotting function that queries records from two different database tables (neutron spectrometer data and position data), resamples both data sets to the same time frequency, aligns them by timestamp, and plots the result as a colored trail on a map. Even if you're not familiar with the functions we're calling, you can see these tools let us work quickly at a high level without sweating the details.

    RESOLVE neutron spectrometer signal plotted as a colored rover trail on a map.
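
    The real function is tied to our xGDS database schema, but here's a hypothetical sketch of the same pattern using pandas and matplotlib. The table and column names are made up for illustration:

    import pandas as pd
    import matplotlib.pyplot as plt

    def plot_signal_on_track(conn, freq="10S"):
        # Query the two database tables (hypothetical names).
        neutron = pd.read_sql("SELECT timestamp, counts FROM neutron_data", conn,
                              index_col="timestamp", parse_dates=["timestamp"])
        position = pd.read_sql("SELECT timestamp, lat, lon FROM position_data", conn,
                               index_col="timestamp", parse_dates=["timestamp"])
        # Resample both data sets to the same time frequency, align by timestamp.
        merged = position.resample(freq).mean().join(
            neutron.resample(freq).mean(), how="inner")
        # Plot the signal as a colored trail on lat/lon axes.
        plt.scatter(merged.lon, merged.lat, c=merged.counts, s=10)
        plt.colorbar(label="neutron counts")
        plt.xlabel("longitude")
        plt.ylabel("latitude")

    In the notebook, calling plot_signal_on_track(conn) in a cell renders the map inline right below it.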

    I'll have a lot to say about the analysis notebook going forward, but I thought I would start by putting it in context with the rest of the xGDS project.

    xGDS develops software to support science operations, ranging from pre-mission planning to in-mission monitoring to post-mission data analysis. Until now, the data analysis part has been pretty thin. We had pre-configured displays that let you plot and map and search data, but no way to really process it.

    The analysis notebook tool will give us a lot more depth in that area. You get a very flexible numerical toolkit and you can look for unexpected correlations between different data sets, check for agreement between measurements and numerical models, whatever you want.

    We can compare the analysis notebook to our old approach: providing bulk data export to the science team in simple formats like CSV files, so they can import and analyze the data in familiar tools like Excel, MATLAB, or IDL, or in specialized analysis tools developed for their discipline (like spectroscopy). We will absolutely continue supporting that and we believe it's vital, but it shouldn't be the whole story. Using the analysis notebook has some important advantages:

    • Sharing. Sharing is on by default. If you make a new plot, everyone on the science team can see it right away, and if you update it they always see the latest version.
    • Accessibility. You don't have to install expensive specialized commercial software to use the analysis notebook. Or, in fact, any software, except a web browser. That also means it's not restricted to any particular operating system. Cross-disciplinary collaboration is easier if the different disciplines can use the same tools.
    • Self-documenting. Using the notebook encourages you to document your work processes. Anyone who can read your notebook sees not just your results, but also the techniques you used to generate them, so we can all improve our process by example. At its best, a notebook is not just a plot but a kind of scientific essay about what you're trying to understand and why you needed to process the data in the way you did. This relates to some important modern progress in the philosophy of science, as embodied in the reproducible research and literate programming movements.
    • No export/import overhead. The notebook environment has "native" access to the live database, reducing the overhead of exporting data then importing into a new tool.
    • Science/engineering collaboration. Especially important to us on the xGDS team, the notebook can be a way for us to work more closely with the science team. The scientists are experts on their instruments and on data interpretation, whereas we are experts in numerical data analysis and how the database is organized. Using the notebook collaborative environment, we should be able to do better data analysis together. (Plus, scientists are cool.)

    Beyond post-mission data analysis, the other place the analysis notebook can help is as a prototyping tool for live console displays. We can work with a science team to try out lots of visualizations, pick out our favorites, then package them up for operational use as console displays that can update based on real-time data and handle the load from lots of users.

    I also got some good initial feedback--this week I showed the analysis notebook to Tony Colaprete and Rick Elphic. Rick's top comment was that we need to connect the notebook to Google Earth so we can plot things in context with other map data. C'mon, Rick, don't throw me in that briar patch!

    [1]There's actually a whole stack of libraries we're building on: IPython Notebook for the web-based shell, pandas for structured data analysis, matplotlib for plotting, and SciPy and NumPy for general numerical analysis.

  8. NASA Exploration Ground Data System (xGDS) at Google Tech Talk

    Fri 2013-03-15 16:53

    Matt Deans gave a Google Tech Talk about our NASA Exploration Ground Data Systems (xGDS) project on Tuesday. Matt is the xGDS project lead and I think this is the best overview presentation I've seen yet.

    To support the talk I finally got around to making an xGDS project web page. We've done so many interesting interfaces and supported so many cool science teams that the page really doesn't do the project justice, but it's a start...

  9. Migrating a repository from CVS and Subversion to Git with history

    Sun 2013-03-03 21:39

    Today I migrated my old ZMDP planning software from cvs and svn repositories to git. I got pretty deep into some undocumented stuff, so here are my notes.

    My basic plan was to migrate from cvs to svn using cvs2svn, then migrate from svn to git using git-svn. But there were some other problems I also needed to fix.

    First install the tools. I'm using MacPorts [1].

    $ sudo port -vn install cvs2svn
    # git-core does not include git-svn by default
    $ sudo port -vn install git-core +svn
    

    Then migrate the cvs repo to svn.

    $ cp -a ~/projects/zmdp/repository cvsrepo
    $ cvs2svn -s svnrepo cvsrepo
    

    Notice I made a backup of the cvs repository first in case cvs2svn did something bad. But no problem, that worked great and preserved the cvs history.

    Here's where my first problem comes in. Several years ago I needed to migrate this codebase from cvs to svn in a hurry and I did it the quick and dirty way, by creating a fresh svn repository from a cvs checkout, which lost the cvs history. Since then I made about ten commits in the old svn repo, and now I want to tack those commits onto the end of the cvs history that I migrated into the new svn repo.

    Long story short, it seems to be easy to merge the content of two branches in svn but hard to merge their commit history. The straightforward svn merge operation basically squashes the commit history of whatever you merge in down to a single commit.

    Time for a work-around. Apply each revision of the old svn repo individually to the new repo, commit it with the correct comment, then set the svn:date property of the revision to match the original commit time.

    But first I had to configure the svn repository to allow editing revision properties. Apparently svn considers that kind of thing suspicious by default. To allow it, you have to make the pre-revprop-change hook a valid executable that exits with a successful status of 0. The web is short on examples of how to do this.

    $ cd ~/sandbox/cvs2svn/svnrepo/hooks
    $ cat <<"EOF" > pre-revprop-change
    #!/bin/bash
    exit 0
    EOF
    $ chmod +x pre-revprop-change
    

    Now apply, commit, and edit the old commits one at a time. If there were more than ten I'd have to script it, but this was just barely ok to do manually.

    $ cd ~/projects/zmdp/svnRepository/src
    $ svn log
    ------------------------------------------------------------------------
    r10 | mfsmith3 | 2010-08-16 14:48:04 -0700 (Mon, 16 Aug 2010) | 1 line
    
    updated requirements section of README, new tested compilers
    ...
    $ cd /tmp/newsvncheckout
    $ svn merge -r 9:10 file:///Users/mfsmith3/projects/zmdp/svnRepository/src
    $ svn commit -m 'updated requirements section of README, new tested compilers'
    $ svn propset svn:date '2010-08-16T21:48:04.0Z' --revprop -r HEAD \
        file:///Users/mfsmith3/sandbox/cvs2svn/svnrepo
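
    With more revisions to replay, a loop like this hypothetical sketch could automate the same steps, using svn propget to recover each old log message and date (paths as above; not battle-tested):

    #!/bin/bash
    # Replay revisions 1..10 of the old repo onto the new repo, preserving
    # log messages and dates. Run from a working copy of the new repo.
    OLD=file:///Users/mfsmith3/projects/zmdp/svnRepository/src
    NEW=file:///Users/mfsmith3/sandbox/cvs2svn/svnrepo
    for r in $(seq 1 10); do
        msg=$(svn propget svn:log --revprop -r "$r" "$OLD")
        date=$(svn propget svn:date --revprop -r "$r" "$OLD")
        svn merge -c "$r" "$OLD"
        svn commit -m "$msg"
        svn propset svn:date "$date" --revprop -r HEAD "$NEW"
    done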
    

    Now transition everything to git.

    $ git svn clone file:///Users/mfsmith3/sandbox/cvs2svn/svnrepo/trunk/src
    

    Things were looking good, but I made a last pass through the git commit log and noticed that the authorship was messed up--I prefer a clean name and email address in the git log but these commits just had an old UNIX username. Luckily, git has much better tools for editing the commit history than svn does.

    $ cat > filter.sh <<"EOF"
    #!/bin/sh
    git filter-branch --commit-filter '
          if [ "$GIT_AUTHOR_NAME" = "trey" ];
          then
                  GIT_AUTHOR_NAME="Trey Smith";
                  GIT_AUTHOR_EMAIL="trey.smith@gmail.com";
                  git commit-tree "$@";
          else
                  git commit-tree "$@";
          fi' HEAD
    EOF
    $ chmod +x filter.sh
    $ ./filter.sh
    

    All done...

    [1]The -n option to port is one I almost always use--it keeps port from aggressively trying to upgrade everything the package you're installing depends on. There's nothing like installing a minor package through port and having it download and recompile a dozen other packages, including new Python and Perl interpreters, because somebody bumped the patch level. If you're not careful, those upgrades can also orphan and break other packages. If you want to stay up to date, you're probably better off running port upgrade outdated on a weekly basis.

  10. Making remote software updates more reliable by rotating versions

    Fri 2013-03-01 13:19

    How do we ensure our exploration robots keep working after software updates?

    Traditionally, space missions solve this problem by:

    • Avoiding advanced software techniques that are seen as risky. Once upon a time, "advanced software techniques" included compilers. These days, the team might enforce coding conventions like not throwing exceptions, so you have to write explicit handlers to recover from every fault.
    • Thorough code review by the mission's software engineering team. Changes are vetted by a change control board that decides whether the new functionality is worth the risk of making a change that could break the system.
    • Automated software verification. This is everything from lint to special-purpose tools that will do things like detect possible race conditions in multi-threaded software.
    • Testing in simulation. Sometimes these simulations are very elaborate. It could be a video game-like simulation of a detailed planetary surface environment for testing rover software, or a simulation of a system of valves and motors that injects faults like jammed valves or bad sensor data.
    • Building a twin of the flight hardware that stays behind on the ground to support testing software updates in a realistic hardware environment.

    These are all great techniques, but sometimes you don't have the resources to do all the testing you want. That's the situation we're facing on the NASA Planetary Lake Lander Project. Our robotic probe is a buoy sitting in a high-altitude lake near Santiago, Chile. It collects water quality, weather, and health data and beams daily updates back to us via a BGAN satellite link.

    The lake is pretty remote. During the Southern Hemisphere summer we can send an engineer to service the probe about once a month, but during the winter the lake is snowed in. If we lose contact with the probe we could end up losing months' worth of data.

    One of PLL's research goals is to use the probe to conduct adaptive science, where it intelligently chooses what measurements to take and what data to beam back based on scientific relevance. Ideally, we'll build the adaptive science system over the course of this year and deploy incremental updates to the probe as we bring new features online. But if any of the updates break our core software services that schedule activities and connect to the satellite network, there would be no way for us to remotely contact the probe to fix it.

    Rotating Versions

    We've hit on a simple technique we hope will help recover from that problem. When we install a new version of the software, we'll keep older versions around as well. Instead of totally switching over to the new version, we actually set up a rotation where we switch back and forth between versions every few days. That way, if the new version is broken and doesn't allow us to make contact with the probe, eventually the old version will rotate back in and we'll regain contact. On the other hand, if the new version works out really well, we can manually step in to disable the rotation before it switches back.

    Some key considerations for this strategy:

    • First, do no harm. The world is full of "safety systems" that actually make things less reliable. We put lots of sanity checks into our rotation software, and if it finds any surprises, it gives up and doesn't change anything.
    • Rotation makes the most sense if you have a critical need (eventually make contact), so you want to try everything, but none of the options is likely to cause permanent damage to the system. Luckily, our probe is moored in place on the lake, so it's hard to get into trouble.
    • There is no silver bullet. Even with rotation in place we plan to carefully verify each new software version to the extent that we can.
    • It's important to keep the software that does the version rotations simple and independent from the rest of the software. Don't rotate the rotation script!
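
    To make this concrete, here's a minimal sketch of what a rotation script along these lines might look like. The paths, rotation period, and symlink scheme are made up for illustration; the real PLL implementation may differ:

    import os
    import time

    VERSIONS_DIR = "/opt/probe/versions"  # hypothetical: one subdir per version
    CURRENT_LINK = "/opt/probe/current"   # hypothetical: services run this target
    ROTATION_DAYS = 3                     # switch between versions every few days

    def pick_version(versions, epoch_days):
        # Alternate through the installed versions every ROTATION_DAYS days.
        return versions[(epoch_days // ROTATION_DAYS) % len(versions)]

    def rotate():
        versions = sorted(os.listdir(VERSIONS_DIR))
        # First, do no harm: if anything looks surprising, change nothing.
        if not versions or not os.path.islink(CURRENT_LINK):
            return
        target = os.path.join(VERSIONS_DIR,
                              pick_version(versions, int(time.time() // 86400)))
        if not os.path.isdir(target):
            return
        if os.path.realpath(CURRENT_LINK) == os.path.realpath(target):
            return  # already running the scheduled version
        tmp = CURRENT_LINK + ".new"
        if os.path.lexists(tmp):
            os.remove(tmp)
        os.symlink(target, tmp)
        os.rename(tmp, CURRENT_LINK)  # atomic switch on POSIX

    if __name__ == "__main__":
        rotate()

    A cron job could run something like this once a day. Since the rotation script itself never changes, a broken application update can't take the rotation mechanism down with it.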

    Time will tell if this is a good strategy. It would be interesting to hear from others who have hit on the same idea.
