Thursday, July 20, 2017

A 173 mile (278km) all-electronic, FSO (Free Space Optical) contact: Part 1 - Scouting it out

Nearly 10 years ago - in October, 2007, to be precise - we (exactly "who" to be mentioned later) successfully managed a 173 mile, Earth-based all-electronic two-way contact between two remote mountain ranges in western Utah.

For many years before this I'd been mulling over, in the back of my mind, various ways that optical ("lightbeam") communications could be accomplished over long distances.  Years ago, I'd observed that even a modest, 2 AA-cell focused-beam flashlight could be easily seen over a distance of more than 30 miles (50km) and that sighting even the lowest-power laser over similar distances was fairly trivial - even if holding a steady beam was not.  Other than keeping such ideas in the back of my head, I never really did more than this - at least until the summer of 2006, when I ran across a web site that intrigued me:  The "Modulated Light DX page", written by Chris Long (now amateur radio operator VK3AML) and Dr. Mike Groth (VK7MJ).  While I'd been following the history and progress of such things all along, this and similar pages rekindled the intrigue, causing me to do additional research - and I began to build things.

Working up to the distance...

Over the winter of 2006-2007 I spent some time building, refining, and rebuilding various circuits having to do with optical communications.  Of particular interest to me were circuits used for detecting weak optical signals and it was those that I wanted to see if I could improve.  After considerable experimentation, head-scratching, cogitation, and testing, I was finally able to come up with a fairly simple optical receiver circuit that was at least 10dB more sensitive than other voice-bandwidth circuits that were out there.  Other experimentation was done on modulating light sources and the first serious attempt at this was building a PIC-based PWM (Pulse-Width Modulation) circuit followed, somewhat later, by a simpler current-linear modulator - both being approaches that seemed to work extremely well.

After this came the hard part:  Actually assembling the mechanical parts that made up the optical transceivers.  I decided to follow the field-proven Australian approach of using large, molded plastic Fresnel lenses - one in conjunction with high-power LEDs as the source of light emissions, and a second, parallel lens with a photodiode for reception.  The stated reasons for taking this approach seemed to me to be quite well thought-out and sound, both technically and practically.  This led to the eventual construction of an optical transceiver consisting of a pair of identical Fresnel lenses, each 318 x 250mm (12.5" x 9.8"), mounted side-by-side in a rigid, wooden enclosure with parallel transmit and receive "beams."  In taking this approach, proper aiming of either the transmitter or receiver would guarantee that the other was already aimed - or very close to being properly aimed - requiring only a single piece of gear to be deployed with precision.
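To get a feel for why a large lens matters on the transmit side: the beam divergence from an extended source such as an LED is roughly the emitter size divided by the lens focal length.  The numbers below are purely illustrative assumptions, not figures from the article:

```python
# Rough transmit-beam divergence for an LED at the focus of a Fresnel lens.
# Both values below are illustrative assumptions, not figures from the article.
EMITTER_MM = 1.0        # LED emitting-area width (assumed)
FOCAL_LENGTH_MM = 280   # Fresnel lens focal length (assumed)

divergence_rad = EMITTER_MM / FOCAL_LENGTH_MM   # ~3.6 milliradians
spot_width_m = divergence_rad * 278e3           # beam footprint at 278 km
print(round(divergence_rad * 1e3, 2), round(spot_width_m))  # 3.57 993
```

Even with a beam only a few milliradians wide, the footprint at the far end is on the order of a kilometer - wide enough that careful (but not heroic) aiming suffices.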

After completing this first transceiver I hastily built a second one to be used at the "other" end of the test path.  Constructed of foam-core posterboard, picture frames and inexpensive, flexible vinyl "full-page" magnifier Fresnel lenses, this transceiver used, for its optical emitter and detector assemblies, my original, roughly-repackaged prototype circuits.  While it was neither pretty nor capable of particularly high performance, it filled the need of being the "other" unit with which communications could be carried out for testing:  After all, what good would a receiver be if there were no transmitters?

On March 31, 2007 we completed our first 2-way optical QSO over a path that crossed the Salt Lake Valley, a distance of about 24 km (15 miles).  We were pleased to note that our signals were extremely strong:  Despite the fact that our optical path crossed directly over downtown Salt Lake City, they exhibited a 30-40dB signal-to-noise ratio - if you ignored some 120 Hz hum and the occasional "buzz" from an unseen, failing streetlight.  We also noted a fair amount of amplitude scintillation, but this wasn't too surprising considering that the streetlights visible from our locations also seemed to shimmer, being subject to the turbulence caused by the ever-present temperature inversion layer in the valley.

Bolstered by this success we conducted several other experiments over the next several months, continuing to improve and build more gear, gain experience, and refine our techniques.  Finally, for August 18, 2007, we decided on a more ambitious goal:  The spanning of a 107-mile optical path.  By this time, I'd completed a third optical transceiver using a pair of larger (430mm x 404mm, or 16.9" x 15.9") Fresnel lenses, and it significantly out-performed the "posterboard" version that had been used earlier.  On this occasion we were dismayed by the amount of haze in the air - the remnants of smoke that had blown into the area just that day from California wildfires.  Ron, K7RJ and company (his wife Elaine, N7BDZ and Gordon, K7HFV), who went to the northern end of the path (near Willard Peak, north of Ogden, Utah), experienced even more trials, having had to retreat on three occasions from their chosen vantage point due to brief but intense thunderstorms.  Finally, just before midnight, a voice exchange was completed - with some difficulty - between there and the southern end of the path (with Clint, KA7OEI and Tom, W7ETR) near Mount Nebo, southeast of Payson, Utah, despite the fact that the northern crew could never see the distant transmitter with the naked eye due to the combination of haze and light pollution.

Figure 1:
The predicted path projected onto a combination
map and satellite image.  At the south end
(bottom) is Swasey Peak while George Peak is
indicated at the north.
Click on the image for a larger version.

Finding a longer path:

Following the successful 107-mile exchange we decided that it was time to try an even-greater distance.  After staring at maps and poring over topographical data we found what we believed to be a 173-mile line-of-sight shot that seemed to provide reasonable accessibility at both ends - see figure 1.  This path spanned the Great Salt Lake Desert - some of the flattest, most desolate, and most remote land in the continental U.S.  At the south end of this path was Swasey Peak, the tallest point in the House range, a series of mountains about 70 miles west of Delta, in west-central Utah.  Because Gordon had hiked this peak on more than one occasion we were confident that this goal was quite attainable.

At the north end of the path was George Peak in the Raft River range, an obscure line of mountains that run east and west in the extreme northwest corner of Utah, just south of the Idaho border.  None of us had ever been there before, but our research indicated that it should be possible to drive there using a high-clearance 4-wheel drive vehicle so, on August 25, 2007, Ron and Gordon piled into my Jeep (along with a 2nd spare tire swiped from Ron's Jeep, as recommended by more than one account) and we headed north to investigate.

Getting there:

Following the Interstate highway nearly to the Idaho border, we turned west onto a state highway, following it as the road swung north into Idaho, passing the Raft River range, and we then turned off onto a gravel road to Standrod, Utah.  In this small town (a spread-out collection of houses, really) we turned onto a county road that began to take us up canyons on the northern slope of the range.  As we continued to climb, the road became rougher and we resorted to peering at maps and using our intuition to guide us onto the one road that would take us to the top of the mountain range.

Luckily, our guesses were correct and we soon found ourselves at the top of the ridge.  Traveling for a short distance, we ran into a problem:  The road stopped at a fence gate that was plastered with "No Trespassing" signs.  At this point, we simply began to follow what looked like a road that paralleled the fence only to discover, after traveling several hundred feet - and past a point at which we could safely turn around - that this "road" had degenerated into a rather precarious dirt path traversing a steep slope.  After driving several hundred more feet, fighting all the while to keep the Jeep on the road and moving in a generally forward direction, the path leveled out once again and rejoined what appeared to be the main road.  After a combination of both swearing at and praising deities we vowed that we would never travel on that "road" again and simply stay on what had appeared to have been the main road, regardless of what the signs on the gates said!

Looking for Swasey Peak:

Having passed these trials, we drove along the range's ridge top, looking to the south.  On this day, the air was quite hazy - probably due to wildfires that were burning in California, and in the distance we could vaguely spot, with our naked eyes, the outline of a mountain range that we thought to be the House range:  In comparing its outline and position with a computer-simulated view, it "looked" to be a fairly close match as best as we could guess.

Upon seeing this distant mountain we stopped to get a better look, but when we looked through binoculars or a telescope the distant outline seemed to disappear - only to reappear once again when viewed with the naked eye.  We finally realized what was happening:  Our eyes and brain are "wired" to look at objects, in part, by detecting their outlines, but in this case the haze reduced the contrast considerably.  With the naked eye, the distant mountain was quite small but with the enlarged image in the binoculars and telescope the apparent contrast gradient around the object's outline was greatly diminished.  The trick to being able to visualize the distant mountain turned out to be keeping the binoculars moving, as our eyes and brain are much more sensitive to slight changes in brightness of moving objects than stationary ones.  After discovering this fact, we noticed with some amusement that the distant mountain seemed to vanish from sight once we stopped wiggling the binoculars only to magically reappear when we moved them again.  For later analysis we also took pictures at this same location and noted the GPS coordinates.

Continuing onwards, we drove along the ridge toward George Peak.  When we got near the GPS coordinates that I had marked for the peak we were somewhat disappointed - but not surprised:  The highest spot in the neighborhood, the peak, was one of several gentle, nondescript hills that rose above the road only by a few 10's of feet.  Stopping, we ate lunch, looked through binoculars and telescopes, took pictures, recorded GPS coordinates, and thought apprehensively about the return trip along the road.
Figure 2:
The predicted line-of-sight view (top) based on 1 arc-second SRTM terrain data between the Raft River range
and Swasey peak as seen from the north (Raft River) side.
On the bottom is an actual photograph of the same scene at the location used in the simulated view.  As can be seen,
more of the distant mountain can be seen than the prediction would indicate, this being due to atmospheric
refraction slightly extending the visible horizon.  Under typical conditions, this "extension" amounts to
an increase to approximately 10/9 of the distance that geometry alone would predict.  This lower picture was produced
by "stacking" multiple images using software designed for astronomy.
Click on the image for a larger version.
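The refraction figure in the caption can be played with numerically.  The sketch below uses an illustrative observer height of about 2900 m (roughly Swasey Peak's elevation - an assumption, not a figure from the article) and models refraction by scaling the effective Earth radius; since horizon distance goes as the square root of that radius, stretching the horizon by 10/9 corresponds to an effective radius (10/9)^2, or about 1.23 times the true one:

```python
import math

EARTH_RADIUS_KM = 6371.0

def horizon_km(height_m, radius_factor=1.0):
    """Distance to the geometric horizon from a given eye height.
    radius_factor scales the effective Earth radius to model refraction."""
    return math.sqrt(2 * radius_factor * EARTH_RADIUS_KM * height_m / 1000.0)

geometric = horizon_km(2900)       # no refraction
refracted = geometric * 10 / 9     # the caption's 10/9 rule of thumb
print(round(geometric), round(refracted))  # 192 214
```

That extra ~20 km of horizon on each end is what lets a path work that "pure geometry" says is just barely blocked.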

Returning home:

Retracing our path - but not taking the "road" that had paralleled the fence line - we soon came to the gate that marked the boundary of the private land.  While many of the markings were the same at this gate, we noticed another sign - one that had been missing from the other end of the road - indicating that this was, in fact, a public right-of-way plus the admonition that those traveling through must stay on the road.  This sign seemed to register with what we thought we'd remembered about Utah laws governing the use of such roads and our initial interpretation of the county parcel maps:  Always leave a gate the way you found it, and don't go off the road!  With relief, we crossed this parcel with no difficulty and soon found ourselves at the other gate and in familiar territory.

Retracing our steps down the mountain we found ourselves hurtling along the state highway a bit more than an hour later - until I heard the unwelcome sound of a noisy tire.  Quickly pulling over I discovered that a large rock had embedded itself in the middle of the tread of a rear tire.  After 45 minutes of changing the tire and bringing the spare up to full pressure, we were again underway - but with only one spare remaining...

Analyzing the path:

Upon returning home I was able to analyze the photographs that I had taken.  Fortunately, my digital SLR camera takes pictures in "Raw" image mode, preserving the digital picture without the loss caused by converting it to a lossy format like JPEG.  Through considerable contrast enhancement, the "stacking" of several similar images using an astronomical photo-processing program, and a comparison against a computer-generated view, I discovered that the faint outline that we'd seen was not Swasey Peak but was, in fact, a range that was about 25 miles (40km) closer - the Fish Springs mountains - a mere 150 or so miles (240km) away.  Unnoticed (or invisible) at the time of our mountaintop visit was another small bump in the distance that was, in fact, Swasey Peak.
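The "stacking" trick works because averaging N frames of the same scene reduces random noise by a factor of sqrt(N) while the fixed, faint detail survives.  A toy sketch of the idea (the pixel value and noise level are made-up numbers, not measurements):

```python
import random
import statistics

random.seed(1)
TRUE_PIXEL = 0.02    # faint mountain outline, barely above zero (made-up value)
NOISE_SIGMA = 0.10   # per-frame random noise, swamping the detail (made-up value)

def frame():
    """One noisy observation of the same pixel."""
    return TRUE_PIXEL + random.gauss(0, NOISE_SIGMA)

single = frame()                                       # lost in the noise
stacked = statistics.mean(frame() for _ in range(64))  # noise cut by sqrt(64) = 8
```

With 64 frames the noise floor drops eightfold, which is enough to pull a low-contrast ridge line out of the haze during later contrast stretching.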

Interestingly, the first set of pictures were taken at a location that, according to the computer analysis, was barely line-of-sight with Swasey Peak.  At the time of the site visit we had assumed that the just-visible mountain that we'd seen in the distance was Swasey Peak and that there was some sort of parallax error in the computer simulation, but analysis revealed that not only was the computer simulation correct in its positioning of the distant features, but also that the apparent height of Swasey Peak above the horizon was being enhanced by atmospheric refraction - a property that the program did not take into account:  Figure 2 shows a comparison between the computer simulation and an actual photograph taken from this same location.


Building confidence - A retry of the 107-mile path:

Having verified to our satisfaction that we could not only get to the top of the Raft River mountains but also that we had a line-of-sight path to Swasey Peak, we began to plan for our next adventure, watching the weather and the air over the next several weeks.  Before attempting the longer shot, however, we wanted to try our 107-mile path again in clearer weather to make sure that our gear was working, to gain more experience with its setup and operation, and to see how well it would work over a long optical path given reasonably good seeing conditions:  If we had good success over a 107-mile path we felt confident that we should be able to manage a 173-mile path.

A few weeks later, on September 3, we got our chance:  Taking advantage of clear weather just after a storm front had moved through the area we went back to our respective locations - Ron, Gordon and Elaine at Inspiration Point while I went (with Dale, WB7FID) back to the location near Mt. Nebo.  This time, signal-to-noise ratios were 26dB better than before and voice was "armchair" copy.  Over the several hours of experimentation we were able to transmit not only voice, but SSTV (Slow-Scan Television) images over the LED link - even switching over to using a "raw" Laser Pointer for one experiment and a Laser module collimated by an 8" reflector telescope in another.

With our success on the clear-weather 107-mile path we waited for our window to attempt the 173-mile path between Swasey and George Peak but in the following weeks we were dismayed by the appearance of bad weather and/or frequent haze - some of the latter resulting from the still-burning wildfires around the western U.S.

To be continued!

[End]

This page was stolen from "ka7oei.blogspot.com"

Wednesday, June 21, 2017

Odd differences between two (nearly) identical PV systems

I've had my 18-panel (two groups of 9) PV (solar) electric system in service for about a year and recently I decided to expand it a bit after realizing that I could do so, myself, for roughly $1/watt after tax incentives.  And so it was done, with a bit of help from a friend of mine who is better at bending conduit than I:  Another inverter and 18 more solar panels were set on the roof - all done using materials and techniques equal to or better than the original installation in terms of both quality and safety.

Adding to the old system:

The older inverter, a SunnyBoy SB 5000-TL, is rated for a nominal 5kW.  With its 18 panels - 9 on each of the two faces of my east/west-facing roof (the ridge line precisely oriented true north-south) - it would, in real life, produce more than 3900 watts for only an hour or so around "local noon", and then only on late-spring or early-fall days that were both exquisitely clear and very cool (e.g. below 70F, 21C).  I decided that the new inverter need not be a 5kW unit, so I chose the newer - and significantly less expensive - SunnyBoy SB3.8, an inverter nominally rated at 3.8kW.  The rated efficiencies of the two inverters are pretty much identical - both in the 97% range.

One reason for choosing this lower-power inverter was to stay within the bounds of the rating of my main distribution panel.  My older inverter, being rated for 5kW was (theoretically) capable of putting 22-25 amps onto the panel's bus, so a 30 amp breaker was used on that branch circuit while the new inverter, capable of about 16 amps needed only a 20 amp breaker.  This combined, theoretical maximum of 50 amps (breaker current ratings, not practical, real-world current from the inverters and their panels!) was within the "120% rule" of my 125 amp distribution panel with its 100 amp breaker:  120% of 125 amps is 150 amps, so my ability to (theoretically) pull 100 amps from the utility and the combined capacity of the two inverters (again, theoretically) being 50 amps was within this rating.
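The arithmetic of that paragraph condenses to a few lines (all figures are from the article; this is a sanity check of the sums, not an electrical-code calculation):

```python
# Back-of-envelope check of the "120% rule" arithmetic described above.
panel_bus_rating_a = 125       # distribution panel busbar rating, amps
main_breaker_a     = 100       # main (utility) breaker, amps
inverter_breakers  = [30, 20]  # branch breakers for the old and new inverters

allowed_backfeed_a = panel_bus_rating_a * 1.20 - main_breaker_a
print(allowed_backfeed_a, sum(inverter_breakers))  # 50.0 50 -- just within the limit
```

Note that it is the breaker ratings, not the inverters' real-world output, that must fit inside the 50 amp allowance.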

For the panels I installed eighteen 295 watt Solarworld units - a slight upgrade over the older 285 watt Suniva modules already in place. In my calculations I determined that even with the new panels having approximately 3.5% more rated output (e.g. a peak of 5310 watts versus 5130 watts, assuming ideal temperature and illumination - the latter being impossible with the roof angles) that the new inverter would "clip" (e.g. it would hit its maximum output power while the panels were capable of even more power) only a dozen or two days per year - and this would occur for only an hour or so at most on each occasion.  Since the ostensibly "oversized" panel array would be producing commensurately more power at times other than peak as well, I was not concerned about this occasional "clipping".
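For reference, the rating arithmetic above works out as follows (panel counts and wattages from the article):

```python
OLD_ARRAY_W = 18 * 285   # rated output of the original Suniva array: 5130 W
NEW_ARRAY_W = 18 * 295   # rated output of the new Solarworld array: 5310 W

increase = NEW_ARRAY_W / OLD_ARRAY_W - 1
print(round(increase * 100, 1))   # 3.5 -- the percent difference cited above
print(NEW_ARRAY_W - 3800)         # 1510 -- watts the SB3.8 could, at most, ever clip
```

Since an east/west array rarely approaches its rated peak, that 1510 watt "excess" exists only on paper for most of the year.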

What was expected:

The two sets of panels, old and new, are located on the same roof, the old being higher, nearer the ridge line, and the new just below.  In my situation I get a bit of shading in the morning on the east side, but none on the west side, and the geometry of the trees causing it makes the shading of the new and old systems almost identical.

With this in mind, I would have expected the two systems to behave nearly identically.

But they don't!

Differences in produced power:

Having the ability to obtain graphs of each system over the course of a day I was surprised when the production of the two, while similar, showed some interesting differences as the chart below shows. 


The two systems, with nearly identical PV arrays.  The production of the older SB5000 inverter with the eighteen 285 watt panels is represented by the blue line while the newer SB3.8 inverter with eighteen 295 watt panels is represented by the red line:  Each system has nine east-facing panels and nine west-facing panels.  The dips in the graph are due to loss of solar irradiance due to clouds.  Because the data for this graph is collected every 15 minutes, some of the fine detail is lost so the "dip" in production at about 1:45PM was probably deeper than shown.
The total production of the SB3.8 system (red line) for the day was 27.3kWh while that of the SB5000TL system (blue line) was 25.4kWh - a difference of about 7% overall.
Click on the image for a larger version.
In this graph the blue line is the older SB5000TL inverter and the red line is the newer SB3.8 inverter.  Ideally, one would expect the newer inverter, with its 295 watt panels, to produce just a few percent more than the older inverter with its 285 watt panels, but the difference, particularly during the peak hours, is closer to 10%.

What might be the cause of this difference?

Several possible explanations come to mind:
  1. The new panels are producing significantly more than their official ratings.  A few percent would seem likely, but 10%?
  2. The older panels have degraded more than expected in the year that they have been in service.
  3. The two manufacturers rate their panels differently.
  4. There may be thermal differences.  The "new" panels are lower on the roof and it is possible that the air being pulled in from the bottom by convection is cooler when it passes the new panels, being warmer by the time it gets to the "old" panels.  If we take at face value that 3.5% of the 10% difference is due to the ratings - leaving 6.5% unaccounted for - this would require only about a 16C (29F) average panel temperature difference, but the temperature differences do not appear to be that large!
  5. The new panels don't heat as much as the old.  The new panels, in the interstitial gap between individual cells and around the edges are white while the old panels are completely black, possibly reducing the amount of heating.
  6. The new inverter is better at optimizing the power from the panels than the old one.
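Item 4 above can be sanity-checked numerically.  Assuming a typical crystalline-silicon power temperature coefficient of about -0.4%/degC (an assumed figure - the real number is on the panel datasheets), the panel temperature difference needed to explain the unaccounted 6.5% would be:

```python
TEMP_COEFF_PER_DEGC = 0.004   # assumed typical power tempco (-0.4%/degC)
UNEXPLAINED = 0.065           # 10% observed minus 3.5% rating difference

delta_T = UNEXPLAINED / TEMP_COEFF_PER_DEGC
print(round(delta_T, 1))      # 16.2 -- degC, matching the ~16C estimate above
```

A sustained 16 degC average difference between two arrays on the same roof is implausible, which is why thermal effects alone can't carry the explanation.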
It's a bit difficult to make absolute measurements, but in the case of #2, the possibility of the "old" panels degrading, I think that I can rule that out.  In comparing the peak production days for 2016 and 2017, both of which occurred in early May (a result of the combination of reasonably long days and cool temperatures) the peak was about the same - approximately 28.25kWh on the "old" system even after I'd installed the "new" panels on the east side.

I suspect that it is a combination of several of the above factors, probably excluding #2, but I have no real way of knowing the contribution of each.  What is surprising to me is that I have yet to see any obvious clipping on the new system, so it may be that my calculation of "several dozen hours" per year where this might happen is about right.

I'll continue to monitor the absolute and relative performance of the two sets of panels to see how they track over time.

[End]


Tuesday, June 13, 2017

Adding a useful signal strength indication to an old, inexpensive handie-talkie for transmitter hunting

A field strength meter is a very handy tool for locating a transmitter.  A sensitive field strength meter by itself has some limitations, however:  It will respond to practically any RF signal that enters its input.  This property has the effect of limiting the effective sensitivity of the field strength meter, as any nearby RF source (or even ones far away, if the meter is sensitive enough...) will effectively mask the desired signal.
Figure 1:
The modified HT with a broadband field strength meter
paired with the AD8307-based field strength meter
mentioned and linked in the article, below.
Click on the image for a larger version.

This property can be mitigated somewhat by preceding the input of the meter with a simple tuned RF stage and, in most cases, this is adequate for finding (very) nearby transmitters.  A simple tuned circuit does have its limitations, however:
  • It is only broadly selective.  A simple, single-tuned filter will have a response encompassing several percent (at best) of the operating frequency.  This means that a 2 meter filter will respond to nearly any signal near or within the 2 meter band.
  • A very narrow filter can be tricky to tune.  This isn't usually too much of a problem as one can peak on the desired signal (if it is close enough to register) or use your own transmitter (on the same or nearby frequency) to provide a source of signal on which the filter may be tuned.
  • The filter does not usually enhance the absolute (weak signal) sensitivity unless an amplifier is used.
An obvious approach to solving this problem is to use a receiver.  While many FM receivers have "S-meters", very few of those meters are truly useful over a wide dynamic range:  Most firmly "peg" even on relatively modest signals, making them nearly unusable if the signal is any stronger than "medium weak".  While an adjustable attenuator (such as a step attenuator or offset attenuator) may be used, the range of the radio's S-meter itself may be so limited that it is difficult to juggle watching the meter while adjusting the signal level to keep an "on-scale" reading.

Another possibility is to modify an existing receiver so that an external signal level meter with much greater range may be connected.

Picking a receiver:

When I decided to take this approach I began looking for a 2 meter (the primary band of interest) receiver with these properties:
  • It had to be cheap.  No need to explain this one!
  • It had to be synthesized.  It's very helpful to be able to change frequencies.
  • Having a 10.7 MHz IF was preferable.  The reasons for this will become apparent.
  • It had to have enough room inside it to allow the addition of some extra circuitry to allow "picking off" the IF signal.  After all, that's the entire point of this exercise.
  • It had to be easy to use.  Because one may not use this receiver too often, it's best not to pick something overly complicated that would require a manual to remind one how to do even the simplest of tasks.
  • The radio would still be a radio.  Another goal of the modification was that the radio had to work exactly as it was originally designed after you were done - that is, you could still use it as a transceiver!
Based on a combination of past familiarity with various 2 meter HTs and looking at prices on Ebay, at least three possibilities sprang to mind:
  • The Henry Tempo S-1.  This is a very basic 2 meter-only radio and was the very first synthesized HT available in the U.S.  One disadvantage is that, by default, it uses a threaded antenna connection rather than a more-standard BNC connector and would thus require the user to install one to allow it to be used with other types of antennas.  Another disadvantage is that it has a built-in non-removable battery.  Its power supply voltage is limited to under 11 volts.  (The later Tempo S-15 has fewer of these disadvantages and may be better, but I am not too familiar with it.)
  • The Kenwood TH-21.  This, too, is a very basic 2 meter-only radio.  It uses a strange threaded RCA (phono)-like connector, but this mates with easily-available RCA-BNC adapters.  Its disadvantage is that it is small enough that the added circuitry may not fit inside.  It, too, has a distinct limitation on its power supply voltage range, requiring about 10 volts.
  • The Icom IC-2A/T.  This basic radio was, at one time, one of the most popular 2 meter HTs which means that there are still plenty of them around.  It can operate directly on 12 volts, has a standard BNC antenna connector, and has plenty of room inside the case for the addition of a small circuit.
Each of these radios is a thumbwheel-switch tuned, synthesized, plain-vanilla radio.  I chose the Icom IC-2AT (it is also the most common) and obtained one on Ebay for about $40 (including accessories); another $24 bought a clone of the IC-8, an 8-cell alkaline battery holder (from Batteries America), which is normally populated with 2.5 amp-hour NiMH AA cells.  With its squelched receive current of around 20 milliamps I will often use this radio as a "listen around the house" radio since it will run for days and days!
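The "days and days" claim checks out with simple arithmetic (capacity and current figures from the paragraph above):

```python
CAPACITY_MAH = 2500   # NiMH AA cells in the battery holder
SQUELCHED_MA = 20     # squelched receive current of the IC-2AT

hours = CAPACITY_MAH / SQUELCHED_MA
print(hours, round(hours / 24, 1))  # 125.0 5.2 -- about five days of continuous receive
```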

"Why not use one of those cheap chinese radios?"

Upon reading this you may be thinking "why spend $$$ on an ancient radio when you can buy a cheap Chinese radio that has lots of features for $30-ish?"

The reason is that these radios have neither a user-available "S" meter with good dynamic range nor an accessible IF (Intermediate Frequency) stage.  Because these radios are, in effect, direct-conversion receivers with DSP magic occurring on-chip, there is absolutely nowhere that one could connect an external meter - the needed signal simply does not exist!

While many of these "single-chip" radios do have some built-in S-meter circuitry, the manufacturers of these radios have, for whatever reason, not made it available to the user - at least not in a format that would be particularly useful for transmitter hunting.

Modifying the IC-2A/T (and circuit descriptions):

This radio is the largest of those mentioned above and has a reasonable amount of extra room inside its case for the addition of the few small circuits needed to complete the modification.  When done, this modification does not, in any way, affect otherwise normal operation of the radio:  It can still be used as it was intended!

An added IF buffer amplifier:

This radio uses the Motorola MC3357 (or an equivalent such as the MP5071) as the IF/demodulator.  This chip takes the 10.7 MHz IF from the front-end mixer and 1st IF amplifier stages and converts it to a lower IF (455 kHz) for further filtering and limiting, after which it is demodulated using a quadrature detector.  Unfortunately, the MC3357 lacks an RSSI (Receive Signal Strength Indicator) circuit - which partly explains why this radio doesn't have an S-meter in the first place.  Since we were planning to feed a sample of the IF from this receiver into our field strength meter anyway, this isn't too much of a problem.

Figure 2:
The source-follower amplifier tacked atop the IF amplifier chip.
Click on the image for a larger version.
We actually have a choice of two different IFs:  10.7 MHz and 455 kHz.  At first glance, 455 kHz might seem to be the better choice as the signal there has already been amplified and is at a lower frequency - but there's a problem:  It compresses easily.  Monitoring the 455 kHz line, one can easily "see" signals in the microvolt range, but by the time a signal reaches the -60 dBm range or so, this signal path is already starting to go into compression.  This is a serious problem, as -60 dBm is about the strength one gets from a 100 watt transmitter in clear line-of-sight about 20 miles distant, using unity-gain antennas on each end.
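That -60 dBm figure can be sanity-checked with the free-space (Friis) path-loss formula; a quick sketch, assuming 146 MHz as a representative 2 meter frequency:

```python
import math

def friis_rx_dbm(tx_watts, freq_mhz, dist_km, gain_tx_dbi=0, gain_rx_dbi=0):
    """Received power under free-space path loss, unity-gain antennas by default."""
    tx_dbm = 10 * math.log10(tx_watts * 1000)   # watts -> dBm
    fspl_db = 20 * math.log10(dist_km) + 20 * math.log10(freq_mhz) + 32.45
    return tx_dbm - fspl_db + gain_tx_dbi + gain_rx_dbi

# 100 watts, 146 MHz, 20 miles (~32.2 km):
print(round(friis_rx_dbm(100, 146, 20 * 1.609), 1))  # -55.9 dBm
```

The result lands within a few dB of the article's "-60 dBm at 20 miles" rule of thumb, so a receiver that compresses at -60 dBm really would be useless for anything but distant hunts.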

The other choice is to tap the signal at the 10.7 MHz point, before it goes into the MC3357.  This signal, not having been amplified as much as the 455 kHz signal, does not begin to saturate until the input reaches about -40 dBm or so, reaching full saturation by about -35 dBm.  One point of concern here was the fact that at this point the signal has had less filtering than at 455 kHz, the latter going through a "sharper" bandpass filter.  While the filtering at 10.7 MHz is a bit broader, the 4 poles of crystal filtering do attenuate a signal 20 kHz away by at least 30 dB - so unless there's another very strong signal on this adjacent channel, it's not likely that there will be a problem.  As it turns out, the slightly "broader" response of the 10.7 MHz crystal filters is conducive to "offset tuning" - that is, deliberately tuning the radio off-frequency to reduce the signal level reading when you are near the transmitter being sought.


To be able to tap this signal without otherwise affecting the performance of the receiver requires a simple buffer amplifier, and a JFET source-follower does the job nicely (see figure 6, below, for the diagram).  Consisting of only six components (two resistors, three capacitors and an MPF102 JFET - practically any N-channel JFET will do) this circuit is simply tack-soldered directly onto the MC3357 as shown in figures 2 and 3.  This circuit very effectively isolates the (more or less) 50 ohm load of the field strength meter from the high-impedance 10.7 MHz input to the MC3357, and it does so while drawing only about 700 microamps - just 3-4% of the radio's total current when it is squelched.

Figure 3:
A wider view of the modifications to the radio.
Click on the image for a larger version.
As can be seen from the pictures (figure 2 and 3) all of the required connections were made directly to the pins of the IC itself, with the 330 pF input capacitor connecting directly to pin 16.  The supply voltage is pulled from pin 4, and pins 12 and/or 15 are used for the ground connection. 

A word of warning:  Care should be taken when soldering directly to the pins of this (or any) IC to avoid damage.  It is a good idea to scrape the pin clean of oxide and use a hot soldering iron so that the connection can be made very quickly.  Excess heat and/or force on the pin can destroy the IC!  This particular IC is not especially fragile - the same care applies to any chip.

Getting the IF signal outside the radio:

The next challenge was getting our sampled 10.7 MHz IF energy out of the radio's case.  While it may be possible to install another connector on the radio somewhere, it's easiest to use an existing connector - such as the microphone jack.

One of the goals of these modifications was to retain the complete function of the radio as if it were stock, so the microphone jack had to keep working as designed.  This meant multiplexing both the microphone audio (and keying) and the IF onto the tip of the microphone connector - a reasonable compromise, since I wasn't planning to use the signal meter and a remote microphone at the same time.  Because of the very large difference in frequencies (audio versus 10.7 MHz) it is very easy to separate the two using capacitors and an inductor:  The 10.7 MHz IF signal is passed to the connector through a series capacitor while it is blocked from the microphone/PTT line with a small choke - anything from 4.7uH to 100uH will work fine.
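The reactance formulas show just how easy this separation is.  A quick sketch, assuming a 10 uH choke (within the 4.7-100 uH range above) and a 330 pF series capacitor - both illustrative values, not necessarily what is in the radio:

```python
import math

def x_l(f_hz, l_h):
    """Inductive reactance in ohms."""
    return 2 * math.pi * f_hz * l_h

def x_c(f_hz, c_f):
    """Capacitive reactance in ohms."""
    return 1.0 / (2 * math.pi * f_hz * c_f)

IF_HZ, AUDIO_HZ = 10.7e6, 3e3          # IF versus top of the voice band
CHOKE_H, CAP_F = 10e-6, 330e-12        # assumed example values

# The choke looks like hundreds of ohms at the IF but a dead short at audio;
# the capacitor does the opposite, so each signal sees only its own path.
print(f"Choke: {x_l(IF_HZ, CHOKE_H):.0f} ohms at IF, "
      f"{x_l(AUDIO_HZ, CHOKE_H):.2f} ohms at audio")
print(f"Cap:   {x_c(IF_HZ, CAP_F):.0f} ohms at IF, "
      f"{x_c(AUDIO_HZ, CAP_F)/1000:.0f}k ohms at audio")
```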
Figure 4:
The modifications at the microphone jack.
Click on the image for a larger version.

The buffered IF signal is conducted to the microphone jack using some small coaxial cable:  RG-174 type will work, but I found some slightly smaller coax in a junked VCR.  To make the connections, the two screws on the side of the HT's frame were removed, allowing it to "hinge" open, giving easy access to the microphone connector.  The existing microphone wire connected to the "tip" connection was removed and the choke was placed in series with it, with the combination insulated with some heat-shrinkable tubing.

The coax from the buffer amp was then connected directly to the "tip" of the microphone connector.  One possible coax routing is shown in Figure 4 but note that this routing prevents the two halves of the chassis from being fully opened in the future unless it is disconnected from one end.  If this bothers you, a longer cable can be routed so that it follows along the hinge and then over to the buffer circuit.  Note:  It is important to use shielded cable for this connection as the cable is likely to be routed past the components "earlier" in the IF strip and instability could result if there is coupling.

Interfacing with the Field Strength meter:

Using RG-174 type coaxial cable, an adapter/interface cable was constructed with a 2.5mm connector on one end and a BNC on the other.  One important point is that a small series capacitor (0.001uF) is required in this line somewhere as a DC block on the microphone connector:  The IC-2A/T (like most HTs) detects a "key down" condition on the microphone by detecting a current flow on the microphone line and this series capacitor prevents current from flowing through the 50 ohm input termination on the field strength meter and "keying" the radio.

Dealing with L.O. leakage:

As soon as it was constructed I observed that even with no signal, the field strength meter showed a weak signal (about -60 to -65 dBm) present whenever the receiver was turned on, effectively reducing sensitivity by 20-25 dB.  As I suspected when I first noticed it, this signal was coming from two places:
  • The VHF local oscillator.  On the IC-2A/T, this oscillator operates 10.7 MHz lower than the receive frequency.
  • The 2nd IF local oscillator.  On the IC-2A/T this oscillator operates at 10.245 MHz - 455 kHz below the 10.7 MHz IF as part of the conversion to the second IF.
The magnitude of each of these signals was about the same, roughly -65 dBm or so.  The VHF local oscillator would be very easy to get rid of - a very simple lowpass filter (consisting of a single capacitor and inductor) would adequately suppress it - but the 10.245 MHz signal poses a problem, as it is too close to 10.7 MHz to be attenuated sufficiently by a very simple L/C filter without affecting the desired signal.
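A quick numerical check shows why: the gentle skirts of a simple L/C section treat 10.245 and 10.7 MHz almost identically, while the VHF oscillator - more than a decade higher - is easily crushed.  A sketch using an ideal Butterworth lowpass magnitude (the two-pole response, 12 MHz cutoff and 135 MHz LO frequency are illustrative assumptions):

```python
import math

def butterworth_lp_atten_db(f, f_c, n_poles):
    """Attenuation of an ideal n-pole Butterworth lowpass at frequency f."""
    return 10 * math.log10(1 + (f / f_c) ** (2 * n_poles))

FC = 12e6  # hypothetical cutoff just above the 10.7 MHz IF
for f in (10.245e6, 10.7e6, 135e6):
    print(f"{f/1e6:7.3f} MHz: {butterworth_lp_atten_db(f, FC, 2):5.1f} dB")
# The two IF-region frequencies differ by well under 1 dB of attenuation,
# while the VHF LO region gets 40+ dB from the same simple filter.
```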

Figure 5:
The inline 10.7 MHz bandpass filter built around a ceramic
filter.  The diagram for this may be seen in the upper-right
corner of Figure 6, below.
Click on the image for a larger version.
Fortunately, with the IF being 10.7 MHz, we have another (cheap!) option:  A 10.7 MHz ceramic IF filter.  These filters are ubiquitous, being used in nearly every FM broadcast receiver made since the 80s, so if you have a junked FM broadcast receiver kicking around, you'll likely find one or more of these inside.  Even if you don't, they are relatively cheap ($1-$2) and readily available from many mail-order outlets.  This filter is shown in the upper-right corner of the diagram in Figure 6, below.

The precise type of filter is not important as they will typically have a bandpass between 150 kHz and 300 kHz wide (depending on the application) at their -6 dB points and will easily attenuate the 10.245 MHz local oscillator signal by at least 30 dB.  With this bandwidth it is possible to use a 10.7 MHz filter (which, themselves, vary in exact center frequency) for some of the "close, but not exact" IFs that one often finds near 10.7 MHz, such as 10.695 or 10.75 MHz.  The only "gotcha" with these ceramic filters is that their input/output impedances are typically in the 300 ohm area and require a (very simple) matching network - an inductor and capacitor - on the input and output to interface them with a 50 ohm system.  The values used for matching are not critical:  the inductor, ideally around 1.8uH, could be anything from 1.5 to 2.2uH without much impact on performance other than a very slight change in insertion loss.
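The matching values fall out of the standard L-network equations.  A sketch (assuming the common lowpass L-network topology - series inductor on the 50 ohm side, shunt capacitor on the 300-ish ohm side - and a nominal 330 ohm filter impedance); note that the computed inductance lands right at the ~1.8 uH mentioned above:

```python
import math

def l_match(r_low, r_high, f_hz):
    """Component values for a lowpass L-network matching r_low up to r_high.
    Series inductor on the low-impedance side, shunt capacitor on the high side."""
    q = math.sqrt(r_high / r_low - 1.0)   # network Q set by the impedance ratio
    w = 2 * math.pi * f_hz
    l_series = q * r_low / w              # series inductor, henries
    c_shunt = q / (w * r_high)            # shunt capacitor, farads
    return l_series, c_shunt

l_h, c_f = l_match(50.0, 330.0, 10.7e6)
print(f"L = {l_h*1e6:.2f} uH, C = {c_f*1e12:.0f} pF")
```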

While this filter could have been crammed into the radio, I was concerned that the L.O. leakage might somehow find its way around it and into the connector.  Instead, this circuit was constructed "dead bug" on a small scrap of circuit board material with sides, "potted" in thermoset ("hot melt") glue, and it can be covered with electrical tape, heat-shrinkable tubing or "plastic dip" compound, with the entire circuit installed in the middle of the coax line (making a "lump.")  Alternatively, this filter could have been installed within the field strength meter itself, either on its own connector or sharing the main connector and being switchable in/out of the circuit.

Figure 6:
The diagram, drawn in the 1980s Icom style, showing the modified circuity and details of the added source-follower JFET amplifier (in the dashed-line box) along with the 10.7 MHz bandpass filter (upper-right) that is built into the cable.
Click on the image for a larger version.
With this additional filtering the L.O. leakage is reduced to a level below the detection threshold of the field strength meter, allowing sub-microvolt signals to be detected by the meter/radio combination.

Operation and use:

When using this system, I simply clip the radio to my belt and adjust it so that I can listen to what is going on.

There's approximately 30 dB of processing gain between the antenna and the 10.7 MHz IF output - that is, a -100 dBm signal on the antenna on 2 meters will show up as a -70 dBm signal at 10.7 MHz.  What this means is that sub-microvolt signals are just detectable at the bottom end of the range of the field strength meter.  From a distance, a simple gain antenna such as a 3-element "Tape Measure Yagi" (see the article "Tape Measure Beam Optimized for Direction Finding" - link) will establish a bearing, the antenna providing both an effective signal boost of about 7dB (compared to an isotropic radiator) and directivity.
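Putting the gain, the meter's floor and the receiver's saturation together gives a simple picture of the usable window.  A toy model (the hard clip at -40 dBm is my simplification - the article notes compression is actually gradual between -40 and -35 dBm):

```python
GAIN_DB = 30.0        # antenna-to-IF processing gain stated above
METER_FLOOR = -70.0   # approximate AD8307 detection floor, dBm
SAT_IN_DBM = -40.0    # input level where the IF chain starts compressing, dBm

def meter_reading(antenna_dbm):
    """Rough model of the meter indication versus antenna-level signal:
    signal plus processing gain, clipped by receiver saturation and meter floor."""
    return max(min(antenna_dbm, SAT_IN_DBM) + GAIN_DB, METER_FLOOR)

for level in (-110, -100, -70, -40, -20):
    print(f"{level:4d} dBm at antenna -> {meter_reading(level):6.1f} dBm indicated")
# Everything below -100 dBm piles up at the floor; everything above -40 dBm
# piles up at the top - hence the offset-tuning and attenuator tricks below.
```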

While driving about looking for a signal I use a multi-antenna (so-called) "Doppler" type system with four antennas being electrically rotated to get the general bearings with the modified IC-2AT being the receiver in that system.  With the field strength meter connected I can hear its audio tone representing the signal strength without need to look at it.  As I near the signal source and the strength increases, I have both the directional indication and the rising pitch of the tone as dual confirmation that I am approaching it.

The major advantage of using the HT as a tunable "front end" for the field strength meter is that the meter gains greatly enhanced selectivity and sensitivity - but this is not without cost:  As noted before, this detection system will begin to saturate at about -40 dBm, fully saturating above -35 dBm - a "moderately strong" signal.  In "hidden-T" terms, it will "peg" when within a hundred feet or so of a 100 mW transmitter with a mediocre antenna.

When the signals become this strong, you can do one of several things:
  • Detune the receiver by 5, 10, 15 or even 20 kHz.  This will reduce the sensitivity by moving the signal slightly out of the passband of the 10.7 MHz IF filters.  This is usually a very simple and effective technique, although heavy modulation can cause the signal strength readings to vary.
  • Add attenuation to the front-end of the receiver.  The plastic case of the IC-2A/T is quite "leaky" in terms of RF ingress, but it is good enough for a 20 dB inline attenuator to work nicely and will thus extend usable range to -20 to -15 dBm.  Although I have not tried it, an "offset attenuator" may extend this even further.
  • When you are really close to the transmitter being sought you can forgo the receiver altogether, connecting the antenna directly to the field strength meter!
If you want to be really fancy, you can build the 10.7 MHz bandpass filter and add switches to the field strength meter so that the 20 dB of attenuation can be switched in and out, and the antenna routed either to the receiver or to the field strength meter - using a resistive or hybrid splitter to make sure that the receiver still gets some signal from the antenna even when the field strength meter is connected to it.

What to use as the field-strength meter:

The field strength meter used is one based on the Analog Devices AD8307, which is useful from below 1 MHz to over 500 MHz, providing a nice, logarithmic output over a range from below -70dBm to above +10dBm.  It is, however, broad as the proverbial "barn door", and that fact - combined with a sensitivity of "only" -70dBm - makes it nowhere near enough by itself to be useful with weak signals, especially if there are any other radio transmitters nearby, including radio and TV stations within a few tens of miles/kilometers.  Integrating this broadband detector with the narrowband, tunable receiver IF and its gain makes for a complete system useful for signals ranging from weak to strong.

The description of an audible field-strength meter may be found on the web page of the Utah Amateur Radio club in another article that I wrote, linked here:  Wide Dynamic Range Field Strength Meter - link.  One of the key elements of this circuit is that it includes an audio oscillator with a pitch that increases in proportion with the dB indication on the meter, allowing "eyes-off" assessment of the signal strength - very useful while one is walking about or in a vehicle.
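Because the AD8307's output is already logarithmic, both the dB readout and the dB-proportional pitch reduce to linear maps.  A sketch assuming the chip's nominal datasheet figures (25 mV/dB slope, -84 dBm intercept) and an arbitrary, hypothetical pitch law - not the actual constants used in the linked meter:

```python
AD8307_SLOPE_V_PER_DB = 0.025   # nominal 25 mV/dB from the datasheet
AD8307_INTERCEPT_DBM = -84.0    # nominal log intercept from the datasheet

def ad8307_dbm(v_out):
    """Input power implied by the AD8307's output voltage."""
    return v_out / AD8307_SLOPE_V_PER_DB + AD8307_INTERCEPT_DBM

def tone_hz(dbm, base_hz=300.0, hz_per_db=25.0):
    """Hypothetical dB-proportional pitch for 'eyes-off' readout."""
    return base_hz + hz_per_db * (dbm - AD8307_INTERCEPT_DBM)

v = 0.6  # 600 mV out of the detector
print(f"{ad8307_dbm(v):.0f} dBm -> {tone_hz(ad8307_dbm(v)):.0f} Hz")
```

Each added dB of signal raises the tone by a fixed number of hertz, which is what makes the rising pitch so easy to track by ear while walking or driving.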

There are also other web pages that describe the construction of an AD8307-based field strength meter (look for the "W7ZOI" power meter as a basis for this type of circuit) - and you can even buy pre-assembled boards on EvilBay (search on "AD8307 field strength meter").  The downside of most of these is that they do not include an audible signal strength indication to allow "eyes off" use, but this circuit could be easily added, adapted from that in the link above.

Another circuit worth considering is the venerable NE/SA605 or 615, which is, itself, a stand-alone receiver.  Of interest in this application is its "RSSI" (Receive Signal Strength Indicator) circuit, which has good sensitivity, is perfectly suited for use at 10.7 MHz, and has a nice logarithmic response with a wide dynamic range - nearly as great as the AD8307's.  Exactly how one would use just the RSSI pin of this chip is beyond the scope of this article, but information on doing this may be found on the web in articles such as:
  • NXP Application note AN1996 - link (see figure 13, page 19 for an example using the RSSI function only)

Additional comments:
  • At first, I considered using the earphone jack for interfacing to the 10.7 MHz IF, but quickly realized that this would complicate things if I wanted to connect something to the jack (such as a pair of headphones or a Doppler unit!) while DFing.  I decided that I was unlikely to need an external microphone while looking for a transmitter...
  • I haven't tried it, but these modifications should be possible with the 222 MHz and 440 MHz versions of this radio - not to mention other radios of this type.
  • Although not extremely stable, you can listen to SSB and CW transmissions with the modified IC-2A/T by connecting a general-coverage/HF receiver to the 10.7 MHz IF output and tuning +/- 10.7 MHz.  Signals may be slightly "warbly" - but they should be easily copyable!
Finally, if you aren't able to build such a system and/or don't mind spending the money and you are interested in what is possibly the best receiver/signal strength meter combination device available, look at the VK3YNG Foxhunt Sniffer - link.  This integrates a 2 meter receiver (also capable of tuning the 121.5 ELT frequency range) and a signal strength indicator capable of registering from less than -120dBm to well over +10dBm with an audible tone.

Comment:  This article is an edited/updated version of one that I posted on the Utah Amateur Radio Club site (link) a while ago.


[End]

This page stolen from "ka7oei.blogspot.com"

Wednesday, May 17, 2017

Teasing out the differences between the "AC" and "DC" versions of the Tesla PowerWall 2

Being naturally interested in such things, I've been following the announcements and information about the Tesla PowerWall 2 - the follow-on product of the (rarely seen - in the U.S., at least) "original" PowerWall.

Somewhat interestingly/frustratingly, clear, concise (and even vaguely) technical information on either version of the PowerWall 2 (yes, there are two versions - the "DC" and "AC") has been a bit difficult to find, so in my research, what have I found?

This page or its contents are not intended to promote any of the products mentioned nor should it be considered to be an authoritative source.

It is simply a statement of opinion, conjecture and curiosity based on the information publicly available at the time of the original posting.

It is certain that as time goes on that information referenced on this page may be officially verified, become commonplace, or proven to be completely wrong.

Such is the nature of life!

The "DC" PowerWall 2:
  • Data sheets (two whole pages, each - almost!) for both the DC and AC versions of the PowerWall may be found here - link.
Unless you have a "hybrid" solar inverter, this one is NOT for you - and if you had such an inverter, you'd likely already know it.  A "hybrid" inverter is one that is specifically designed to divert some of the energy from the PV array (solar panels) into storage, such as a battery, and use that stored energy later.

Unlike its "AC" counterpart (more on this later) this version of the PowerWall 2 does NOT appear to have an AC (mains) connection of any type - let alone an inverter (neither is mentioned in the brochure) - but rather it is an energy store for the solar panels on the DC input(s) of the hybrid inverter.   "Excess" power from the panels may be used to charge the battery, and this stored energy can be used to feed the inverter when the load (e.g. the house) exceeds what is available from the panels - when it is cloudy, when the load temporarily exceeds the output of the PV array, or when there is no sun at all (e.g. at night).

Whether or not this version of the PowerWall can actually be (indirectly) charged via the AC mains (e.g.  via a hybrid inverter capable of working "backwards" to produce AC from the mains) would appear to depend entirely on the capability and configuration of the hybrid inverter and the system overall.

But, you might ask, why would you ever want to charge the battery from the utility rather than from solar?  You might want to do this if there were variable tariffs in your area - say, $0.30/kWh during the peak hours of the day, but only $0.15/kWh at night - in which case it would make sense to supplant the "expensive" daytime power with "cheap" power bought at night to charge it up.

Whether or not this system would be helpful in a power outage also depends on the nature of the inverter to which it is connected:  Most grid-tie inverters become useless when the mains power disappears (e.g. they cannot produce any power for the consumer - more on this later) - and this applies to both "series string" (e.g. a large inverter fed high-voltage DC by a series of panels) and "microinverter" (small inverters at each of the panels) topologies.  Inverters configured for "island" operation (e.g. "free running" in the absence of a live power grid) or ones that can safely switch between "grid-tie" and "island" modes would seem to be appropriate if you use the DC PowerWall and want to keep your house "powered up" during a grid failure.

The "AC" PowerWall 2:
  • Data sheets (two whole pages, each - almost!) for both the DC and AC versions of the PowerWall may be found here - LINK.
While the "AC" version seems to have the same battery storage capacity as the "DC" version (e.g. approx. 13.5kWh) it also has an integrated inverter and charger that interfaces with the AC mains that is apparently capable of supporting any standard voltage from 100 to 277 volts, 50 or 60 Hz, split or single phase.  This inverter, rated for approximately 7kW peak and 5-ish kW continuous, is sufficient to run many households.  Multiple units may be "stacked" (e.g. connected in parallel-type configuration - up to nine of them, according to the data sheet linked above) for additional storage and capacity.
Unlike the "DC" version, all of the power inflow/outflow is via the AC power feed, which is to say, it will both output AC power via its inverter and charge its battery via that same connection.  What this means is that it need not (and cannot, really) be directly connected to the PV (photovoltaic) system at all - except, possibly, via a local network to gather statistics and provide control.  What seems clear is that this version has some means of monitoring the net flow into and out of the house to the utility, which means that the PowerWall can balance this out by "knowing" how much power it could use to charge its battery, or needs to output.

(The basic diagram of Figure 1, above, shows how such a system might be connected.  This diagram does not specifically represent a PowerWall, but rather how any battery-based inverter/charger system might be used to supply back-up power to a home in the past and future.)

Because its power would be connected "indirectly" via AC power connections to the PV system it should (in theory) work with either a series-string or microinverter-type system - or, maybe even if you have no solar at all if you simply want to charge it during times of lower tariffs and pull the charge back out again during high tariffs.  (The Tesla brochure simply says "Support for wide range of usage scenarios" under the heading "Operating Modes" - which could be interpreted many ways, but I have not actually seen an "official" suggestion of a use without any sort of solar.)

What might such a system look like - schematically, at least?

How might this version of the PowerWall operate?  First, let's take a look at a diagram of how any sort of battery/inverter/charger like this might be configured for a house.
Figure 1:
Diagram of a generic battery-based "whole house" backup system based on obvious requirements.  This is a very basic diagram, showing most of the needed components that would be required to interface a battery-based inverter/charger with a typical house's electrical system and a PV (PhotoVoltaic/solar) charging system.
For those not familiar with North American power systems, typical residences are fed with 240 volt, center-tapped service from the transformer, with this center-tap grounded at the service entrance.  This allows most devices to operate at 120 volts while those that consume large amounts of power (ranges, electric water heaters, electric dryers, air conditioners, etc.) are connected to a 240 volt circuit, which may or may not need the "neutral" lead at all.  In most other parts of the world there would be only "L1" and the "Neutral", operating at about 240 volts.
Click on the image for a larger version.

Referring to Figure 1, above:

Shown to the right of center is a switch that opens when the utility's power grid goes offline, isolating the house and the inverter/charger from the power grid.  Included with it is a voltage monitor (consisting of potential transducers, or "PTs") that can detect when the mains voltage has returned and stabilized and it is "safe" to reconnect to the grid.  The battery-based inverter/charger is connected across the house's mains so that it can both pull current to charge its battery and push power into the house in a back-up situation.

The "Net current monitoring" current transducers ("CTs") might be used to allow the inverter/charger to "zero out" the total current (and, thus, power) coming in from and going out to the power grid under normal conditions - for example, when its battery is being charged while extra power is being produced by the PV system, controlling the charge rate so that only that "extra" PV power is used, assuring, as much as possible, a net-zero flow to/from the utility.  The "House current monitoring" is used to determine how much current is being used by the entire house while the "PV current monitoring" is used to determine the contribution of the PV system.

By knowing these things it is possible to determine how much excess or deficit there may be in the PV system's production with respect to actual usage by the household.  Not shown is the current monitoring that would, no doubt, be included in the inverter/charger itself.  Some of the monitoring points shown may be redundant, as this information could be determined in other ways, but they are included for clarity.

Finally, a local network connection is shown for the inverter/charger so that the system components may communicate with each other, perhaps for control purposes, as well as via the Internet so that statistics may be monitored and recorded and firmware updates issued.

How it might operate in practice:

As can be seen in Figure 1 and from the explanation above, the PV system is connected to the input/output of the inverter/charger (which could be a PowerWall - or any other similar system) via the house wiring.  This means there is a path into the PowerWall to charge its battery, the same path out of it when it needs to supply power, and a means of monitoring the power flow.

With a system akin to that depicted in Figure 1, consider these possible scenarios:
  1. Excess power is being produced by the PV system and put back into the grid and the PowerWall's battery is fully-charged. Because the battery is fully-charged there is nowhere to put this extra power so it goes back into the grid, tracked by the "Net Meter" in the same way that it would be without a PowerWall.
  2. Excess power is being produced by the PV system and put back into the grid and the PowerWall's battery is not fully charged.  It will pull the amount of "excess" power that the PV system would normally be putting into the grid and charge its own battery at that same rate resulting in a net-zero amount of power being put into the grid.
  3. More power is being consumed by the user's household than is being produced by the solar array.  Depending on the state-of-charge and configuration of the PowerWall, it may produce enough power to make up the difference between what the PV system is producing and what the user needs.  At night this could (in theory) be 100% of the usage if the system were so configured.
  4. Even with no solar at all, it would theoretically be possible - given a higher daytime than nighttime tariff - to charge the battery overnight from the mains and put out power during the day, reducing overall power costs.
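The scenarios above amount to a simple power-balance rule.  The sketch below is a deliberately simplified model of that rule (the function name, the 5 kW inverter limit and the state-of-charge handling are all my assumptions - Tesla's actual control logic is unpublished and certainly more involved):

```python
def powerwall_flow_kw(pv_kw, load_kw, soc, inverter_max_kw=5.0):
    """Battery power for one instant: positive = charging, negative = discharging."""
    surplus = pv_kw - load_kw
    if surplus > 0:
        # Scenario 1: battery full, so the surplus flows to the grid (0 here).
        # Scenario 2: soak up the surplus, up to the inverter rating.
        return 0.0 if soc >= 1.0 else min(surplus, inverter_max_kw)
    # Scenario 3: make up the deficit from the battery if any charge remains.
    return max(surplus, -inverter_max_kw) if soc > 0.0 else 0.0

# Scenario 2: 5 kW of PV, a 2 kW load, battery half charged -> charge at 3 kW
print(powerwall_flow_kw(5.0, 2.0, soc=0.5))
# Scenario 3 at night: no PV, a 2 kW load -> discharge at 2 kW
print(powerwall_flow_kw(0.0, 2.0, soc=0.5))
```

Scenario 4 is just scenario 2 and 3 driven by the clock and the tariff instead of by sunlight.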
What about a power outage?

All of the above scenarios are to be expected - and they are more-or-less standard offerings for many of the battery-based products of this type - but what if the AC mains go down?  For the rest of this discussion we will ignore the "DC" version of the PowerWall as its capability would rely on the configuration of the user's hybrid inverter and its capabilities/configuration when it comes to supplying backup, "islanded" AC power.

As mentioned before, with a typical PV system - either "series string" (one large inverter) or distributed (e.g. "microinverter") - if the power grid goes offline the PV system becomes useless:  It requires the power grid to be present both to synchronize itself and to present an infinite "sink" into which it can always push all of the "extra" power it is producing.  Were such units not to go offline, dangerous voltages could be "back-fed" into the power grid and be a hazard to anyone who might be trying to repair it.  It is for this reason that all grid-tie inverters are, by law, required to go offline - or, at least, disconnect themselves completely from the power grid - during a mains power outage.

The "AC" version of the Tesla PowerWall's system includes a switch that automatically isolates the house from the power grid when there is a power failure.  Once this switch has isolated the house from the grid the inverter built into the PowerWall can supply power to the house - at least as long as its battery lasts.

What about charging the battery during a power outage?

Here is where it seems to get a bit tricky.

If all grid-tie inverter systems go offline when the power grid fails, is it possible to use the PV system to assist, or even charge, the PowerWall during a grid failure?  In other words, can you use power from your PV system to recharge the PowerWall's battery or, at the very least, supply some of the load to extend its battery run-time?

In corresponding with a company representative - and corroborated by data openly published by Tesla (see the FAQ linked near the bottom of this posting) - the answer would appear to be "yes" - but exactly how this works is not very clear.

Based on rather vague information and knowing the behavior of the components involved it would seem to need to work this way:
  • The power (utility) grid goes down.
    • The user's PV system goes offline with the failure of the grid.
    • The PowerWall's switch opens, isolating the house completely from the grid - aside from the ability to monitor when the power grid comes back up.
    • The inverter in the PowerWall now takes the load of the house.
Were this all that happened, the house would again go dark once the battery in the PowerWall or similar "back-up power system" was depleted, but there seems to be more to it than this when a PV system is involved, as in:
  • When the back-up power system's inverter goes online, the PV system again sees what looks like the power grid and comes back online.
    • As it does, the back-up power system monitors the total power consumption and usage and any excess power being produced by the PV system is used to charge its battery.
    • If the PV system is producing less power than is being used, the back-up power system will supply the difference:  Its battery will still be discharged, but at a lower rate.
But now it gets even trickier and a bit more vague.

What if there is extra power being produced by the PV system?

Grid-tie PV systems expect the power grid to be an infinite sink of power - but what if, during a power failure, when your back-up system is standing in for the power grid, you are producing 5kW of solar energy and your house is using only 2kW:  Where does the extra 3kW of production go if it cannot be sunk into the utility grid, and how does one keep the PV system from "tripping out" and going offline?

To illustrate the problem, let us bring up a related scenario in which we have a generator instead of a battery-based back-up power system.

There is a very good reason why owners of grid-tie systems are warned against using them to "assist" a backup generator that is substituting for the power grid.  What can happen is this:
  • The AC power goes out and the transfer switch connects the house to the generator.
  • The generator comes online and produces AC power.
  • If the AC power from the generator is stable enough (not all generators produce adequately stable power) the PV system will come back online thinking that the power grid has come back.
  • When the PV system comes back online and produces power, the generator's load decreases:  Most generator engines will slightly speed up as the load decreases.
    • When the generator's motor speeds up, the frequency goes high.  When this happens, the PV system will see that as unstable power and will go offline.
    • When the PV system goes off, the power is suddenly dumped on the generator and it is hit with the full load and slows back down.
  •  The cycle repeats, with the PV system and generator "fighting" each other as the PV system continually goes on and offline.
An even worse scenario is this:
  • The AC power goes out, the transfer switch connects the house to the generator.
  • The generator comes online and produces power.
  • The PV system comes up because it "sees" the generator as the power grid, but it's producing, say, 5kW while the house is, at the moment, using 2kW.
  • The PV system, because it thinks that it is connected to the power grid, will try to shove that extra 3kW somewhere, causing one or more of the following to happen:
    • The generator speeds up as power is "pushed" into it, its frequency goes high and trips the PV system offline, and/or:
    • If the PV system tries to push more power into the system than there is a place for it to go (e.g. the case, above, where the solar is producing 3kW more than is being used) the voltage will necessarily rise.  Assuming the generator doesn't "overspeed" and trip out and the frequency doesn't go high and trip the PV system offline, the PV system will keep increasing the voltage, trying to "push" the extra power into a load where there is nowhere for it to go:
      • As the PV system tries to "push" its excess power into the generator, it will increase the output voltage.  At some point the PV system will trip out on overvoltage, and the same "on-off" cycle mentioned above will occur.
      • It is possible that the excess power from the PV will "motor" the generator (e.g. the input power tries to "spin" the generator/motor) - an extremely bad thing which will probably cause it to overheat and eventually be destroyed if this goes unchecked.
      • If it is an "inverter" type generator, it can't be "motored", but the excess power will probably cause the generator's inverter to get stuck in the same "trip out/restart" cycle or simply fault out in an "overload" condition - or it might even be damaged or destroyed.
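The voltage-rise problem can be put into numbers.  This sketch uses the illustrative 5kW/2kW case from the text and models the house as a fixed resistance - a simplification, but it shows why an overvoltage trip is inevitable:

```python
import math

# Where does the surplus go?  With no grid to absorb it, pushing extra
# power into a fixed (resistive) load forces the voltage up.

pv_kw = 5.0
house_kw = 2.0
surplus_kw = pv_kw - house_kw           # 3 kW with nowhere to go

# Model the house as a fixed resistance at nominal voltage:
nominal_v = 240.0
load_ohms = nominal_v ** 2 / (house_kw * 1000.0)    # ~28.8 ohms

# Voltage needed to dissipate ALL of the PV output in that same load
# (P = V^2 / R, so V = sqrt(P * R)):
v_required = math.sqrt(pv_kw * 1000.0 * load_ohms)
print(round(v_required, 1))   # ~379.5 V - far past any overvoltage trip
```

In practice the PV inverter trips out on overvoltage long before anything like that voltage is reached, which is what produces the on/off cycling described above.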
If extra power from a grid-tie inverter is so difficult to deal with, what can be done with any excess that the PV system might produce?


What if we have excess power and nowhere to put it?

The question that comes to mind now is "What does the PV system do when the PowerWall's battery is fully charged and there is nowhere to put the extra energy being produced?"  Here, again, is the situation where our PV system is producing 5kW but we are using only 2kW, leaving an extra 3kW to go... where?

The answer to that question is not at all clear, but four possibilities come to mind:
  1. Divert the power elsewhere.  Some people with "island" systems utilize a feature of some solar power systems that indicate when excess power is available and use it to operate a diversion switch to shunt the excess power in an attempt to do something useful like run an electric water heater, pump water or simply produce waste heat with a large resistor bank.  Such features are usually available only on "island" systems (e.g. those that are entirely self-contained and not tied to the power grid) and with large battery banks.
  2. If it is possible, simply disable the PV system for a while and drain, say, 5-10% of the power out of the back-up power system battery before turning it back on and recharging it.  This will cause the PV system to cycle on and offline, but it will do so relatively slowly and it should cause no harm.
  3. Somehow communicate with the PV system and "tell" it to produce only the needed amount of energy.  This is a bit of a fine line to walk, but it is theoretically possible provided such a feature is available on the PV system.
  4. Alter the conditions of the power being produced by the back-up power system inverter such that it causes the PV system to go offline and stay that way until it needs to come back online.
Analyzing the possibilities:

Let's eliminate #1 as that will not apply to a typical grid-tie system, so that leaves us with:

#2:  Disable the PV system:

Of the three remaining possibilities, #2 would seem to be the most obvious:  It could be done simply by adding another switch on the output of the PV system that disconnects it from the rest of the house, forcing it to go offline - but this has its complications.

For example, in my system the PV is connected to a separate sub-panel located in the garage:  If one were to disconnect this branch circuit entirely, the power in the garage would go on and off depending on the state of charge of the PowerWall or other battery-based back-up power system.  Connecting a PV system to a sub-panel is not an unusual configuration - it is common to find them connected to sub-panels that also feed other loads such as the air conditioner or kitchen (e.g. wherever a suitable circuit is available) - so I'm guessing that they do not do it this way, unless the disconnection is made at the point where the PV system's feed joins the panel.

As noted above, one would disable the PV system once the battery had fully charged but enable it again once the battery had run down a bit - say, to 90-95%.  This way, one would not be rapid-cycling the PV system and the vast majority of the back-up power system's battery storage capacity would be available.
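This "disable at full, re-enable at 90-95%" scheme is ordinary hysteresis control.  A minimal sketch, using the thresholds suggested above (the function name is made up for illustration):

```python
# Hysteresis control of the PV system based on battery state-of-charge.
# The 90%/100% thresholds are the ones suggested in the text.

PV_OFF_AT = 1.00   # disconnect the PV system when the battery is full
PV_ON_AT = 0.90    # reconnect once the battery has run down a bit

def next_pv_state(pv_enabled, soc):
    """Return the new PV on/off state given battery state-of-charge (0..1)."""
    if pv_enabled and soc >= PV_OFF_AT:
        return False           # full:  force the PV system offline
    if not pv_enabled and soc <= PV_ON_AT:
        return True            # drained a bit:  bring it back online
    return pv_enabled          # in between:  leave it alone (no chatter)

# Between 90% and 100% the state doesn't change, so the system cycles
# slowly instead of rapidly "fighting" itself:
assert next_pv_state(True, 1.00) is False
assert next_pv_state(False, 0.95) is False   # still off in the dead band
assert next_pv_state(False, 0.90) is True
```

The 10% dead band is what prevents rapid cycling:  The PV system stays off for the entire run-down from 100% to 90% rather than toggling around a single threshold.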

While this, too, should work, I suspect that it is not the method used, as the drawings in the brochures don't show any such connection - but then again, they don't show the main house disconnect that would have to be present, either.  It would probably work just fine provided the PV system came back online gracefully when it was time to do so (e.g. no user intervention to "reset" anything.)

#4:  Alter the operating conditions to cause the PV system to go offline:

Then there is #4, and one interesting possibility comes to mind - and it is a kludge, but it should work.

One parameter that could be altered is the frequency at which the back-up power system's inverter operates:  Shifting it, say, 2-3 Hz above and/or below the proper line frequency would force the PV system offline.  While this minor frequency change is not likely to hurt anything (many generators' frequencies drift around much more than this with varying loads!) things that use the power line frequency as a reference - such as clocks - would drift rather badly unless the frequency were "dithered" above and below the proper frequency so that its long-term average was properly maintained.
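The clock argument is easy to put in numbers.  This sketch uses a 2.5 Hz offset (from the 2-3 Hz figure above); everything else is simple arithmetic:

```python
# Why dithering matters for clocks:  spend equal time above and below
# nominal and the long-term average frequency is unchanged.

NOMINAL_HZ = 60.0
OFFSET_HZ = 2.5   # illustrative, from the 2-3 Hz range mentioned above

# Alternate equal intervals high and low:
dithered = [NOMINAL_HZ + OFFSET_HZ, NOMINAL_HZ - OFFSET_HZ] * 100
average_hz = sum(dithered) / len(dithered)
print(average_hz)    # 60.0 - a line-referenced clock keeps good time

# By contrast, sitting at +2.5 Hz continuously would make such a clock
# gain 2.5/60ths of every day:
gain_sec_per_day = OFFSET_HZ * 86400 / NOMINAL_HZ
print(gain_sec_per_day)   # 3600.0 seconds - a full hour fast per day!
```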

I suspect that this is not a method that would be used, but it could work - at least in theory.

Edit - 20170719:
In digging around, I have determined that "dithering" the frequency is, in fact, one of several methods used by a battery-backed inverter to disable a PV inverter when the PV is producing more power than can be accommodated by the load and/or battery charger.  This technique, called "Frequency Shift Power Control" (FSPC) by at least one manufacturer (e.g. SMA, maker of the "SunnyBoy" line), is designed to do this very thing.

A description of this technique may be found in section 6 of the document at this link.
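A sketch of how such frequency-shift curtailment behaves:  The battery inverter raises its output frequency, and the PV inverter throttles its output linearly in response.  The 60.5/61.5 Hz breakpoints below are purely illustrative - real settings vary by manufacturer and region:

```python
# Sketch of frequency-shift power control:  PV output is throttled
# linearly as the line frequency is pushed above nominal.

F_FULL_HZ = 60.5   # at or below this, PV runs at full power (assumed)
F_ZERO_HZ = 61.5   # at or above this, PV output is forced to zero (assumed)

def pv_output_fraction(freq_hz):
    """Fraction of available PV power permitted at a given line frequency."""
    if freq_hz <= F_FULL_HZ:
        return 1.0
    if freq_hz >= F_ZERO_HZ:
        return 0.0
    return (F_ZERO_HZ - freq_hz) / (F_ZERO_HZ - F_FULL_HZ)   # linear ramp

print(pv_output_fraction(60.0))   # 1.0 - normal operation
print(pv_output_fraction(61.0))   # 0.5 - curtailed to half power
print(pv_output_fraction(61.5))   # 0.0 - shut down entirely
```

The appeal of this scheme is that it needs no data connection at all:  The line frequency itself is the control channel between the two inverters.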

Whether or not this is one of the control methods used by the PowerWall is not known at this time.
#3:  "Talk" to the PV system and control the amount of power that it is producing:

That leaves us with #3:  Communicate with the PV system and "tell" it (perhaps using the "ModBus" interface) to produce only enough power to "zero" out the net usage.

The problem with this method is that it would depend on the capabilities of the PV inverter system and require that they support such specific remote control functions.  While it is very possible that some do, this method would be limited to those so-equipped.

#3 and #2:  "Talk" to the PV system to turn it on and off as needed:

Included in #3 could be a variant of method #2:  Sending a command to the inverter via its network connection to simply shut down, and to come back online as needed, keeping the battery between, say, 90% and 100% charge as mentioned above.

This second variant of #3 seems the most likely approach, as some set of commands capable of this is probably implemented widely across vendors and models.

* * *

What do I think the likelihood to be?

I'm betting on the second variant of #3 where a command is sent to the PV system to tell it to turn off - at least until there is, again, somewhere to "send" excess power.

* * *

Having said all of this, there is a product FAQ put out by Tesla that seems to confirm the basic analysis - that is, the system's ability to run "stand-alone" in the event of a power failure, with the charge maintained if there is sufficient excess PV capacity - read that FAQ here - LINK.

Additional information may be found on the GreenTech Media web site:  "The New Tesla Powerwall Is Actually Two Different Products" - LINK.  This article and follow-up comments seem to indicate that there were, at that time, only a few manufacturers of inverters, namely SolarEdge and SMA (a.k.a. SunnyBoy) with which Tesla was installing/interfacing their systems, perhaps indicating some version of #2 or #3, above.  Clearly, the comments, mostly from several months ago, are also offering various conjectures on how the system actually works.

I'm investigating getting a PowerWall 2 system to augment my PV generation and provide "whole house" backup and have been researching how it works.

While I have occasionally asked questions of representatives of Tesla, nothing that they have said is anything that could not be easily found in publicly-released information on the internet, and as of the original date of this posting I haven't signed anything that could possibly keep me from talking about it.

However it is done, it should be interesting!

* * *


Finally, if you can find more specific information - say from a public document or from others' experience and analysis that can add more to this, please pass it along!


[End]

This page stolen from "ka7oei.blogspot.com"

Saturday, April 29, 2017

An RV "Generator Start Battery" regulator/controller for use with a LiFePO4 power system

I was recently retrofitting my brother's RV's electrical system with LiFePO4 batteries (ReLi3on RB-100's).  This retrofit was done to allow much greater "run time" at higher power loads and to increase the amount of energy storage for the solar electric system, while not adding much weight or needing to vent corrosive fumes.  (These types of batteries are very safe - e.g. they don't burst into flame if damaged or abused.)

While I was doing this, I began to wonder what to do about the generator "start" battery.

Charging LiFePO4 batteries in an RV

The voltage requirements for "12 volt" Lead-Acid batteries are a bit different from those needed by LiFePO4 "12 volt" batteries:
  • Lead acid batteries need to be kept at 13.2-13.6 volts as much as possible to prolong their life (e.g. maintained at "full charge" to prevent sulfation).
  • LiFePO4  batteries may be floated anywhere between 12.0 and their "full charge" voltage of around 14.6 volts.
  • Routinely discharging lead-acid batteries below 50% can impact their longevity - and they must be recharged immediately to prevent long-term damage.
  • LiFePO4  batteries may be discharged to at least 90% routinely - and they may be left there, provided their voltage is not allowed to go too low.
  • Lead acid batteries may be used without any management hardware:  Maintaining a proper voltage is enough to ensure a reasonable lifetime.
  • LiFePO4  batteries must have some sort of battery management hardware to protect against overcharge and over-discharge as well as to assure proper cell equalization.  Many modern LiFePO4 batteries (such as the "Rel3ion" devices used here) have such devices built in.
  • Conventional RV power "converters" are designed to apply the proper voltage to maintain lead-acid batteries (e.g. maintain at 13.6 volts.)
  • Because LiFePO4 batteries require as much as 14.6 volts to attain 100% charge (a reasonable charge may be obtained at "only" 14.2 volts) connecting them directly to an existing RV with this lower voltage means that they may never be fully-charged! 
  • Modern, programmable chargers (e.g. inverter-chargers, solar charge controllers) have either "LiFePO4" modes or "custom" settings that may be configured to accommodate the needs of LiFePO4 batteries.  While the lower voltage (nominal 13.6 volts) will not hurt the LiFePO4 batteries, they likely cannot be charged to more than 40-75% of their rated capacity at that voltage.  (Approx. 13.6-13.7 volts is the lowest voltage where one can "mostly" charge a LiFePO4 battery.)
  • Because of Peukert's law, one can only expect 25-50% of the capacity of a lead-acid battery to be available at high amperage (e.g. 0.5C or higher) loads.
  • With LiFePO4 batteries, more than 80% of the battery's capacity can be expected to be available at similar, high-amperage.  What this means is that at such high loads, a LiFePO4 battery can supply about twice the overall power when compared with a lead-acid battery of the same amp-hour rating.  At low-current loads the two types of batteries are more similar in their available capacity.
In short:  Unless an existing charging system can be "tweaked" for different voltages and charging conditions, one designed for lead-acid batteries may not work well for LiFePO4 batteries.  In some cases it may be possible to set certain "equalize" and "absorption" charge cycle parameters to make them useful with LiFePO4s, but doing this is beyond the scope of this article.
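The Peukert effect mentioned in the list above can be put into code.  The exponents below are typical textbook values, assumed here only for illustration - any particular battery's exponent must come from its datasheet or measurement:

```python
# Peukert's law:  delivered capacity shrinks as discharge current grows.
# delivered_Ah = C * (C / (I * H)) ** (k - 1), where C is the rated
# capacity at the H-hour rate and k is the Peukert exponent.

def delivered_ah(c_rated_ah, i_amps, k, h_rate_hours=20.0):
    """Amp-hours actually delivered at a constant discharge current."""
    return c_rated_ah * (c_rated_ah / (i_amps * h_rate_hours)) ** (k - 1)

C_AH = 100.0   # a "100 Ah" battery, rated at the 20-hour rate
LOAD_A = 50.0  # a heavy 0.5C load

lead_acid_ah = delivered_ah(C_AH, LOAD_A, k=1.30)  # assumed flooded lead-acid
lifepo4_ah = delivered_ah(C_AH, LOAD_A, k=1.05)    # assumed LiFePO4

print(round(lead_acid_ah), round(lifepo4_ah))   # ~50 vs ~89 Ah
```

At the 0.5C load the lead-acid battery delivers only about half its rating while the LiFePO4 delivers nearly all of it - in line with the figures quoted above.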
Originally the RV had been equipped with two "Group 24" deep-cycle/start 12 volt batteries in parallel (a maximum of, perhaps, 100 amp-hours, total, when brand new, for the pair of "no-name" batteries supplied) to run things like lights, and the pump motors for the water system, jacks and slide-outs and as the "start" battery for the generator.  Ultimately we decided to wire everything but the generator starter to the main LiFePO4 battery bank.

Why?

Suppose that one is boondocking (e.g. "camping" away from any source of commercial power) and the LiFePO4 battery bank is inadvertently run down. As they are designed to do, LiFePO4 battery systems will unceremoniously disconnect themselves from the load when their charge is depleted to prevent permanent damage, automatically resetting once charging begins.
 
If that were to happen - and the generator's starter was connected to the LiFePO4 system - how would one start the generator?

Aside from backing up the towing vehicle (if available), connecting its umbilical and using it to charge the system just enough to be able to get the generator started, one would be "stuck", unable to recharge the battery.  What's worse is that even if solar power is available, many charge controllers will go offline if they "see" that the battery is at zero volts (e.g. when they are in that "disconnected" state) - even if the sun is shining, preventing charging from even starting in the first place!

What we needed was a device that would allow the starting battery to be charged from the main battery, but prevent it from back-feeding and being discharged.


Note:
It is common in many RVs for the generator not to charge its own starting battery directly via an alternator.  The reason is that the makers of the generators and RVs assume that the starting battery will be charged by the towing vehicle and/or by the RV's AC-powered "voltage converter", itself powered from "shore" power or the generator's AC output.
But first, a few weasel words:
  • Attempt to construct/wire any of the circuits only if you are thoroughly familiar with electronics and construction techniques.
  • While the voltages involved are low, there is still some risk of dangerous electric shock.
  • With battery-based systems extremely high currents can present themselves - perhaps hundreds or even thousands of amps - should a fault occur.  It is up to the would-be builder/installer of the circuits described on this page - or anyone doing any RV/vehicle wiring - to properly size conductors for the expected currents and provide appropriate fusing/current limiting wherever and whenever needed.  If you are not familiar with such things, please seek the help of someone who is familiar before doing any wiring/modifications/connections!
  • This information is presented in good faith and I do not claim to be an expert on the subject of RV power systems, solar power systems, battery charging or anything else.
  • You must do due diligence to determine if the information presented here is appropriate for your situation and purpose.
  • YOU are solely responsible for any action, damage, loss or injury that might occur.  You have been warned! 
Why a "battery isolator" can't be used:

If you are familiar with such things you might already be saying "A device like this already exists - it's called a 'battery isolator'" - and you'd be mostly right.  We can't really use one of these devices, however, because LiFePO4 batteries operate at a full-charge voltage of between 14.2 and 14.6 volts, and a battery isolator would pass this voltage through unchanged.  If you apply 14+ volts to a "12 volt" lead-acid battery for more than a few days, you will likely boil away the electrolyte and ruin it!

What is needed is a device that will:
  • Charge the generator start battery from the main (LiFePO4 ) battery system
  • Isolate it from the main battery, and 
  • Regulate the voltage down to something that the lead-acid chemistry can take - say, somewhere around 13.2-13.6 volts.
In this case the main LiFePO4 battery bank will be maintained via the AC-powered (generator or shore) charging system and/or the solar power converters at its normal float voltage, so it makes sense to use it to keep the start battery fully-charged.

The solution:

After perusing the GoogleWeb I determined that there was no ready-made, off-the-shelf device that would do the trick, so I considered some alternatives that I could construct myself.

Note:  The described solutions are appropriate only where the main LiFePO4 bank's voltage is just a bit higher (a few volts) than the lead-acid starting battery:  They are NOT appropriate for cases where a main battery bank of a much higher voltage (e.g. 24, 48 volts, etc.) is being used to charge a "12 volt" starting battery.

Simplest:  "Dropper diodes":

Because we need to get from the nominal 14.2-14.6 volts of the LiFePO4 system down to 13.2-13.7 volts it is possible to use just two silicon diodes in series, each contributing around 0.6 volts drop (for a total drop of "about" 1.2 volts) to charge the starting battery, as depicted in Figure 1, below.  By virtue of the diodes' allowing current flow in just one direction, this circuit would also offer isolation, preventing the generator's battery from being discharged by back-feeding into the main battery.

To avoid needing to use some very large (50-100 amp) diodes and heavy wire to handle the current flow that would occur when the starter motor was active - or if the start battery was charging heavily - one simply inserts some series resistance to limit the current to a few amps.  Even though this would slow the charging rate somewhat, the starting battery would be fully recharged within a few hours or days at most - not a problem considering the rather intermittent use of the starting battery - more about that later.
Figure 1.
This circuit uses a conventional "1157" tail/turn signal bulb (NOT an LED replacement!) with both filaments tied together, providing more versatile current limiting.  Please read notes in the text concerning mounting of the light bulb.
The diodes (D1 and D2) should be "normal" silicon diodes rather than "Schottky" types, as it is the 0.6 volt drop per diode that we need to reduce the voltage from the LiFePO4 stack to something "safe" for lead-acid chemistry.  If one wished to "tweak" the voltage on the starting battery, one could eliminate one diode or replace just one of them with a Schottky diode to increase the lead-acid voltage by around 0.2-0.3 volts.
The use of current-limiting devices allows lighter-gauge wire to be used to connect the two battery systems together.
Click on the image for a larger version.
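A quick sanity check of the dropper-diode arithmetic.  The diode drops are the nominal figures used in the text; real drops vary somewhat with current and temperature:

```python
# Two silicon "dropper" diodes between the LiFePO4 bank and the
# lead-acid starting battery, using nominal voltage drops.

lifepo4_v = 14.4       # LiFePO4 bank in its normal full-charge region
si_drop = 0.6          # per silicon diode (nominal)
schottky_drop = 0.3    # assumed typical Schottky drop

two_silicon = lifepo4_v - 2 * si_drop
print(round(two_silicon, 1))     # 13.2 V - safe lead-acid float territory

# Swapping one silicon diode for a Schottky raises the float a bit:
one_each = lifepo4_v - si_drop - schottky_drop
print(round(one_each, 1))        # 13.5 V
```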

In lieu of a large power resistor, the ubiquitous "1157" turn signal/brake bulb is used as depicted in Figure 1.  Both filaments are tied together (the bulb's bayonet base being the common tie point) providing a "cold filament" resistance of 0.25-0.5 ohms or so, increasing to 4-6 ohms if a full 12 volts were placed across it.  The reason for the use of a light bulb will be discussed later.

Although not depicted in Figure 1, common sense dictates that appropriate fusing is required on one or both of the wires, particularly if one or more of the connecting wires is quite long, in which case the fuse would be placed at the "battery" end (either LiFePO4 or starting battery) of the wire(s) to provide protection should a fault occur between that source and the charge controller:  Fusing at 5-10 amps is fine for the circuit depicted.

This circuit is "good enough" for average use and as long as the LiFePO4 bank is floated at 14.2 volts with occasional absorption peaks at 14.6 volts, the lead-acid starting battery will live a reasonably long life.

A regulator/limiter circuit:

As I'm wont to do, I decided against the super-simple "dropper diode and light bulb" circuit - although it would have worked fine - instead designing a slightly fancier circuit that does about the same thing, but with more precise voltage regulation.  While more sophisticated than two diodes and a light bulb, the circuit need not be terribly complicated, as seen in Figure 2, below:
Figure 2:
The schematic diagram of the slightly more complicated version that provides tight voltage regulation for the starting battery.  As noted on the diagram, appropriate fusing of the input/output leads should be applied!
This diagram depicts a common ground shared between the main LiFePO4 battery bank and the starting battery, usually via the chassis or "star ground" connection.  In the as-built prototype, Q2 was an SUP75P03-07 P-channel power MOSFET while D1 was an MR750 5 amp, 50 volt diode. A circuit board is not available at this time.
NOT SHOWN is the fusing of the input and output leads, near-ish their respective batteries/source connections, with 10 amp automotive fuses.
Click on the image for a larger version.

How it works:

U1 is the ubiquitous TL431 "programmable" Zener diode.  If the "reference" terminal of this device (connected to the wiper of R5) goes above 2.5 volts, its cathode voltage gets dragged down toward the anode voltage (e.g. the device turns "on").  Because R4, R5 and R6 form a voltage divider, adjustable using 10-turn trimmer potentiometer R5, the desired battery float voltage may be scaled down to the 2.5 volt threshold required by U1.
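The setpoint arithmetic for that divider is simple.  The resistor values below are made-up round numbers for illustration - they are NOT the values used in the actual prototype:

```python
# TL431 setpoint arithmetic:  the R4/R5/R6 string divides the battery
# voltage so that 2.5 V appears at the reference pin at the desired
# float voltage.

V_REF = 2.5   # TL431 internal reference

def float_voltage(r_above_ref, r_below_ref):
    """Battery voltage at which the divider's tap reaches 2.5 V."""
    return V_REF * (r_above_ref + r_below_ref) / r_below_ref

# e.g. 22k above the tap, 5k below it (fixed resistance plus part of
# the R5 trimmer's travel) - hypothetical values:
print(float_voltage(22_000.0, 5_000.0))   # 13.5 V
```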

If the battery voltage is below the pre-set threshold (e.g. U1 is "seeing" less than 2.5 volts through the R4/R5/R6 voltage divider) U1 will be turned off and its cathode will be pulled up by R2.  When this happens Q1 is biased on, pulling the gate of P-channel FET Q2 toward ground, turning it on, allowing current to flow from the LiFePO4 system, through diode D1 and light bulb "Bulb1" and into the starting battery.

By placing R1 and R2 on the "source" side of FET Q2, the circuit is guaranteed to have two potential sources of power:  From the main LiFePO4 system, through D1, and from the starting battery via the "backwards" intrinsic diode inside Q2.  The 15 volt Zener diode (D2) protects the FET's gate from voltage transients that can occur on the electrical system.
Figure 3:
The completed circuit, not including the light bulb, wired on a small
piece of perforated prototype board.
A printed circuit board version is not available at this time.
Click on the image for a larger version.

Once the starting battery has attained and exceeded the desired float voltage set by R5 (typically around 13.5 volts for a "12 volt" lead-acid battery) U1's reference input "sees" more than 2.5 volts and turns on, pulling its cathode to ground.  When this happens the voltage at the base of Q1 drops, turning it off and allowing Q2's gate voltage, pulled up to its source by R1, to go high, turning it off and terminating the charge.

Because the cathode-anode voltage across U1 when it is "on" is between 1 and 2 volts, it is necessary to put an additional voltage drop in the emitter lead of Q1 - hence the presence of LED1, which offsets it by 1.8-2.1 volts.  Without the constant voltage drop provided by this LED, Q1 would always stay "on" regardless of the state of U1.  Capacitor C1, connected between the "reference" and cathode pins of U1, prevents instability and oscillation.

In actuality this circuit linearly "regulates" the voltage to the value set by R5 via closed loop feedback rather than simply switching on and off to maintain the voltage.  What this means is that between Q2 and the light bulb, the voltage will remain constant at the setting of R5, provided that the input voltage from the LiFePO4 system is at least one "diode drop" (approx. 0.6 volts) above that voltage.  For example, if the output voltage is set to 13.50 volts via R5, this output will remain at that voltage, provided that the input voltage is 14.1 volts (e.g. 13.5 volts plus the 0.6 volts drop of diode D1) or higher.

Because Q2, even when off, has a current path from the starting battery to the main LiFePO4 bank through its intrinsic diode, D1 is required to provide isolation between the higher-voltage LiFePO4 "main" battery bank and the starting battery to prevent a current back-feed.  Were this isolation not included and the main battery bank over-discharged, current would flow backwards through FET Q2 from the generator starting battery, discharging it - possibly to the point where the generator could not be started.

Again, D1's 0.6 volt (nominal) drop is inconsequential provided that the LiFePO4 bank is at least 0.6 volts above the starting battery - a condition that will be met most of the time if the charge on that bank is properly maintained via generator, solar or shore power charging.  A similar (>= 5 amp) Schottky diode could have been used for D1 to provide a lower (0.2-0.4 volt) drop, but a silicon diode was chosen because it was on hand.

Testing the device:

Assuming that it is wired/built correctly, connect a variable power supply to the input lead to simulate the LiFePO4 battery bank.  With the supply set a volt or two higher than the expected float voltage (e.g. 14.5-16 volts), adjust R5 to attain the desired start battery float voltage (13.5-13.7 volts is recommended - I use 13.55 volts) as measured on either side of "Bulb1".  Adjust the power supply voltage up and down a bit (e.g. below 12 volts and up to 17 volts):  If working correctly, the output voltage from the circuit should be rock-steady as long as the input voltage is at least about 0.6 volts above the set output voltage.

Now short the output leads (e.g. the "positive" output lead should be going through "Bulb1") and the light bulb should illuminate fully - at least assuming that your variable voltage supply is capable of supplying the 3-ish amps needed for the lamp.  Measuring directly at the circuit board's "ground" (common "battery negative") terminal and at the connection between Q2 and "Bulb1" you should still have the voltage set by R5 within a few hundredths of a volt.

Note:  If you were to connect the negative lead of the voltmeter to the power supply or the shorted output leads, the measured voltage would be a bit lower owing to voltage drop along the wires.

Shorting the output leads and measuring the voltage as done in the previous step demonstrates two important design points:
  • That the voltage at the output of Q2 remains steady from no-load to maximum current conditions.
  • That the light bulb is properly acting as a current limiting device.
While doing this "short circuit" test, make sure that the heat from the light bulb rises away from the circuit board itself and that the means of mounting it is capable of withstanding the bulb's heat without burning or melting anything.

Connecting the device:

On the diagram only a single "Battery negative" connection is shown and this connection is to be made only at the starting battery.  Because this circuit is intended specifically to charge the starting battery, both the positive and negative connections should be made directly to it as that is really the only place where we should be measuring its voltage!

Also noted on the diagram is the assumption that both the "main" (LiFePO4 ) battery and the starting battery share a common ground, typically via a common chassis ("star") ground point which is how the negative side of the starting battery ultimately gets connected to the negative side of the main LiFePO4 bank:  It would be rare to find an RV with two battery systems of similar voltages where this was not the case!

Finally, it should go without saying that appropriate fusing be included on the input/output leads that are located "close-ish" to the battery/voltage sources themselves in case one of the leads - or the circuit itself - faults to ground:  Standard automotive ATO-type "blade" fuses in the range of 5-10 amps should suffice.  In order to safely handle the fusing current and to minimize voltage drop while charging the connecting wires to this circuit should be in the range of 10 to 16 AWG with 12-14 AWG being ideal.

What's with the light bulb?
Figure 4:
The circuit  board mounted in an aluminum chassis box along with the
light bulb.  Transistor Q2 is heat-sinked to the box via insulating hardware
and the board mounted using 4-40 screws and aluminum stand-offs.  The light
bulb is mounted to a small terminal lug strips using 16 AWG wire soldered
to the bulb's base and the bottom pins:  A large "blob" of silicone (RTV)
was later added around the terminal strip to provide additional support.
Both the bottom of the box (left side) and the top include holes to allow
the movement of air to help dissipate heat.  Holes were drilled in the back
of the box (after the picture was taken) to allow mounting.
This box is, in this picture, laying on its side:  The light bulb would be
mounted UP so that its heat would rise away from the circuitry via
thermal convection.
Click on the image for a larger version.

The main reason for using a light bulb on the output is to limit the current to a reasonable value via its filament.  When cold, the parallel resistance of the two filaments of the 1157 turn-signal bulb is 0.25-0.5 ohms, but when "hot" (e.g. lit to full brilliance) it is 4-6 ohms.  Making use of this property is an easy, "low tech" way to provide both current limiting and circuit protection while, when the filament is cold (e.g. the charging battery is "mostly" charged), allowing plenty of charging current to flow.  Taking advantage of the changing resistance of a light bulb permits a higher charging current than would be practical with an ordinary fixed resistor.
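The bulb's two useful operating regions can be bounded with the cold/hot filament resistances quoted above.  This ignores the smooth transition between the two states - it is a bounding sketch, not a filament model:

```python
# The 1157 bulb as a self-adjusting current limiter, using the
# cold/hot resistance figures quoted in the text.

R_COLD_OHMS = 0.35   # filaments cold:  small charging currents flow freely
R_HOT_OHMS = 5.0     # filaments at full brilliance:  hard current limit

# Normal trickle charging:  at 1 A the cold bulb drops almost nothing.
trickle_a = 1.0
print(round(trickle_a * R_COLD_OHMS, 2))   # ~0.35 V lost in the bulb

# Fault (shorted or dead battery):  the now-lit bulb limits the current.
fault_v = 13.5
print(round(fault_v / R_HOT_OHMS, 1))      # ~2.7 A instead of hundreds
```

In other words, the filament's temperature coefficient gives nearly "free" behavior:  Negligible drop during normal charging, but only a few amps into a dead short.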


In normal operation the light bulb will not glow - even at relatively high charging current:  It is only if the starting battery were to be deeply discharged and/or failed catastrophically (e.g. shorted out) that the bulb would begin to glow at all and actually dissipate heat.  

Limiting the charging current to just a few amps also allows the use of small-ish (e.g. 5 amp) diodes, but more importantly it allows much thinner and easier-to-manage wire (as small as 16 AWG) to be used since the current can never be very high in normal operation.  Limiting the charging current is just fine for the starting battery due to its very occasional use:  It would take only an hour or two with a charge current of an amp or so to top off the battery after having started a generator on a cold day!

As noted on the diagram and in previous text, the light bulb must be mounted such that its heat dissipation at full brilliance will not burn or melt any nearby materials, as the glass envelope of the bulb can easily exceed the boiling temperature of water!  With both the "simple" diode version in Figure 1 and the more complex version in Figure 2 it is recommended that the bulb be mounted above the circuitry to take advantage of air convection to keep the components cool, as shown in Figure 4.  If a socket is available for the 1157 bulb, by all means use it - but still heed the warnings about the amount of heat that can be produced.

In operation:

When this circuit was first installed, the starting battery was around 12.5 volts after having sat for a week or two (during the retrofit work) without a charging source and having started the generator a half-dozen times.  With the LiFePO4 battery bank varying between 13.0 and 14.6 volts with normal solar-related charge/discharge cycles, it took about 2 days for the start battery to work its way up to 13.2 volts, at which point it was nearly fully charged - and then the voltage quickly shot up to the 13.55 volts as set by R5.  This rather leisurely charge was mostly a result of the LiFePO4 bank spending only brief periods above 13.8 volts.

Even though this doesn't charge the battery very quickly under normal conditions, as we'll see below this isn't really important.

How much of the starting battery's capacity is being used?

If one were to assume that the generator was set to run once per day and pull 100 amps (a current likely seen on a very cold day!) from the battery for 5 seconds, this would represent about 0.14 amp-hours - roughly the amount of charge contained in a fresh hearing-aid battery!

From this we can see that this "100 amps for 5 seconds" is an average current of just under 6 milliamps when spread across 24 hours - a value comparable to the self-discharge rate of the battery itself.   By these numbers you can see that it does not take much current at all to sustain a healthy battery that is used only for starting!
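The above is just ampere-second arithmetic, shown here as a trivial sketch using the same assumed 100 amp / 5 second draw:

```python
# Charge used by one generator start, and its average current when
# spread across a full day.  The cranking figures are the assumptions
# stated in the text, not measurements.

crank_amps = 100.0      # assumed cranking current on a very cold day
crank_seconds = 5.0     # assumed cranking time
hours_per_day = 24.0

amp_hours = crank_amps * crank_seconds / 3600.0          # charge per start
avg_milliamps = amp_hours / hours_per_day * 1000.0       # 24-hour average

print(f"{amp_hours:.2f} Ah per start")    # 0.14 Ah per start
print(f"{avg_milliamps:.1f} mA average")  # 5.8 mA average
```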

A standard group 24 "deep cycle starting" battery was used since it and its box had come with the RV.  For this particular application - generator starting only - a much smaller battery, such as one used for starting 4x4s or motorcycles, would have sufficed and saved a bit of weight.

The advantage of the group 24 battery is that it isn't particularly heavy and is readily available at auto-parts, RV and "big box" stores everywhere.  Because it is used only for starting it need not have been a "deep cycle" type, but could have been a normal "car" battery - although the use of something other than an RV-type battery would have necessitated re-working the battery connections, since RV batteries have handy nut/bolt posts to which connections may easily be made.


Final comments:


There are a few things that this simple circuit will not do, including "equalizing" the lead-acid battery and compensating for temperature - but neither is terribly important in this application.


Concerning equalization:

Even if the battery is of a type that can be equalized (many sealed batteries, including "AGM" types - those often mistakenly called "gel cells" - should never be equalized!) it should be remembered that it is usually not the lack of equalization that kills batteries, but rather neglect:  Allowing them to sit for any significant length of time without being floated above 2.17 volts/cell (e.g. above 13.0 volts for a "12 volt" battery) or, if they are the sort that need to be "watered", not keeping their electrolyte levels maintained.  Either of these will surely result in irreversible damage to the battery over time!

It is also common to compensate the float voltage for ambient temperature, but even this is not necessary as long as a "reasonable" float voltage is maintained - preferably one at which water loss is minimized over the entire expected temperature range.  Again, it is more likely to be a failure of elementary battery maintenance that kills a battery prematurely than a minor detail such as this!
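For reference, the per-cell and whole-battery float figures above are related by a simple factor of six, a "12 volt" lead-acid battery being six cells in series - a trivial sketch:

```python
# Relate per-cell and whole-battery float voltages for a "12 volt"
# lead-acid battery (six cells in series).

CELLS = 6  # cells in series in a "12 volt" lead-acid battery

def pack_float_volts(volts_per_cell: float, cells: int = CELLS) -> float:
    """Scale a per-cell float voltage up to the whole battery."""
    return volts_per_cell * cells

print(round(pack_float_volts(2.17), 2))  # 13.02 - the ~13.0 V minimum cited above
print(round(13.55 / CELLS, 3))           # 2.258 - per-cell value of the 13.55 V set point
```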

Practically speaking, if one "only" maintains a proper float voltage and keeps the battery "watered", the starting battery will likely last for at least its expected 3-5 year lifetime - particularly since, unlike a battery in standard RV service, this starting battery will never be subjected to the deep discharge cycles that can really take a toll on a lead-acid battery.  While an inexpensive, no-name "group 24" battery may have a capacity of "about" 50 amp-hours when new, it won't be until the battery has badly degraded - probably to the 5-10 amp-hour range - that one will even begin to notice starting difficulties.

Important also is the fact that the starting battery in this RV is connected to the main LiFePO4 bank's battery monitoring system (in this case a Bogart Engineering TM-2030-RV).  While this system's main purpose is to keep track of the amount of energy going into and out of the main LiFePO4 battery, it also has a "Battery #2" input connection on which one can check the starting battery's voltage - always a good thing to do at least once every day or two when one is "out and about".

Finally, considering the very modest requirements for a battery that is used only for starting the generator, it would take only a very small (1-5 watt) solar panel (plus regulator!) to maintain it.  While this was considered, it would have required that such a solar panel be mounted, wires run from it to the battery (not always easy to do on an RV!) and everything be waterproofed.  Because the connections to the main battery bank were already nearby, it was pretty easy to use this circuit, instead.

[End]

This page was stolen from "ka7oei.blogspot.com"