Wednesday, May 17, 2017

Teasing out the differences between the "AC" and "DC" versions of the Tesla PowerWall 2

Being naturally interested in such things, I've been following the announcements and information about the Tesla PowerWall 2 - the follow-on product of the (rarely seen - in the U.S., at least) "original" PowerWall.

Somewhat interestingly - and frustratingly - clear, concise (or even vaguely technical) information on either version of the PowerWall 2 (yes, there are two versions - the "DC" and the "AC") has been a bit difficult to find.  So, what have I found in my research?

This page or its contents are not intended to promote any of the products mentioned nor should it be considered to be an authoritative source.

It is simply a statement of opinion, conjecture and curiosity based on the information publicly available at the time of the original posting.

It is certain that, as time goes on, the information referenced on this page may be officially verified, become commonplace, or be proven completely wrong.

Such is the nature of life!

The "DC" PowerWall 2:
  • Data sheets (two whole pages, each - almost!) for both the DC and AC versions of the PowerWall may be found at this link - link.
Unless you have a "hybrid" solar inverter, this one is NOT for you - and if you had such an inverter, you'd likely already know it.  A "hybrid" inverter is one that is specifically designed to pass some of the energy from the PV array (solar panels) into storage, such as a battery, and use that stored energy later.

Unlike its "AC" counterpart (more on this later) this version of the PowerWall 2 does NOT appear to have an AC (mains) connection of any type - let alone an inverter (neither is mentioned in the brochure) - but rather it is energy storage for the solar panels on the DC input(s) of the hybrid inverter.  "Excess" power from the panels may be used to charge the battery, and this stored energy can be used to feed the inverter when the load (e.g. house) exceeds that available from the panels - when it is cloudy, when the load exceeds the output of the PV array for a period of time, or when there is no sun at all (e.g. at night).

Whether or not this version of the PowerWall can actually be (indirectly) charged via the AC mains (e.g. via a hybrid inverter capable of working "backwards", taking power from the AC mains to charge the battery) would appear to depend entirely on the capability and configuration of the hybrid inverter and the system overall.

But, you might ask, why would you ever want to charge the battery from the utility rather than from solar?  You might want to do this if there were variable tariffs in your area - say, $0.30/kWh during the peak hours in the day, but only $0.15/kWh at night - in which case it would make sense to supplant the "expensive" power during the day with "cheap" power bought at night to charge it up.
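
To put some rough numbers on that idea (the amount of shifted energy and the round-trip efficiency below are my own assumptions, not Tesla figures), here is a quick back-of-the-envelope sketch:

```python
# Rough time-of-use arbitrage estimate - a sketch, not a quote of any real tariff.
PEAK_RATE = 0.30       # $/kWh, daytime (example from the text)
OFF_PEAK_RATE = 0.15   # $/kWh, overnight (example from the text)
SHIFTED_KWH = 10.0     # energy charged at night and used during the day, per day (assumed)
ROUND_TRIP_EFF = 0.90  # assumed charge/discharge efficiency - NOT a published figure

# Cost of the energy bought at night, inflated by conversion losses:
night_cost = (SHIFTED_KWH / ROUND_TRIP_EFF) * OFF_PEAK_RATE
# Value of the daytime energy it displaces:
day_value = SHIFTED_KWH * PEAK_RATE

print(f"Daily saving: ${day_value - night_cost:.2f}")   # ~ $1.33/day in this example
```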

Whether or not this system would be helpful in a power outage is also dependent on the nature of the inverter to which it is connected:  Most grid-tie inverters become useless when the mains power disappears (e.g. they cannot produce any power for the consumer - more on this later) - and this applies to both "series string" (e.g. a large inverter fed by high-voltage DC from a series of panels) and "microinverter" (small inverters at each of the panels) topologies.  Inverters configured for "island" operation (e.g. "free running" in the absence of a live power grid) or ones that can safely switch between "grid tie" and "island" modes would seem to be appropriate if you use the DC PowerWall and you want to keep your house "powered up" when there is a grid failure.

The "AC" PowerWall 2:
  • Data sheets (two whole pages, each - almost!) for both the DC and AC versions of the PowerWall may be found here - LINK.
While the "AC" version seems to have the same battery storage capacity as the "DC" version (e.g. approx. 13.5kWh) it also has an integrated inverter and charger that interfaces with the AC mains that is apparently capable of supporting any standard voltage from 100 to 277 volts, 50 or 60 Hz, split or single phase.  This inverter, rated for approximately 7kW peak and 5-ish kW continuous, is sufficient to run many households.  Multiple units may be "stacked" (e.g. connected in parallel-type configuration - up to nine of them, according to the data sheet linked above) for additional storage and capacity.
Unlike the "DC" version, all of the power inflow/outflow is via the AC power feed, which is to say, it will both output AC power via its inverter and charge its battery via that same connection.  What this means is that it need not (and cannot, really) be directly connect to the PV (photovoltaic) system at all except, possibly, via a local network to gather stats and do controlling.  What seems clear is that this version has some means of monitoring the net flow in to and out of the house to the utility which means that the PowerWall could balance this out by "knowing" how much power it could use to charge its battery, or needed to output.

(The basic diagram of Figure 1 shows how such a system might be connected.  This diagram does not specifically represent a PowerWall, but rather how any battery-based inverter/charger system - past or present - might be used to supply back-up power to a home.)

Because it connects to the PV system only "indirectly", via the AC wiring, it should (in theory) work with either a series-string or microinverter-type system - or maybe even if you have no solar at all, if you simply want to charge it during times of lower tariffs and pull the charge back out again during high tariffs.  (The Tesla brochure simply says "Support for wide range of usage scenarios" under the heading "Operating Modes" - which could be interpreted many ways, but I have not actually seen an "official" suggestion of a use without any sort of solar.)

What might such a system look like - schematically, at least?

How might this version of the PowerWall operate?  First, let's take a look at a diagram of how any sort of battery/inverter/charger like this might be configured for a house.
Figure 1:
Diagram of a generic battery-based "whole house" backup system based on obvious requirements.  This is a very basic diagram, showing most of the components that would be required to interface a battery-based inverter/charger with a typical house's electrical system and a PV (PhotoVoltaic/solar) charging system.
For those not familiar with North American power systems, typical residences are fed with 240 volt, center-tapped service with the center-tap grounded at the service entrance.  This allows most devices to operate at 120 volts while those that consume large amounts of power (ranges, electric water heaters, electric dryers, air conditioners, etc.) are connected to a 240 volt circuit, which may or may not need the "neutral" lead at all.  In most other parts of the world there would be only "L1" and the "Neutral" operating at about 240 volts.
Click on the image for a larger version.

Referring to Figure 1, above:

Shown to the right of center is a switch that opens when the utility's power grid goes offline, isolating the house and the inverter/charger from the power grid.  Included with that switch is a voltage monitor (consisting of potential transducers, or "PT"s) that can detect when the mains voltage has returned and stabilized and it is "safe" to reconnect to the grid.  The battery-based inverter/charger is connected across the house's mains so that it can both pull current from them to charge its battery and push power into the house in a back-up situation.

The "Net current monitoring" current transducers ("CT"s) might be used to allow the inverter/charger to "zero out" the total current (and, thus power) coming in from and going out to the power grid (under normal situations) such as when its battery is being charged and extra power is being produced by the PV system, but to control the charge rate just so that only that "extra" power from the PV system is being used to assure, as much as possible, a net-zero flow to/from the utility.  The "House Current monitoring" is used to determine how much current is being used by the entire house while the "PV current monitoring" is used to determine the contribution of the PV system.

By knowing these things it is possible to determine how much excess/deficit there may be in terms of the production of the PV system with respect to actual usage by the household.  Not shown is the current monitoring that would, no doubt, be included in the inverter/charger itself.  Some of the depicted current monitoring points may be redundant as this information could be determined in other ways, but they are included for clarity.
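
As a rough sketch of what that "zeroing out" arithmetic might look like - the function, variable names and charger limit below are mine, purely for illustration, not anything published by Tesla:

```python
def charge_rate_for_net_zero(pv_kw: float, house_kw: float,
                             max_charge_kw: float = 5.0) -> float:
    """Return a battery charge rate (kW) that zeroes the flow at the grid connection.

    pv_kw    - power measured by the "PV current monitoring" CTs
    house_kw - power measured by the "House current monitoring" CTs
    The "Net" CTs should then read approximately zero while charging at this rate.
    """
    surplus = pv_kw - house_kw            # positive when PV exceeds the house load
    return max(0.0, min(surplus, max_charge_kw))

# Example: 5 kW of PV and 2 kW of house load -> charge at 3 kW, nothing exported.
print(charge_rate_for_net_zero(5.0, 2.0))   # 3.0
```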

Finally, a local network connection is shown for both the inverter/charger and the PV system so that they may communicate with each other, perhaps for control purposes, as well as communicate via the Internet so that statistics may be monitored and recorded and firmware updates may be issued.

How it might operate in practice:

As can be seen in Figure 1 and determined from the explanation, the PV system is connected to the input/output of the inverter/charger (which could be a PowerWall - or any other similar system) via the house wiring, which means that there is a path into the PowerWall to charge its battery, and the same path out of it when it needs to supply power - along with a means of monitoring power flow.

With a system akin to that depicted in Figure 1, consider these possible scenarios (a simple decision sketch follows the list):
  1. Excess power is being produced by the PV system and put back into the grid and the PowerWall's battery is fully-charged. Because the battery is fully-charged there is nowhere to put this extra power so it goes back into the grid, tracked by the "Net Meter" in the same way that it would be without a PowerWall.
  2. Excess power is being produced by the PV system and put back into the grid and the PowerWall's battery is not fully charged.  It will pull the amount of "excess" power that the PV system would normally be putting into the grid and charge its own battery at that same rate resulting in a net-zero amount of power being put into the grid.
  3. More power is being consumed by the user's household than is being produced by the solar array.  Depending on the state-of-charge and configuration of the PowerWall, the PowerWall may produce enough power to make up the difference between what the PV system is producing and what the user needs.  At night this could (in theory) be 100% of the usage if the system were so-configured.
  4. Even with no solar at all - but with a higher daytime than nighttime power rate - it would be theoretically possible to configure it to charge overnight from the mains and put out power during the day to reduce power costs overall.
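
Those four scenarios boil down to a simple decision table.  The sketch below is just my reading of the published behavior - not Tesla's actual control algorithm - with made-up names and limits:

```python
def powerwall_action(pv_kw: float, house_kw: float, soc: float,
                     max_rate_kw: float = 5.0) -> tuple[str, float]:
    """Very simplified mode selection for a grid-tied battery (illustrative only).

    soc is the battery state of charge, 0.0-1.0.  Returns (action, kW).
    """
    surplus = pv_kw - house_kw
    if surplus > 0 and soc >= 1.0:
        return ("export to grid", surplus)                         # scenario 1
    if surplus > 0:
        return ("charge battery", min(surplus, max_rate_kw))       # scenario 2
    if surplus < 0 and soc > 0.0:
        return ("discharge battery", min(-surplus, max_rate_kw))   # scenarios 3/4
    return ("import from grid", -surplus)                          # battery empty

print(powerwall_action(5.0, 2.0, 0.8))   # ('charge battery', 3.0)
print(powerwall_action(0.0, 2.0, 0.5))   # ('discharge battery', 2.0)
```
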
What about a power outage?

All of the above scenarios are to be expected - and they are more-or-less standard offerings for many of the battery-based products of this type - but what if the AC mains go down?  For the rest of this discussion we will ignore the "DC" version of the PowerWall as its capability would rely on the configuration of the user's hybrid inverter and its capabilities/configuration when it comes to supplying backup, "islanded" AC power.

As mentioned before, with a typical PV system - either "series string" (one large inverter) or distributed (e.g. "microinverter") - if the power grid goes offline the PV system becomes useless:  It requires the power grid to be present both to synchronize itself and to present an infinite "sink" into which it can always push all of the "extra" power that it is producing.  Were such units not to go offline, dangerous voltages could be "back-fed" into the power grid and be a hazard to anyone who might be trying to repair it.  It is for this reason that all grid-tie inverters are, by law, required to go offline - or, at least, disconnect themselves completely from the power grid - during a mains power outage.

The "AC" version of the Tesla PowerWall's system includes a switch that automatically isolates the house from the power grid when there is a power failure.  Once this switch has isolated the house from the grid the inverter built into the PowerWall can supply power to the house - at least as long as its battery lasts.

What about charging the battery during a power outage?

Here is where it seems to get a bit tricky.

If all grid-tie inverter systems go offline when the power grid fails, is it possible to use the PV system to assist, or even charge, the PowerWall during a grid failure?  In other words, can you use power from your PV system to recharge the PowerWall's battery or, at the very least, supply at least some of the power to extend its battery run-time?

In corresponding with a company representative - and corroborated by data openly published by Tesla (see the FAQ linked near the bottom of this posting) - the answer would appear to be "yes" - but exactly how this works is not very clear.

Based on rather vague information it would seem to work this way:
  • The power (utility) grid goes down.
    • The user's PV system goes offline with the failure of the grid.
    • The PowerWall's switch opens, isolating the house completely from the grid - aside from the ability to monitor when the power grid comes back up.
    • The inverter in the PowerWall now takes the load of the house.
Were this all that happened, the house would again go dark once the PowerWall's battery was depleted, but there seems to be more to it than this when a PV system is involved, as in:
  • When the PowerWall's inverter goes online, the PV system again sees what looks like the power grid and comes back online.
    • As it does, the PowerWall monitors the total power consumption and production, and any excess power being produced by the PV system is used to charge its battery.
    • If the PV system is producing less power than is being used, the PowerWall will supply the difference:  Its battery will still be discharged, but at a lower rate.
But now it gets even trickier and a bit more vague.

What if there is extra power being produced by the PV system?

Grid-tie systems always expect the power grid to be an infinite sink of power - but what if, during a power failure, you are producing 5kW of solar energy and your house is using only 2kW:  Where does the extra 3kW of production go if it cannot be sunk into the utility grid, and how does one keep the PV system from "tripping out" and going offline?

To illustrate the problem, let us bring up a related scenario.

There is a very good reason why owners of grid-tie systems are warned against using them to "assist" a backup generator - that is, using the generator as a substitute for the power grid.  What can happen is this:
  • The AC power goes out and the transfer switch connects the house to the generator.
  • The generator comes online and produces AC power.
  • If the AC power from the generator is stable enough (not all generators produce adequately stable power) the PV system will come back online thinking that the power grid has come back.
  • When the PV system comes back online and produces power, the generator's load decreases:  Most generators' motors will speed up slightly as the load is decreased.
    • When the generator's motor speeds up, the frequency goes high.  When this happens, the PV system will see that as unstable power and will go offline.
    • When the PV system goes off, the power is suddenly dumped on the generator and it is hit with the full load and slows back down.
  •  The cycle repeats, with the PV system and generator "fighting" each other as the PV system continually goes on and offline.
An even worse scenario is this:
  • The AC power goes out, the transfer switch connects the house to the generator.
  • The generator comes online and produces power.
  • The PV system comes up because it "sees" the generator as the power grid, but it's producing, say, 5kW while the house is, at the moment, using 2kW.
  • The PV system, because it thinks that it is connected to the power grid, will try to shove that extra 3kW somewhere, causing one or more of the following to happen:
    • The generator speeds up as power is being "pushed" into it, its frequency will go high and trip the PV system offline, and/or:
    • If the PV system tries to push more power into the system than there is a place for it to go (e.g. the case, above, where the solar is producing 3kW more than is being used) the voltage will necessarily go up.  Assuming that the generator doesn't "overspeed" and trip-out and the frequency doesn't go up and trip the PV system offline, the PV system will increase the voltage, trying to "push" the extra power into a load where there is nowhere for it to go:
      • As the PV system tries to "push" its excess power into the generator, it will increase the output voltage.  At some point the PV system will trip out on overvoltage, and the same "on-off" cycle mentioned above will occur.
      • It is possible that the excess power from the PV will "motor" the generator (e.g. the input power tries to "spin" the generator/motor) - an extremely bad thing to do which will probably cause it to overheat and eventually be destroyed if this goes un-checked.
      • If it is an "inverter" type generator, it can't be "motored", but the excess power will probably cause the generator's inverter to get stuck in the same "trip out/restart" cycle or simply fault out in an "overload" condition - or it might even be damaged/destroyed.
If having extra power from a grid-tie inverter is so difficult to deal with, what could you do with extra power that the PV system might be producing?



What if we have excess power and nowhere to put it?

This seems just fine - but the question that comes to mind is "What does the PV system do when the PowerWall's battery is fully-charged and there is nowhere to put the extra energy that might be being produced?"  Here we again bring up the situation where our PV system is producing 5kW but we are using only 2kW, leaving an extra 3kW to go... where?

The answer to that question is not at all clear, but four possibilities come to mind:
  1. Divert the power elsewhere.  Some people with "island" systems utilize a feature of some solar power systems that indicate when excess power is available and use it to operate a diversion switch to shunt the excess power in an attempt to do something useful like run an electric water heater, pump water or simply produce waste heat with a large resistor bank.  Such features are usually available only on "island" systems (e.g. those that are entirely self-contained and not tied to the power grid) and with large battery banks.
  2. If it is possible, simply disable the PV system for a while and drain, say, 5-10% of the power out of the PowerWall's battery before turning it back on and recharging it.  This will cause the PV system to cycle on and offline, but it will do so relatively slowly and it should cause no harm.
  3. Somehow communicate with the PV system and "tell" it to produce only the needed amount of energy.  This is a bit of a fine line to walk, but it is theoretically possible provided such a feature is available on the PV system.
  4. Alter the conditions of the power being produced by the PowerWall's inverter such that it causes the PV system to go offline and stay that way until it needs to come back online.
Analyzing the possibilities:

Let's eliminate #1 as that will not apply to a typical grid-tie system, so that leaves us with:

#2:  Disable the PV system:

Of these three possibilities #2 would seem to be the most obvious and it could be done simply by having another switch on the output of the PV system that disconnects it from the rest of the house, forcing it to go offline - but this has its complications.

For example, in my system the PV is connected into a separate sub-panel located in the garage:  If one were to disconnect this branch circuit entirely, the power in the garage would go on and off, depending on the state-of-charge of the PowerWall.  Connecting a PV system to a sub-panel is not an unusual configuration as it is not uncommon to find them connected to sub-panels that feed other systems, say, the air conditioner, kitchen, etc. (e.g. wherever a suitable circuit is available) so I'm guessing that they do not do it this way - unless they do it at the point before the PV system connects to the panel.

As noted above, one would disable the PV system once the battery had fully charged but enable it again once the battery had run down a bit - say, to 90-95%.  This way, one would not be rapid-cycling the PV system and the vast majority of the PowerWall's storage capacity would be available.
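
In other words, simple hysteresis ("bang-bang") control.  A minimal sketch of the idea, with threshold values that are only my guesses:

```python
def pv_enable(soc: float, pv_currently_on: bool,
              off_at: float = 1.00, on_at: float = 0.92) -> bool:
    """Hysteresis control of a PV disconnect (illustrative thresholds only).

    Disable the PV system once the battery is full; re-enable it only after the
    battery has drained back down a bit, so the PV system isn't rapid-cycled.
    """
    if pv_currently_on and soc >= off_at:
        return False
    if not pv_currently_on and soc <= on_at:
        return True
    return pv_currently_on

# The PV stays off until the battery falls back to 92% in this example:
print(pv_enable(1.00, True))    # False - battery full, switch the PV off
print(pv_enable(0.97, False))   # False - still above the re-enable point
print(pv_enable(0.91, False))   # True  - turn the PV back on
```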

While this, too, should work, I suspect that it is not the method used, as the drawings in the brochures don't show any such connection - but then again, they don't show the main house disconnect that would have to be present, either.  It would probably work just fine, provided that the PV system were to gracefully come back online when it was time to do so (e.g. no user intervention to "reset" anything.)

#4:  Alter the operating conditions to cause the PV system to go offline:

Then there is #4, and one interesting possibility comes to mind - and it is a kludge, but it should work.

One of the parameters that could be altered is the frequency at which the PowerWall's inverter operates (say, 2-3 Hz or so above and/or below the proper line frequency), forcing the PV system offline with that variance.  Even though this minor frequency change is not likely to hurt anything (many generators' frequencies drift around much more than this with varying loads!) things that use the power line frequency as a reference - such as clocks - would drift rather badly unless the frequency were "dithered" above and below the proper frequency so that its long term average was properly maintained.
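
To illustrate the "dithering" arithmetic (purely a thought experiment - I have no indication that Tesla actually does this): spend equal time above and below the nominal frequency and the accumulated cycle count - which is what a line-synchronous clock effectively counts - comes out exactly right.

```python
# Thought experiment: dither an off-nominal inverter frequency so the long-term
# average - and therefore a line-synchronous clock - stays correct.  All values assumed.
NOMINAL_HZ = 60.0
OFFSET_HZ = 2.5          # enough to stay outside a typical grid-tie frequency window
INTERVAL_S = 60.0        # spend one minute high, then one minute low

cycles_actual = (NOMINAL_HZ + OFFSET_HZ) * INTERVAL_S + (NOMINAL_HZ - OFFSET_HZ) * INTERVAL_S
cycles_nominal = NOMINAL_HZ * 2 * INTERVAL_S
print(cycles_actual - cycles_nominal)   # 0.0 - a plug-in clock keeps correct time
```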

I suspect that this is not a method that would be used, but it could work - at least in theory.

#3:  "Talk" to the PV system and control the amount of power that it is producing:

That leaves us with #3:  Communicate with the PV system and "tell" it (perhaps using the "ModBus" interface) to produce only enough power to "zero" out the net usage.

The problem with this method is that it would depend on the capabilities of the PV inverter system and require that they support such specific remote control functions.  While it is very possible that some do, this method would be limited to those so-equipped.

#3 and #2:  "Talk" to the PV system to turn it on and off as needed:

Included in #3 could be a variant of method #2 and that would be to send a command to the inverter via its network connection to simply shut down and come back online as needed to keep the battery between, say, 90% and 100% charge as mentioned above.

This second variant of #3 seems most likely, as it is probable that some set of commands capable of this is widely implemented across vendors and models.

* * *

What do I think the likelihood to be?

I'm betting on the second variant of #3 where a command is sent to the PV system to tell it to turn off - at least until there is, again, somewhere to "send" excess power.

* * *

Having said all of this, there is a product FAQ that was put out by Tesla that seems to confirm the basic analysis - that is, its ability to run "stand alone" in the event of a power failure and for the charge to be maintained if there is sufficient excess PV capacity - read that FAQ here - LINK.

Additional information may be found on the GreenTech Media web site:  "The New Tesla Powerwall Is Actually Two Different Products" - LINK.  This article and follow-up comments seem to indicate that there were, at that time, only a few manufacturers of inverters, namely SolarEdge and SMA (a.k.a. SunnyBoy) with which Tesla was installing/interfacing their systems, perhaps indicating some version of #2 or #3, above.  Clearly, the comments, mostly from several months ago, are also offering various conjectures on how the system actually works.

I'm investigating getting a PowerWall 2 system to augment my PV generation and provide "whole house" backup and have been researching how it works.

While I have occasionally asked questions of representatives of  Tesla, nothing that they have said is anything that could not be easily found in publicly-released information on the internet and as of the original date of this posting I haven't signed anything that could possibly keep me from talking about it.
However it is done, it should be interesting!

* * *


Finally, if you can find more specific information - say from a public document or from others' experience and analysis that can add more to this, please pass it along!


[End]

This page stolen from "ka7oei.blogspot.com"

Saturday, April 29, 2017

An RV "Generator Start Battery" regulator/controller for use with LiFePO4 power system

I was recently retrofitting my brother's RV's electrical system with LiFePO4 batteries (Rel3Ion RB-100's).  This retrofit was done to allow much greater "run time" at higher power loads and to increase the amount of energy storage for the solar electric system, while not adding much weight or needing to vent corrosive fumes.  (These types of batteries are very safe - e.g. they don't burst into flame if damaged or abused.)

While I was doing this, I began to wonder what to do about the generator "start" battery.

Charging LiFePO4 batteries in an RV

The voltage requirements for "12 volt" Lead-Acid batteries are a bit different from those needed by LiFePO4 "12 volt" batteries:
  • Lead acid batteries need to be kept at 13.2-13.6 volts as much as possible to prolong their life (e.g. maintained at "full charge" to prevent sulfation).
  • LiFePO4 batteries may be floated anywhere between 12.0 and their "full charge" voltage of around 14.6 volts.
  • Routinely discharging lead-acid batteries below 50% can impact their longevity - and they must be recharged immediately to prevent long-term damage.
  • LiFePO4 batteries may be discharged to at least 90% routinely - and they may be left there, provided their voltage is not allowed to go too low.
  • Lead acid batteries may be used without any management hardware:  Maintaining a proper voltage is enough to ensure a reasonable lifetime.
  • LiFePO4 batteries must have some sort of battery management hardware to protect against overcharge and over-discharge as well as to assure proper cell equalization.  Many modern LiFePO4 batteries (such as the "Rel3ion" devices used here) have such devices built in.
  • Conventional RV power "converters" are designed to apply the proper voltage to maintain lead-acid batteries (e.g. maintain at 13.6 volts.)
  • Because LiFePO4 batteries require as much as 14.6 volts to attain 100% charge (a reasonable charge may be obtained at "only" 14.2 volts) connecting them directly to an existing RV with this lower voltage means that they may never be fully-charged! 
  • Modern, programmable chargers (e.g. inverter-chargers, solar charge controllers) have either "LiFePO4" modes or "custom" settings that may be configured to accommodate the needs of LiFePO4 batteries.  While the lower voltage (nominal 13.6 volts) will not hurt the LiFePO4 batteries, they likely cannot be charged to more than 40-75% of their rated capacity at that voltage.  (Approx. 13.6-13.7 volts is the lowest voltage where one can "mostly" charge a LiFePO4 battery.)
  • Because of Peukert's law, one can only expect 25-50% of the capacity of a lead-acid battery to be available at high amperage (e.g. 0.5C or higher) loads.
  • With LiFePO4 batteries, more than 80% of the battery's capacity can be expected to be available at similar high-amperage loads (see the worked example below).  What this means is that at such high loads, a LiFePO4 battery can supply about twice the overall power when compared with a lead-acid battery of the same amp-hour rating.  At low-current loads the two types of batteries are more similar in their available capacity.
In short:  Unless an existing charging system can be "tweaked" for different voltages and charging conditions, one designed for lead-acid batteries may not work well for LiFePO4 batteries.  In some cases it may be possible to set certain "equalize" and "absorption" charge cycle parameters to make them useful with LiFePO4s, but doing this is beyond the scope of this article.
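
As a worked example of the Peukert effect mentioned in the list above (the exponents are typical textbook values I have assumed, not measurements of any particular battery):

```python
# Peukert's law: run time t = H * (C / (I * H)) ** k, where C is the rated capacity
# at the H-hour rate, I is the actual discharge current and k is the Peukert exponent.
def delivered_ah(rated_ah: float, hour_rating: float, current_a: float, k: float) -> float:
    hours = hour_rating * (rated_ah / (current_a * hour_rating)) ** k
    return current_a * hours

# 100 Ah (20-hour rate) battery discharged at 0.5C (50 amps):
print(delivered_ah(100, 20, 50, k=1.30))   # ~50 Ah - typical lead-acid exponent
print(delivered_ah(100, 20, 50, k=1.05))   # ~89 Ah - typical LiFePO4 exponent
```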

Originally the RV had been equipped with two "Group 24" deep-cycle/start 12 volt batteries in parallel (a maximum of, perhaps, 100 amp-hours, total, when brand new, for the pair of "no-name" batteries supplied) to run things like lights, and the pump motors for the water system, jacks and slide-outs and as the "start" battery for the generator.  Ultimately we decided to wire everything but the generator starter to the main LiFePO4 battery bank.

Why?

Suppose that one is boondocking (e.g. "camping" away from any source of commercial power) and the LiFePO4 battery bank is inadvertently run down. As they are designed to do, LiFePO4 battery systems will unceremoniously disconnect themselves from the load when their charge is depleted to prevent permanent damage, automatically resetting once charging begins.
 
If that were to happen - and the generator's starter was connected to the LiFePO4 system - how would one start the generator?

Aside from backing up the towing vehicle (if available), connecting its umbilical and using it to charge the system just enough to be able to get the generator started, one would be "stuck", unable to recharge the battery.  What's worse is that even if solar power is available, many charge controllers will go offline if they "see" that the battery is at zero volts (e.g. when they are in that "disconnected" state) - even if the sun is shining, preventing charging from even starting in the first place!

What we needed was a device that would allow the starting battery to be charged from the main battery, but prevent it from back-feeding and being discharged.


Note:
It is common in many RVs for the generator to not charge its own starting battery directly, via an alternator.  The reason for this is that it is assumed by the makers of the generators and RVs that the starting battery will be charged by the towing vehicle and/or via the RV's electrical system via its AC-powered "voltage converter", powered from "shore" power or via the generator's AC output.
But first, a few weasel words:
  • Attempt to construct/wire any of the circuits only if you are thoroughly familiar with electronics and construction techniques.
  • While the voltages involved are low, there is still some risk of dangerous electric shock.
  • With battery-based systems extremely high currents can present themselves - perhaps hundreds or even thousands of amps - should a fault occur.  It is up to the would-be builder/installer of the circuits described on this page - or anyone doing any RV/vehicle wiring - to properly size conductors for the expected currents and provide appropriate fusing/current limiting wherever and whenever needed.  If you are not familiar with such things, please seek the help of someone who is familiar before doing any wiring/modifications/connections!
  • This information is presented in good faith and I do not claim to be an expert on the subject of RV power systems, solar power systems, battery charging or anything else.
  • You must do due diligence to determine if the information presented here is appropriate for your situation and purpose.
  • YOU are solely responsible for any action, damage, loss or injury that might occur.  You have been warned! 
Why a "battery isolator" can't be used:

If you are familiar with such things you might already be saying "A device like this already exists - it's called a 'battery isolator'" - and you'd be mostly right - but we can't really use one of these devices because LiFePO4 batteries operate at a full-charge voltage of between 14.2 and 14.6 volts, and the battery isolator would pass this voltage through, unchanged:  Apply 14+ volts to a "12 volt" lead-acid battery for more than a few days and you will likely boil away the electrolyte and ruin it!

What is needed is a device that will:
  • Charge the generator start battery from the main (LiFePO4) battery system
  • Isolate it from the main battery, and 
  • Regulate the voltage down to something that the lead-acid chemistry can take - say, somewhere around 13.2-13.6 volts.
In this case the main LiFePO4 battery bank will be maintained via the AC-powered (generator or shore) charging system and/or the solar power converters at its normal float voltage, so it makes sense to use it to keep the start battery fully-charged.

The solution:

After perusing the GoogleWeb I determined that there was no ready-made, off-the-shelf device that would do the trick, so I considered some alternatives that I could construct myself.

Note:  The described solutions are appropriate only where the main LiFePO4 bank's voltage is just a bit higher (a few volts) than the lead-acid starting battery:  They are NOT appropriate for cases where a main battery bank of a much higher voltage (e.g. 24, 48 volts, etc.) is being used to charge a "12 volt" starting battery.

Simplest:  "Dropper diodes":

Because we need to get from the nominal 14.2-14.6 volts of the LiFePO4 system down to 13.2-13.7 volts it is possible to use just two silicon diodes in series, each contributing around 0.6 volts drop (for a total drop of "about" 1.2 volts) to charge the starting battery, as depicted in Figure 1, below.  By virtue of the diodes' allowing current flow in just one direction, this circuit would also offer isolation, preventing the generator's battery from being discharged by back-feeding into the main battery.

To avoid needing to use some very large (50-100 amp) diodes and heavy wire to handle the current flow that would occur when the starter motor was active - or if the start battery was charging heavily - one simply inserts some series resistance to limit the current to a few amps.  Even though this would slow the charging rate somewhat, the starting battery would be fully recharged within a few hours or days at most - not a problem considering the rather intermittent use of the starting battery - more about that later.
Figure 1.
This circuit uses a conventional "1157" tail/turn signal bulb (NOT an LED replacement!) with both filaments tied together, providing more versatile current limiting.  Please read notes in the text concerning mounting of the light bulb.
The diodes (D1 and D2) should be "normal" silicon diodes rather than "Schottky" types as it is the 0.6 volt drop per diode that we need to reduce the voltage from the LiFePO4 stack to something "safe" for lead-acid chemistry.  If one wished to "tweak" the voltage on the starting battery, one could eliminate one diode or even replace just one of them with a Schottky diode to increase the lead-acid voltage by around 0.2-0.3 volts.
The use of current-limiting devices allows lighter-gauge wire to be used to connect the two battery systems together.
Click on the image for a larger version.

In lieu of a large power resistor, the ubiquitous "1157" turn signal/brake bulb is used as depicted in Figure 1.  Both filaments are tied together (the bulb's bayonet base being the common tie point) providing a "cold filament" resistance of 0.25-0.5 ohms or so, increasing to 4-6 ohms if a full 12 volts were placed across it.  The reason for the use of a light bulb will be discussed later.
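
As a rough sanity check of the Figure 1 approach - using the approximate voltages and filament resistances quoted above, so treat the results as order-of-magnitude estimates only:

```python
# Rough estimate of charge current for the "two diodes plus 1157 bulb" circuit.
V_LIFEPO4 = 14.4        # typical float voltage of the main bank (volts, assumed)
V_START = 12.6          # a somewhat-discharged starting battery (volts, assumed)
V_DIODES = 2 * 0.6      # two silicon diode drops
R_BULB_COLD = 0.4       # ohms, both 1157 filaments in parallel, cold
R_BULB_HOT = 5.0        # ohms, at full brilliance (e.g. into a shorted load)

headroom = V_LIFEPO4 - V_DIODES - V_START
print(f"Initial charge current: ~{headroom / R_BULB_COLD:.1f} A")              # ~1.5 A
# Worst case - a shorted or dead-flat battery - the filament heats up and limits:
print(f"Fault current limit:    ~{(V_LIFEPO4 - V_DIODES) / R_BULB_HOT:.1f} A")  # ~2.6 A
```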

Although not depicted in Figure 1, common sense dictates that appropriate fusing is required on one or both of the wires, particularly if one or more of the connecting wires is quite long, in which case the fuse would be placed at the "battery" end (either LiFePO4 or starting battery) of the wire(s) to provide protection should a fault occur between that source and the charge controller:  Fusing at 5-10 amps is fine for the circuit depicted.

This circuit is "good enough" for average use and as long as the LiFePO4 bank is floated at 14.2 volts with occasional absorption peaks at 14.6 volts, the lead-acid starting battery will live a reasonably long life.

A regulator/limiter circuit:

As I'm wont to do, I decided against the super simple "dropper diode and light bulb" circuit - although it would have worked fine - instead, designing a slightly fancier circuit to do about the same as the above circuit, but have more precise voltage regulation.  While more sophisticated than two diodes and a light bulb, the circuit need not be terribly complicated as seen in Figure 2, below:
Figure 2:
The schematic diagram of the slightly more complicated version that provides tight voltage regulation for the starting battery.  As noted on the diagram, appropriate fusing of the input/output leads should be applied!
This diagram depicts a common ground shared between the main LiFePO4 battery bank and the starting battery, usually via the chassis or "star ground" connection.
In the as-built prototype, Q2 was an SUP75P03-07 P-channel power MOSFET while D1 was an MR750 5 amp, 50 volt diode. A circuit board is not available at this time.
NOT SHOWN is the fusing of the input and output leads, near-ish their respective batteries/source connections, with 10 amp automotive fuses.
Click on the image for a larger version.

How it works:

U1 is the ubiquitous TL431 "programmable" Zener diode.  If the "reference" terminal (connected to the wiper of R5) of this device goes above 2.5 volts, its cathode voltage gets dragged down toward the anode voltage (e.g. the device turns "on").  Because R4, R5 and R6 form a voltage divider, adjustable using 10-turn trimmer potentiometer R5, the desired battery float voltage may be scaled down to the 2.5 volt threshold required by U1.
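
As an illustration of how that divider scales the battery voltage down to the TL431's 2.5 volt reference - note that the resistor values below are ones I picked for the example, not necessarily those used in the actual circuit:

```python
# Example divider scaling for the TL431 (R4 on top, R5 trimmer, R6 on the bottom).
# These values are illustrative only - check them against the actual schematic.
R4, R5, R6 = 47_000, 1_000, 10_000   # ohms (assumed example values)
V_REF = 2.5                          # TL431 reference voltage

def regulation_point(wiper: float) -> float:
    """Battery voltage at which the wiper reaches 2.5 V (wiper: 0=bottom, 1=top)."""
    bottom = R6 + wiper * R5
    return V_REF * (R4 + R5 + R6) / bottom

print(f"{regulation_point(0.0):.2f} V down to {regulation_point(1.0):.2f} V")  # ~14.5 V to ~13.2 V
```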

If the battery voltage is below the pre-set threshold (e.g. U1 is "seeing" less than 2.5 volts through the R4/R5/R6 voltage divider) U1 will be turned off and its cathode will be pulled up by R2.  When this happens Q1 is biased on, pulling the gate of P-channel FET Q2 toward ground, turning it on, allowing current to flow from the LiFePO4 system, through diode D1 and light bulb "Bulb1" and into the starting battery.

By placing R1 and R2 on the "source" side of Q2, the circuit is guaranteed to have two sources of power:  From the main LiFePO4 system, through D1, and from the starting battery via the "backwards" intrinsic diode inside Q2.  The 15 volt Zener diode (D2) protects the FET's gate from voltage transients that can occur on the electrical system.
Figure 3:
The completed circuit, not including the light bulb, wired on a small
piece of perforated prototype board.
A printed circuit board version is not available at this time.
Click on the image for a larger version.

Once the starting battery has attained and exceeded the desired float voltage set by R5 (typically around 13.5 volts for a "12 volt" lead-acid battery) U1's reference input "sees" more than 2.5 volts and turns on, pulling its cathode to ground.  When this happens the voltage at the base of Q1 drops, turning it off and allowing Q2's gate voltage, pulled up to its source by R1, to go high, turning it off and terminating the charge.

Because the cathode-anode voltage across U1 when it is "on" is between 1 and 2 volts, it is necessary to put a voltage drop in the emitter lead of Q1 - hence the presence of LED1, which offsets it by 1.8-2.1 volts.  Without the constant voltage drop caused by this LED, Q1 would always stay "on" regardless of the state of U1.  Capacitor C1, connected between the "reference" and cathode pins of U1, prevents instability and oscillation.

In actuality this circuit linearly "regulates" the voltage to the value set by R5 via closed loop feedback rather than simply switching on and off to maintain the voltage.  What this means is that between Q2 and the light bulb, the voltage will remain constant at the setting of R5, provided that the input voltage from the LiFePO4 system is at least one "diode drop" (approx. 0.6 volts) above that voltage.  For example, if the output voltage is set to 13.50 volts via R5, this output will remain at that voltage, provided that the input voltage is 14.1 volts (e.g. 13.5 volts plus the 0.6 volts drop of diode D1) or higher.

Because Q2, even when off, will have a current path from the starting battery to the main LiFePO4 bank due to its intrinsic diode, D1 is required to provide isolation between the higher-voltage LiFePO4 "main" battery bank and the starting battery to prevent a current back-feed.  Were this isolation not included and the main battery bank were to be discharged, current would flow backwards, through FET Q2, from the generator starting battery, discharging it - possibly to the point where the generator could not be started.

Again, D1's 0.6 volt (nominal) drop is inconsequential provided that the LiFePO4 bank is at least 0.6 volts above the starting battery - which will be the case most of the time if the charge on that bank is properly maintained via generator, solar or shore power charging.  A similar (>= 5 amp) Schottky diode could have been used for D1 to provide a lower (0.2-0.4 volt) drop, but a silicon diode was chosen because it was on hand.

Testing the device:

Assuming that it is wired/built correctly, connect a variable power supply to the input lead to simulate the LiFePO4 battery bank.  Setting the voltage a volt or two higher than the expected float voltage (e.g. 14.5-16 volts), adjust R5 to attain the desired start battery float voltage (13.50-13.7 volts is recommended - I use 13.55 volts) as measured on either side of "Bulb1".  Adjust the supply voltage up and down a bit (e.g. below 12 volts and up to 17 volts):  If working correctly, the output voltage should be rock-steady as long as the input voltage is at least about 0.6 volts above the set output voltage.

Now short the output leads (e.g. the "positive" output lead should be going through "Bulb1") and the light bulb should illuminate fully - at least assuming that your variable voltage supply is capable of supplying at least 3 amps.  Measuring directly at the circuit board's "ground" (common "battery negative") terminal and at the connection between Q2 and "Bulb1" you should still have the voltage set by R5.

Note:  If you were to connect the negative lead of the voltmeter to the power supply or the shorted output leads, the measured voltage would be a bit lower owing to voltage drop along the wires.

Shorting the output leads and measuring the voltage demonstrates two important design points:
  • That the voltage at the output of Q2 remains steady from no-load to maximum current conditions.
  • That the light bulb is properly acting as a current limiting device.
While doing this "short circuit" test, make sure that the heat from the light bulb rises away from the circuit board itself and that the means of mounting it is capable of withstanding the bulb's heat without burning or melting anything.

Connecting the device:

On the diagram only a single "Battery negative" connection is shown and this connection is to be made only at the starting battery.  Because this circuit is intended specifically to charge the starting battery, both the positive and negative connections should be made directly to it as that is really the only place where we should be measuring its voltage!

Also noted on the diagram is the assumption that both the "main" (LiFePO4) battery and the starting battery share a common ground, typically via a common chassis ("star") ground point which is how the negative side of the starting battery ultimately gets connected to the negative side of the main LiFePO4 bank:  It would be rare to find an RV with two battery systems of similar voltages where this was not the case!

Finally, it should go without saying that appropriate fusing be included on the input/output leads that are located "close-ish" to the battery/voltage sources themselves in case one of the leads - or the circuit itself - faults to ground:  A standard automotive ATO-type "blade" fuse in the range of 5-10 amps should suffice.  In order to safely handle the fusing current, the connecting wires to this circuit should be in the range of 10 to 16 AWG with 12-14 AWG being ideal.

What's with the light bulb?
Figure 4:
The circuit board mounted in an aluminum chassis box along with the light bulb.  Transistor Q2 is heat-sinked to the box via insulating hardware and the board is mounted using 4-40 screws and aluminum stand-offs.  The light bulb is mounted to a small terminal lug strip using 16 AWG wire soldered to the bulb's base and the bottom pins:  A large "blob" of silicone (RTV) was later added around the terminal strip to provide additional support.  Both the bottom of the box (left side) and the top include holes to allow the movement of air to help dissipate heat.  Holes were drilled in the back of the box (after the picture was taken) to allow mounting.
This box is, in this picture, laying on its side:  The light bulb would be mounted UP so that its heat would rise away from the circuitry via thermal convection.
Click on the image for a larger version.

The main reason for using a light bulb on the output is to limit the current to a reasonable value via its filament.  When cold, the parallel resistance of the two filaments of the 1157 turn-signal bulb is 0.25-0.5 ohms, but when it is "hot" (e.g. lit to full brilliance) it is 4-6 ohms.  Making use of this property is an easy, "low tech" way to provide both current limiting and circuit protection and, when the filament is cold, increase the amount of charging current that can flow. 

In normal operation the light bulb will not glow - even at relatively high charging current:  It is only if the starting battery were to be deeply discharged and/or to fail catastrophically (e.g. short out) that the bulb would begin to glow at all and actually dissipate heat.  Taking advantage of this changing resistance of a light bulb allows a higher charging current than would be practical with an ordinary resistor.


Limiting the charging current to just a few amps also allows the use of small-ish (e.g. 5 amp) diodes, but more importantly it allows much thinner and easier-to-manage wire (as small as 16 AWG) to be used since the current can never be very high in normal operation.  Limiting the charging current is just fine for the starting battery due to its very occasional use:  It would take only an hour or two with a charge current of an amp or so to top off the battery after having started a generator on a cold day!

As noted on the diagram and in previous text, the light bulb must be mounted such that its operating temperature and heat dissipation at full brilliance will not burn or melt any nearby materials, as the glass envelope of the bulb can easily exceed the boiling temperature of water!  With both the "simple" diode version in Figure 1 and the more complex version in Figure 2 it is recommended that the bulb be mounted above the circuitry to take advantage of air convection to keep the components cool, as shown in Figure 4.  If a socket is available for the 1157 bulb, by all means use it, but still heed the warnings about the amount of heat that may be produced.

In operation:

When this circuit was first installed, the starting battery was around 12.5 volts after having sat for a week or two (during the retrofit work) without a charging source and having started the generator a half-dozen times.  With the LiFePO4 battery bank varying between 13.0 and 14.6 volts with normal solar-related charge/discharge cycles, it took about 2 days for the start battery to work its way up to 13.2 volts, at which point it was nearly fully charged - and then the voltage quickly shot up to the 13.5 volts as set by R5.  This rather leisurely charge was mostly a result of the LiFePO4 bank spending only brief periods above 13.8 volts.

How much of the starting battery's capacity is being used?

If one were to assume that the generator was set to run once per day and pull 100 amps (a current likely seen on a very cold day!) from the battery for 5 seconds, this would represent about 0.14 amp-hours - roughly the charge capacity of a hearing-aid battery!

From this we can see that this "100 amps for 5 seconds" works out to an average current of only about 6 milliamps when spread across 24 hours - a value comparable to the self-discharge rate of the battery itself.  By these numbers you can see that it does not take much current at all to sustain a healthy battery that is used only for starting!
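
The arithmetic behind those figures:

```python
# Charge taken from the start battery by one (long, cold-day) generator start.
CRANK_AMPS = 100.0
CRANK_SECONDS = 5.0

amp_hours = CRANK_AMPS * CRANK_SECONDS / 3600.0
avg_ma = amp_hours / 24.0 * 1000.0      # spread over a day, in milliamps

print(f"{amp_hours:.2f} Ah per start")       # ~0.14 Ah
print(f"{avg_ma:.1f} mA average over 24 h")  # ~5.8 mA
```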

A standard group 24 "deep cycle starting" battery was used since it and its box had come with the RV.  For this particular application - generator starting only - a much smaller battery, such as one used for starting 4x4s or motorcycles, would have sufficed and saved a bit of weight.

The advantage of the group 24 battery is that it, itself, isn't particularly heavy and it is readily available in auto-parts, RV and many "big box" stores everywhere.  Because it is used only for starting it need not have been a "deep cycle" type, but rather a normal "car" battery - although the use of something other than an RV-type battery would have necessitated re-working the battery connections as RV batteries have handy nut/bolt posts to which connections may be easily made.


Final comments:


There are a few things that this simple circuit will not do, including "equalize" the lead acid battery and compensate for temperature - but this isn't terribly important, overall in this application.


Concerning equalization:

Even if the battery is of the type that can be equalized (many sealed batteries, including "AGM" types - those mistakenly called "gel cells" - should never be equalized!) it should be remembered that it is not the lack of equalization that usually kills batteries, but rather neglect:  Allowing them to sit for any significant length of time without keeping them floated above 2.17 volts/cell (e.g. above 13.0 volts for a "12 volt" battery) or, if they are the sort that needs to be "watered", not keeping their electrolyte levels maintained.  Failure to do either of these will surely result in irreversible damage to the battery over time!

It is also common to adapt the float voltage to the ambient temperature, but even this is not necessary as long as a "reasonable" float voltage is maintained - preferably one where water loss is minimized over the entire expected temperature range.  Again, it is more likely to be failure of elementary battery maintenance that will kill a battery prematurely than a minor detail such as this!

Practically speaking, if one "only" maintains a proper float voltage and keeps the battery "watered", the starting battery will likely last for at least the expected 3-5 year lifetime, particularly since, unlike a battery in standard RV service, it will never be subjected to the deep discharge cycles which can really take a toll on a lead-acid battery.  While an inexpensive, no-name "group 24" battery, when new, may have a capacity of "about" 50 amp-hours, it won't be until the battery has badly degraded - probably to the 5-10 amp-hour range - that one will even begin to notice starting difficulties.

Important also is the fact that the starting battery in this RV is connected to part of the main LiFePO4's battery monitoring system (in this case a Bogart Engineering TM-2030-RV).  While this system's main purpose is to keep track of the amount of energy going into and out of the main LiFePO4 battery, it also has a "Battery #2" input connection where one can check the starting battery's voltage - always a good thing to do at least once every day or two when one is "out and about".

Finally, considering the very modest requirements for a battery that is used only for starting the generator, it would take only a very small (1-5 watt) solar panel (plus regulator!) to maintain it.  While this was considered, it would have required that such a solar panel be mounted, wires run from it to the battery (not always easy to do on an RV!) and everything be waterproofed.  Because the connections to the main battery bank were already nearby, it was pretty easy to use this circuit, instead.

[End]

This page was stolen from "ka7oei.blogspot.com"

Wednesday, April 19, 2017

A daylight-tolerant TIA (TransImpedance Amplifier) optical receiver

While the majority of my past experiments with through-the-air free-space optical (FSO) communications were done at night, for obvious reasons, I had also dabbled in optical communications done during broad daylight, with and without clouds.

The challenge of daylight:

Clearly, the use of the cloak of darkness has tremendous advantages in terms of signal-to-noise ratio and practically-attainable communications distances, but daylight free-space optical communications has some interesting aspects of its own:
  • It's easier to see what you are doing, since it's daylight!
  • Landmarks are often easier to spot, aiding the aiming.
  • Even in broad daylight, it is possible to provide signaling as an aiming aid, such as a mirror reflecting the sun - assuming that it is sunny.
  • The sun is a tremendous source of thermal noise, causing dilution of the desired signals.
  • Great care must be taken when one wields optics during the day:  Pointing at the sun or a very strong specular reflection - even briefly - can destroy electronics or even set fire to various parts of the lens assembly!
As you might expect, the biggest limitation to range is the fact that the sun, with its irradiance of around 1kW/m² (when sunny), can overwhelm practically any other source:  This is why the earliest "wireless" communications methods often used reflected sunlight - notably the Heliograph, in which a mirror was "modulated" with telegraph code, and the "Photophone", a wireless audio transmitter using reflected light, an 1880 invention of Alexander Graham Bell with earlier roots - a device that Bell himself considered to be his greatest invention.
Figure 1:
Optical communications during daylight.  In the center of this contrast-enhanced picture (the red spot) is the light from an optical transmitter using a 30+ watt LED at a distance of 13.25 miles (21.3km).  This image is from my own "modulatedlight" web site.
Click on the image for a larger version.

Means of detection:

While the modulated speech may be produced in any number of ways (vibrating mirror, high-power LED, LASER) some thought must be given to the subject of how to detect it.  While the detector itself need not be spectacularly sensitive due to the nearly overwhelming presence of the thermal noise from the sun, it is worth making it "reasonably" sensitive so that it is not the limiting factor.  An example of an insensitive optical receiver (e.g. one that is rather "deaf" and not likely to be sensitive enough even for daylight use) is a simple circuit using a phototransistor, as depicted below:
Figure 2:
A simple phototransistor-based receiver (top).  This circuit was built by Ron, K7RJ, simply to demonstrate the ability to convey audio a short distance:  It is (intentionally) not optimized in any way and is not at all sensitive.  A similar, but slightly better circuit was found on the Ramsey Electronics LBC6K "Laser Communicator", which was also quite "deaf".  See the article Using Laser Pointers for Free-Space Optical Communications - link that more thoroughly explains this issue.  This image is from my own "modulatedlight" web site and used with the permission of Ron Jones, K7RJ.
Click on the image for a larger version.

The circuit in the top half of Figure 2 (above) depicts one of the simplest-possible optical receivers - and one of the "deafer" options out there.  In this case a biased phototransistor is simply fed into an LM386 audio amplifier and the signal is amplified some 200-fold (about 46dB.)  As noted in the caption, this was a "quick and dirty" circuit to prove a concept and was, by no means optimized nor does it take maximum advantage of the potential performance of a phototransistor.

As it turns out, a phototransistor isn't really the ideal device because it is, by itself, intrinsically noisy.  Another, more practical issue is that its active area is typically quite small, which means that it won't intercept much light on its own.  Of course, any half-hearted attempt to use any device for the detection of optical signals over even a rather short distance of a few hundred meters would include the use of a lens in front of the detector - no matter its type:  The lens will easily increase the "capture area" many hundred-fold (even for a small lens!) and will effect noiseless amplification, with the added benefit of rejecting light sources that are off-axis.  With the tiny active area of a phototransistor it can be difficult to properly and precisely focus the distant light onto that area, and it is likely that unless very good precision in both alignment and focus can be maintained, the "spot" of light being focused onto the phototransistor will be larger than its active area, "wasting" some of the light as it "spills" over the sides.

One of the biggest problems with a circuit like this is that there is a level of light at which the phototransistor saturates:  When this happens, the voltage across its collector and emitter drops very close to zero and the received signals disappear, possibly "un-saturating" only briefly at those points in the modulation where the source light happens to go toward zero, resulting in badly distorted sound.  In broad daylight the phototransistor may be hopelessly saturated at all times unless an optical attenuator (e.g. neutral density filter) is used to reduce the total light level and/or more current is forced through it.

Introducing the TransImpedance Amplifier (TIA):

A much better circuit is the TransImpedance Amplifier (TIA), a simple circuit that proportionally converts current to voltage.  With this circuit (see Figure 3) one would more likely use a PIN photodiode, a device akin to a solar cell, whose output current is very nearly proportional to the light striking its active area - quite unlike a phototransistor as typically used, where the impinging light causes a change in the voltage across the device.

Figure 3:
A simple transimpedance amplifier.
(Image from Wikipedia)
In this circuit the inverting (-) input of the op amp "wants" to sit at the same potential as the noninverting (+) input (a "virtual ground"), so as the current from the photodiode increases in the presence of light, the op amp's output moves to pull that current through feedback resistor "Rf", keeping the inverting input at zero.  What this means is that the output voltage, Vout, is equal to the photodiode current multiplied by the magnitude of the resistance Rf - except that the voltage will be negative, since this is an inverting amplifier.

As an example, assume that Rf is set to 1 Megohm.  Assuming no leakage and a "perfect" op amp, we can determine that if there is -1 volt at the output, we must have 1/1,000,000th of an amp (e.g. 1 microamp) of photodiode current, Ip.  This sort of circuit is often used as a radiometric detector - that is, one in which the output is directly proportional to the amount of light striking the photodiode's surface, weighted by any intervening optics and filters and the spectral response of the detector itself.
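To make that arithmetic explicit, here is a minimal sketch (Python) of the ideal relationship Vout = -Ip x Rf, assuming the same "perfect" op amp and the example values above:

    # Ideal transimpedance amplifier: Vout = -Ip * Rf (no leakage, "perfect" op amp)
    def tia_output_volts(photodiode_current_amps, rf_ohms):
        return -photodiode_current_amps * rf_ohms

    rf = 1e6                              # 1 Megohm feedback resistor
    print(tia_output_volts(1e-6, rf))     # 1 uA of photocurrent -> -1.0 V
    print(tia_output_volts(10e-6, rf))    # 10 uA (dim-to-normal room light on a BPW34) -> -10.0 V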

For more about the Transimpedance Amplifier circuit, visit the Wikipedia page on the subject - link.

This is OK when the photodiode is in complete - or near-complete - darkness, but what about strong light?  We can see from the above example that with just 10 microamps - a perfectly reasonable value for a typical photodiode such as the BPW34 in dim-to-normal room lighting - Vout would be -10 volts.  If this same circuit were taken outside, the diode current could well be hundreds or thousands of times that amount, and this would "slam" the output of the op amp against a rail.

One of the typical means of counteracting this effect is to capacitively couple the photodiode to the op amp so that only changing currents from a modulated signal get coupled to it, blocking the DC, but there is another circuit that is arguably more effective, depicted in Figure 4, below.

Figure 4:
A "Daylight Tolerant" Transimpedance amplifier circuit.
In this circuit the DC from the output is fed back to "servo" the photodiode's "cold" side so that its "hot" side (that connected to the op amp's inverting input) is always maintained at the same potential as the noninverting input, eliminating the DC offset caused by ambient light.  The disadvantage of this method is that it does not lend itself well to reverse-biasing the photodiode to reduce its capacitance, but between the high intrinsic thermal noise levels of daylight and the related photoconductive shunting of the device due to high ambient light, this is largely unimportant.  For the photodiode the common and inexpensive BPW34 may be used along with many other similar devices.
Click on the image for a larger version.

This circuit is, at its base, the same as that depicted in Figure 3, with a few key differences:
  • An "artificial ground" is established using R101 and R102, allowing the use of a single-polarity power supply.  This artificial ground is coupled to the actual ground via C102 and C103 making it low impedance to all but DC and very low AC frequencies.
  • The voltage output from the transimpedance amplifier section (U101a) is feed back via R104 to the "ground" side of the photodiode (D101) to change its "ground".  If there is a high level of ambient light, the voltage at the "bottom" end of D101 (at D101/C107) goes negative with respect to the artificial ground, setting the DC voltage at the non-inverting input of the op amp to zero, cancelling it out.
  • R104 and C106 form a low-pass filter that passes the DC offset voltage to the bottom of D101, but block the audio.  In this way the DC resulting from ambient light that would "slam" the op amp's output to the negative rail is cancelled out, but the AC (audio) signals remain.  The time constant of this R/C network is slow enough to be "invisible" all but the very lowest (subaudible) frequencies, but more than fast enough to track changes in ambient light.
  • By not placing any additional components between the "hot" end of the photodiode and the op amp, the introduction of additional noise from the components (including microphonic responses of the coupling capacitor) is greatly reduced.
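To get a feel for the R104/C106 servo's time constant, the short sketch below (Python) computes the low-pass corner frequency.  The actual component values are not given here, so the ones used are purely illustrative assumptions:

    # Corner frequency of the R104/C106 DC-servo low-pass: f_c = 1 / (2 * pi * R * C)
    # R and C values below are assumed for illustration - not taken from the schematic.
    from math import pi

    r104 = 1e6     # assumed: 1 Megohm
    c106 = 1e-6    # assumed: 1 uF
    f_c = 1.0 / (2 * pi * r104 * c106)
    print(round(f_c, 3))   # ~0.159 Hz: well below audio, yet able to follow
                           # second-by-second changes in ambient light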
In the above circuit the values of R103 and C104 would be chosen for the specific application.  In a circuit that is to be used at very high light levels, where high sensitivity is not very important, a typical value for R103 would be 100k to 1 Megohm:  Do not use a carbon composition resistor - a carbon film or (better yet) metal film resistor is preferred for reasons of noise.  While tempting, a variable resistor at R103 is also not recommended as these can be a significant source of noise.  If multiple gain ranges are needed, small DIP switches, push-on jumpers or even high-quality relays - wired into the circuit - could be used to select different feedback resistances, keeping in mind that these devices can introduce noise as well as additional stray circuit capacitance.  (Such a relay or switch should be wired on the op amp output side of the feedback resistance(s) rather than at the inverting input.)

C104 is used to compensate for photodiode and other circuit capacitance and without it the high frequency components would rise up (e.g. "peak"), possibly resulting in oscillation and general instability.  Typical values for C104 when using a small-ish photodiode like the BPW34 are 2-10pF:  Using too much capacitance will result in unnecessarily rolling off high frequency components, but will not otherwise cause any problems.  A small trimmer capacitor may be used for C104, either "dialed in" for the desired response and left in permanently or optimally adjusted, measured, and then replaced with an equal-value fixed unit.
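The upper-frequency rolloff produced by C104 across R103 can be estimated from the usual single-pole relationship, as in this short sketch (Python) using the value ranges mentioned above:

    # Feedback-network pole: f = 1 / (2 * pi * R103 * C104)
    from math import pi

    def feedback_pole_hz(r103_ohms, c104_farads):
        return 1.0 / (2 * pi * r103_ohms * c104_farads)

    print(round(feedback_pole_hz(1e6, 5e-12)))      # 1 Megohm with 5 pF  -> ~31.8 kHz
    print(round(feedback_pole_hz(100e3, 10e-12)))   # 100k with 10 pF     -> ~159 kHz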

Again, the reason why the ultimate in sensitivity is not required in a "Daylight Tolerant" circuit is that during the daytime the dominant signal will be thermal noise from the sun - a source strong enough to submerge weak signals anyway:  The receiver need only be sensitive enough to detect the sun noise during daylight hours.

The op amp noted in Figure 4 is the venerable LM833, a reasonably low-noise amplifier that is cheap and readily available (and actually works well down to 7 volts - a bit below its "official" supply rating - allowing the above circuit to be powered from a single 9 volt battery), but practically any low-noise op amp could be used:  Somewhat better performance may be obtained using special low-noise op amps, but these would be "overkill" under daylight conditions.

For nighttime use - where better sensitivity is important - a standard "TIA" amplifier that omits the (potentially noise-contributing) DC feedback loop, along with higher values of Rf, would offer better performance.  For much better low-noise performance under low-light conditions (e.g. 10-20dB better ultimate sensitivity than is possible with standard components at audio frequencies in a TIA configuration), the "Version 3" optical receiver circuit described on the page "Gate Current in a JFET..." - link - is recommended instead.


Additional web pages on related topics:
The web pages linked above also contain links to other, related pages on similar subjects.


[End]

This page stolen from "ka7oei.blogspot.com".

Friday, March 31, 2017

A (somewhat convoluted) means of locking a "binary" (2^n Hz) frequency to a 10 MHz reference

DDS (Direct Digital Synthesis) chips are common these days, with small boards containing the Analog Devices AD9850 available on EvilBay for less than one is likely to pay for the chip by itself!  While these boards are quite neat, they do have a problem (or quirk) in that you are not likely to be able to generate the exact frequency that you want - at least if it is to be an exact integer number of Hz.

Let us take as an example one of those AD9850 DDS boards available on EvilBay.  These come equipped with a 125 MHz crystal oscillator that will likely be accurate to within 10-20 ppm or so, but let us assume that it is exactly 125 MHz.

Other than the 125 MHz clock and some output filtering, the AD9850 DDS chip has nearly everything else that one would need to generate an output from DC to around 60 MHz - the precise limit depending on filtering - and its frequency is set using a 32 bit "tuning word".  The combination of the 125 MHz clock and the 32 bit tuning word means that our frequency resolution is:
  • 125,000,000 / 2^32 = 125,000,000 / 4,294,967,296 = 0.02910383045673370361328125... Hz per step - approximately.
For most purposes, roughly 1/34th of a Hz of resolution would seem to be good enough - and it probably is - but what if you wanted to generate frequencies that are exact multiples of 1 Hz for frequency-comparison purposes, or to produce precise, standard frequencies like 1, 5 or 10 MHz - or even a very precise 1 kHz tone?
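For the curious, here is a short sketch (Python) of the arithmetic: the step size with the stock 125 MHz clock, and how far off the nearest tuning word lands when trying to hit an exact 10 MHz:

    # AD9850-style DDS: f_out = tuning_word * f_clock / 2**32
    f_clock = 125_000_000                     # stock 125 MHz oscillator, assumed exact

    print(f_clock / 2**32)                    # ~0.0291 Hz per step

    target = 10_000_000                       # try for an exact 10 MHz
    word = round(target * 2**32 / f_clock)
    actual = word * f_clock / 2**32
    print(word, actual - target)              # off by roughly +0.009 Hz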

The quick answer to this is to pick a clock frequency that is an exact power of two in Hz:  The closest 2^n value to 125 MHz is 2^27 Hz, or 134.217728 MHz - slightly beyond the ratings of the AD9850, but it is likely to work.  (Depending on the high frequency requirements, half of this frequency - 2^26 Hz, or 67.108864 MHz - might be used instead.  Other clock frequencies that are 2^n divided by an integer, such as 2^n/10, are usable as well.)

What does this change in clock frequency gain for us, then?
  • 2^27 / 2^32 = 0.03125 Hz per step, which is exactly 1/32nd of a Hz.
In this way, very precise frequencies that are a multiple of 1 Hz (and a half-Hertz as well) could be produced.
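A corresponding sketch (Python) with the 2^27 Hz clock shows the step size becoming an exact binary fraction of a Hertz, with exact 1 Hz multiples landing on integer tuning words:

    # With a 2**27 Hz (134.217728 MHz) clock, the step size is exactly 1/32 Hz
    f_clock = 2**27

    print(f_clock / 2**32)                    # 0.03125 Hz, i.e. exactly 1/32 Hz

    for target in (10_000_000, 1_000):        # 10 MHz and a 1 kHz tone
        word = target * 2**32 // f_clock      # integer - no rounding needed
        print(target, word, word * f_clock / 2**32)
    # 10 MHz -> tuning word 320,000,000, output exactly 10,000,000.0 Hz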

(Where does one get a 134.217728 or 67.108864 MHz oscillator?  This would likely require a custom-made crystal/oscillator, or it could be produced using another synthesizer such as an SI5351A that is, itself, referenced to a VCXO.)
Locking the DDS synthesizer to a 10 MHz frequency reference

It would make sense that if you actually need to be able to set your frequency to exact 1 Hz multiples, you would also need to precisely control the reference frequency - likely with a precise 10 MHz reference from a GPS Disciplined Oscillator (GPSDO), a Rubidium frequency reference or something similar.  Unfortunately, 2^27 Hz is an awkward number that doesn't easily relate to a 10 MHz reference.

The most obvious way to do this is to use a second DDS board (they are cheap enough!) clocked from the same 2^27 Hz source, set its output to exactly 10 MHz using a tuning word of 320,000,000 (decimal), compare that output to the local standard, and apply frequency corrections to the 2^27 Hz oscillator.

There is a less-obvious way to do this as well; here is an example using a 2^24 Hz clock:
  • Take the 10 MHz output and divide it by 625 to yield 16.000 kHz
  • Multiply the 16.000 kHz by 32 to yield 512.000 kHz
  • Divide 512 kHz by 125 to yield 4096 Hz
  • Divide any 2n Hz frequency down to 4096 Hz as a basis of comparison
(Depending on one's requirements, the precise method could vary with other frequency combinations possible.  The frequency of 512kHz was used because it was well within the operational range of good, old-fashioned 4000 series CMOS circuitry.)
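A quick sanity check of the chain (Python); everything stays in integers, which is the whole point:

    # 10 MHz reference -> 4096 Hz comparison frequency, all integer ratios
    ref = 10_000_000
    step1 = ref // 625          # 74HC40103 (/125) + 4017 (/5)   -> 16,000 Hz
    step2 = step1 * 32          # 4046 PLL + 4040 counter (x32)  -> 512,000 Hz
    step3 = step2 // 125        # second 40103 (/125)            -> 4,096 Hz
    print(step1, step2, step3)

    # The 2**24 Hz (16.777216 MHz) VCXO side: a 74HC4040 dividing by 4096
    print(2**24 // 4096)        # -> 4,096 Hz, matching the reference side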

Why would anyone use this second method?  Back in the 1980s I built a DDS synthesizer that used a 2^24 Hz (16.777216 MHz) clock and a 24-bit tuning word to provide precise 1 Hz steps, but I also needed to lock that same synthesizer to a high-quality 10 MHz TCXO.  While it would have been possible to build another synthesizer, a 1980s solution to this problem meant that an entire synthesizer circuit (or most of it, anyway) consisting of more than a dozen chips - some of them rather expensive - would have had to be replicated to do this one thing.

This seemingly convoluted solution required only six inexpensive chips - a combination of 74HC (or LS-TTL) and 4000-series CMOS devices.  For example:
  • Dividing the 10 MHz reference by 625:  A 74HC40103 wired as a divide-by-125 followed by a 4017 counter wired as a divide-by-5 to yield 16 kHz.
  • The multiplication of 16 kHz by 32 to 512 kHz:  A 4046 PLL and a 4040 counter to form a synthesizer.
  • Division of 512 kHz to 4096 Hz:  Another 40103 wired as a divide-by-125.
  • Division of 16.777216 MHz down to 4096 Hz:  A 74HC4040 counter dividing by 4096.
The final step to lock the two frequency sources together was to use the venerable 4046 phase detector, outputting the correction voltage to the 16.777216 MHz oscillator.

It's worth noting that because the 4096 Hz output of the divide-by-125 from the 512 kHz source is a pulse rather than a square wave, it is not possible to use the "XOR" phase detector (Phase detector 1) of the 4046 - the flip-flop detector (Phase detector 2) must be used instead.  The "problem" with the flip-flop detector is that when the two frequencies are close, instead of a constant train of pulses at the reference frequency (or twice it), one gets occasional, brief pulses as the output of one of the flip-flops drops out of its high-impedance state.  Because these pulses occur (more or less) randomly and comparatively rarely - that is, at a rather low rate - they can get through the loop filter and cause extra jitter on the locked frequency (the 16.777216 MHz oscillator in this case).  The "fix" is to slightly bias the output of the phase comparator toward V+ or ground with a high-value resistor (100k to 4.7 Megohm, depending on the application), which "pulls" the output constantly toward one rail and forces the loop to correct continuously:  Instead of the occasional narrow pulse there is always a string of pulses at a "high-ish" frequency that is easily removed by the loop filter.  With the rather low "loop gain" of this VCXO configuration, the "jitter" caused by the multiplication and division steps really doesn't show up on the 2^24 Hz crystal oscillator being locked.
With the main 16.777216 MHz reference being a VCXO (Voltage-Controlled Crystal Oscillator), the above scheme worked very well, locking to the 10 MHz reference in under a second.  Back in the 1980s the most accurate frequency references that I had were a collection of OCXOs (Oven-Controlled Crystal Oscillators) and TCXOs (Temperature-Compensated Crystal Oscillators), with the 10 MHz units being easily referenced to the off-air signal from WWV to provide an accuracy and stability of around one part in 10^7 or better.  Because, in our example, we are starting out at a much higher frequency (e.g. 134-ish MHz), we would divide this down to 4096 Hz using a combination of 74F or 74Axx logic and a (74HC)4040 counter.

(If our 134-ish MHz clock were produced using an SI5351A synthesizer, the PLL corrections in this scheme would be applied to its clock, which typically operates at around 27 MHz.)

Nowadays, with GPSDOs and second-hand rubidium references being affordable, the accuracy and stability can be improved by several orders of magnitude beyond this.

Having said all of this the question must be asked:  Is any of this still useful?

You never know!


[End]

This page stolen from ka7oei.blogspot.com
 

Tuesday, March 21, 2017

The 1J37B as a replacement for a 1L6?

The rarity of the 1L6:

Owners of the classic Zenith Transoceanic radios from the early-to-mid 1950s will probably be aware of the pain involved if they have to buy a 1L6 tube (or "valve") for their beloved radio:  A "good" 1L6 - seemingly the tube that goes bad most often in these radios - can fetch up to $60 today, a significant fraction of what one might have paid for a second-hand radio to restore.

Figure 1:
The original 1L6 - a "not-too-common" tube even during its heyday.  A "good" one like this is even rarer today!
Click on the image for a larger version.
One of the problems with the 1L6 is that there really aren't any good substitutes, since this 30-ish MHz rated tube, a heptode (a.k.a. "pentagrid converter"), wasn't commonly used in the first place, finding near-exclusive use in higher-priced battery-powered shortwave radios.  One of the few (almost) direct plug-in replacements is the 1U6, which is apparently even rarer than the 1L6 and requires some slight circuit modifications to accommodate its lower (25 mA) filament current.

There are other tubes that will plug in, but they either simply don't work on the higher shortwave bands (e.g. the 1R5, intended for AM broadcast band battery portables rather than the higher shortwave frequencies) or, in the case of the European 1AC6, require a bit of modification and have issues with radio alignment.  There is, of course, the electrically equivalent and comparatively easy-to-find 1LA6, but it is in a completely different form factor (a loctal tube rather than a 7-pin miniature) and requires either an adapter or a different tube socket.  Finally, there are the solid-state replacement options, which are roughly comparable in cost to a known-good 1L6 and, while some work fairly well, they definitely lack that "tube" aura.

What now?

One of the sticking points is that the 1L6 serves as both the local oscillator and the frequency converter:  One of the internal grids of this heptode is used as a sort of "plate" for the oscillator, while a grid closer to the anode takes the signal from the RF amplifier stage and modulates the electron stream, mixing it with the local oscillator to produce the 455 kHz IF - and it does all of this with a filament that consumes just 50 milliamps at about 1.25-1.4 volts.  Without significant rewiring, this pretty much rules out the use of any tube other than one that takes just 50 milliamps at 1.4 volts for its filament!

Having established that there really aren't any other 7-pin miniature tubes that are "close enough", what about broadening the scope to include something entirely different?

Figure 2:
The Russian 1Ж37Б "rod" pentode.  Approximately the same diameter as a ball-point pen, its overall length, minus leads, is about that of a 7-pin miniature tube.  This specimen bears an early 1987 date of manufacture.
Click on the image for a larger version.
This thought came to me at about the time I was first experimenting with some Russian Rod tubes as described in my December 31, 2016 posting, "A simple push-pull amplifier using Russian Rod tubes and power transformers" - link.

While that article discusses the use of a 1Ж18Б (usually translated to "1J18B" or "1Zh18B") pentode, there is another member of that family, the 1Ж37Б (a.k.a. 1J37B or 1Zh37B) that is also a pentode rated for operation to at least 60 MHz.  One property in its favor is that its filament voltage and current are "pretty close" to that of the 1L6:  Anything between 0.9 and 1.4 volts will work and the rated filament current is around 57 milliamps - a tad higher than the 1L6, but something that we can probably live with.

Doing a quick finger-count of the elements of a pentode and comparing that with the number of elements one would need to emulate the 1L6 heptode immediately reveals a problem:  How would one use a pentode as a pentagrid converter when we are short of grids?

The 1J37B to the rescue?

As it turns out, the 1J37B is a unique animal:  As a result of its construction - using metal rods to form and modulate sheets of electrons rather than having the grid-like structures of "conventional" tubes - it actually has TWO "first" grids that are pretty much identical.  This construct is often likened to a dual-gate MOSFET, but in operation it is probably more akin to two FETs in parallel.
Figure 3:
The bottom-view pin-out and the internal diagram of the 1Ж37Б pentode.
Following the original nomenclature, the "grids" are referenced using the "C" designation - somehow appropriate even in English since this tube does not use "grid" structures at all, but control rods to alter the trajectory of sheets of electrons from the cathode.  As noted in the text there are two "first grids" that operate identically and (in theory) may be used separately, interchangeably or even tied together as a single "grid" with higher transconductance.  Because these tubes manipulate sheets of electrons, they are quite sensitive to magnetic fields!
Click on the image for a larger version.

The internal mechanical layout of the 1J37B is also quite interesting in that it is essentially two tubes in parallel, sharing the same cathode, screen "grid", suppressor "grid" and plate connections.  In the middle, identical sheets of electrons from the cathode travel in two directions, each controlled by its very own "C1" control rod (e.g. C1' and C1").  Beyond C1' and C1", the structures of the screen, suppressor and plate elements are physically mirrored and connected together.

In comparing the specifications of the 1L6 and the 1J37B, the important parameters (e.g. transconductance, capacitance, filament voltage and current) aren't terribly different.  Some of the voltage ratings for the 1J37B - particularly that of the screen, rated for 60 volts maximum - are below what the tube will see when used in place of a 1L6, but those may be dealt with later.


What if we could use one of these two "first grids" and the "screen grid" as the basis of the local oscillator section and simply apply the input signal to be amplified and converted to the other "first grid"?  Because the device is more like two tubes in parallel than one tube with multiple control grids, I wondered if there was enough isolation to allow the oscillator and mixer functions to occur simultaneously.  I was a bit skeptical of this idea, even though I was the one who thought of it (as far as I know).

I decided to try it.

Making the base
Figure 4:
Using masking tape, a "form" was made to set the shape and position of the pins - pieces of 18 AWG wire poked through two layers of masking tape that protect the socket.  After dripping in the epoxy, the pins were moved about to make sure that they were completely surrounded by epoxy.
Click on the image for a larger version.


Rather than mess with the Zenith TransOceanic for the first attempt at this, a friend of mine (Glen, WA7X) rummaged through his collection of old radios and produced an old Motorola battery/AC radio that used 1-volt tubes - including the 1R5, which is (sort of) "pin compatible" with the 1L6.  Being a broadcast band radio, I figured that if the concept was usable at all, the simple, nearly foolproof low-frequency circuits of such a radio would be the place to try it first:  If it worked there, there might be some hope that it would work in the ZTO.

I needed to make a fake tube base but, not having a dud 7-pin miniature tube immediately at hand - and remembering from past experience how difficult it is to solder to the "bloody stumps" of the Dumet-like wires on the carcass of a deceased tube's base - I set about making one.  I first covered the 7-pin socket in the radio with two layers of masking tape and then poked seven lengths of bare 17 or 18 AWG copper wire through this tape and into the socket.  A ring of masking tape was then placed around the outside of these pins and some "5-minute" epoxy was dripped into the middle, taking care not to disturb the copper "pins":  No doubt a small piece of plastic tubing or a taped-together ring cut from a discarded "blister pack" would have made a nicer form than a floppy piece of masking tape, but it did the job.
Figure 5:
After the epoxy had started to set up, it was heated to speed up curing.  After it had adequately set, it was removed from the socket:  Here it is before the wires were trimmed and the tape and excess epoxy were removed.
Click on the image for a larger version.

Working the copper pins back and forth to make sure that they were surrounded with epoxy, I allowed the requisite "5 minutes" for the "fast curing" epoxy to (somewhat) set.  I then heated the contrivance with an SMD hot-air rework gun on its lowest heat (212F, 100C) for several minutes, which caused the epoxy to set hard enough to work with once it had again cooled.

Carefully removing the "base" from the socket and peeling away some of the masking tape I trimmed the seven wires underneath to lengths comparable to that of a typical tube and did similar to the top side.  I then had my 7-pin, solderable "tube base".


From this point on, the wiring of the 1J37B to the base seemed pretty straightforward.

Wiring it up:

For the initial stab at replicating the function of a 1R5 the 1J37B was wired to the 7-pin base as follows:

1J37B pin                      [7-pin base connection for the 1R5]
1 - Filament (-)               [Pin 1]  Filament and suppressor grid
2 - "Grid" 1'                  [Pin 4]  "Oscillator grid" (G1)
3 - Grid 3 (suppressor)        [Pin 1]  Filament and suppressor grid
4 - Filament (+)               [Pin 7]  Filament
5 - "Grid" 1"                  [Pin 6]  "Signal grid" (G4)
6 - "Grid" 2 (screen)          [Pin 3]  "Oscillator plate/grid" (G2)
Plate wire (top)               [Pin 2]  Plate

Or, put another way:

7-pin base connection for the 1R5            [1J37B pin connection]
1 - Filament (-) and suppressor grid         [1 - Filament (-) and 3 - Suppressor grid]
2 - Plate                                    [Top plate wire]
3 - 1L6 "G2"                                 [6 - Screen grid]
4 - Oscillator grid (1L6 "G1")               [2 - Grid 1']
5 - No connection (see text)                 [N/A]
6 - Signal grid (1L6 "G4")                   [5 - Grid 1"]
7 - Filament (+)                             [4 - Filament (+)]

Again, note that applying the word "grid" to the 1J37B, while descriptive of the function, is not accurate:  These "grids" operate more as control rods to deflect/direct the sheet of electrons being emitted from the cathode.

For replacing a 1R5:
Figure 6:
Right at home:  The completed 7-pin miniature tube base in the Motorola "test" radio, in the 1R5's position.
Click on the image for a larger version.


A bit of explanation about pins 1 and 5 is in order at this point.  For the 1L6, pin 5 connects to a pair of grids that surround the "signal" grid (1L6 pin 6), but on the 1R5 the suppressor grid is internally connected to the "low" side of the filament using pins 1 and 5.  Because the 1J37B is a pentode, its suppressor grid must be grounded, which means that it should be connected to the filament's low side as well.

Whoever made the radio could, in theory, have used pin 1 and/or pin 5 for this connection (and for mounting components), and there is no real way of knowing without inspecting the socket wiring.  Because of this it would be a good idea to connect both pins together when emulating a 1R5, unless you know for certain how this connection is made in the radio being used for testing.

For replacing a 1L6:

When using a 1R5 as a "pinch hit" replacement for the 1L6 (it will probably work only up to about 10 MHz), the voltage applied to pin 5, which is nominally at about 85 volts, is effectively shorted to "ground".  In the Zenith TransOceanic H-500 there is a 68k resistor in series with that line, which means that the current will be around 1 milliamp or so, dropping the "85 volt" line - also used on the screen of the RF amplifier - by 3-5 volts, an amount likely not high enough to be noticed.  If the intent is to never use this replacement in lieu of a 1R5, pin 5 may simply be left disconnected.

Trying it out as a 1R5:

For testing it in the "1R5" configuration (e.g. 1R5 pins 1 and 5 connected together) in the Motorola radio, I inserted a 10k resistor in series with the anode lead in order to monitor its current - but despite this inserted loss, the faux 1R5 worked the first time.  The filament voltage across the 1J37B was 1.0-1.1 volts, well within its operational specifications, indicating that the other tube in series with it across the 3 volt "A" battery (a 1S5) was probably seeing an extra 0.25 volts or so on its filament.
Figure 7:
The first prototype - the 1Ж37Б (a.k.a. 1J37B) wired to the 7-pin miniature base as a "1R5".  The two parallel 10k resistors and 0.01 capacitor were inserted in the plate lead to monitor current.  For this prototype the leads, insulated with PTFE spaghetti tubing, were intentionally left at their original length to facilitate rewiring and inserting other components (resistors, capacitors, etc.) into the circuit during testing.  For a "final" configuration the leads would be shortened considerably.
Click on the image for a larger version.


There was a minor problem, however:  At some frequencies the radio would start squealing - something that it did not do with the 1R5.  It is possible that there is a failing component somewhere in this radio, or it may be that this faux 1R5 has enough extra gain to cause circuit instability - or a combination of both.  Despite this minor quirk the results were encouraging, as it is usually easier to dispose of extra gain than to obtain it in the first place.

As a 1L6:


I then decided to try this faux 1R5 in my Zenith TransOceanic H500 with pins 1 and 5 still connected together.  While it seemed to work fine on the AM broadcast band, the radio got increasingly deaf with each higher band.  A quick peek with a spectrum analyzer on a service monitor showed that the oscillator was working on all bands, but it was always low in frequency, causing mis-tracking of the RF filtering, with the error increasing as one went up in frequency - low by about 600 kHz on the highest (16 meter) band.

There was another problem:  On 19 meters the radio started to oscillate, behaving like a regenerative receiver on the verge of oscillation, and on 16 meters there was just solid hash, indicative of instability - likely because of excess gain.  Referring back to the 1J37B specifications, I'd noted before that the maximum indicated screen voltage was on the order of 60 volts - but nearly 90 volts was being applied in the TransOceanic.  Because of the rather low current pulled by the screen grid (being used as the "plate" for the local oscillator) and the still-within-specs plate current (around 3 milliamps), I wasn't particularly worried about violating this voltage rating, as there is no actual delicate "grid" to be damaged - but it occurred to me that the gain could be reduced a bit by lowering the screen potential.  With a bit of experimentation I determined that a 33k resistor paralleled with a 1000pF capacitor, placed in series with pin 3 of the 1L6 socket, reduced the screen voltage to around 65 volts - still a bit above its specifications - and this change resulted in unconditionally stable operation.
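As a rough check of those numbers (the screen current is inferred from the quoted voltages, not measured), the drop across the 33k resistor implies something under a milliamp of screen current:

    # Screen-dropping resistor: inferred screen current from the quoted voltages
    v_supply = 90.0          # approximate voltage applied in the TransOceanic
    v_screen = 65.0          # screen voltage after adding the series resistor
    r_series = 33e3          # the 33k series resistor
    print((v_supply - v_screen) / r_series * 1e3)   # ~0.76 mA of screen current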

Disconnecting the now-unnecessary pin 5 and wielding an alignment tool, I went to work re-tweaking the radio.  For all but the 16 meter band, the local oscillator adjustment was well within the range of the various coils and capacitors, but on 16 meters, even removing the local oscillator coil's slug brought it only to within about 400 kHz (low) of where it should have been.

On the lower bands - particularly AM broadcast, 2-4 MHz, 4-8 MHz and 31 meters - the radio's sensitivity was reasonably good:  not quite up to that of the 1L6 on 31 meters, but perfectly usable nonetheless.  On the higher bands, 25 and 19 meters, I could still hear a bit of ambient atmospheric noise and those stations for which propagation was extant but, as on 31 meters, the receive sensitivity was still a bit low, indicating the need for yet more tweaking.

More tweaking and testing:

I later did a bit more experimentation, adjusting bias and re-dressing the leads, but I could not affect the 16 meter tuning range significantly enough to bring it back into dial calibration, nor could I make a "dramatic" improvement in the high-band sensitivity.  If I'd replaced the core in the 16 meter oscillator coil with an aluminum or brass slug I may have been able to drag it up to frequency, but I didn't try it.

Inconsistency?

I later prepared another 1J37B and wired it in a manner identical to the first (shown in the previous pictures, but with shorter leads) - yet, interestingly, it behaved remarkably differently from the first:  It was much more prone to bouts of spurious oscillation (e.g. broadband noise) and fitful, intermittent local oscillator operation - a state not dramatically affected by swapping the two "first grids", C1' and C1".  Otherwise, the tube behaved about the same as the first in terms of DC current.

What this told me is that my initial configuration - using the "screen grid" as the oscillator plate and applying the RF signal to be mixed to the other "first grid" - may not be the best approach, as my initial hunch had suggested - particularly in light of the fact that two seemingly identical tubes, both with fairly similar DC characteristics, behaved radically differently in this circuit:  a strong indicator of a "non-optimal" circuit topology that relies on the "quirks" of each individual tube!

In the future I may reconfigure the circuit a bit to see if configuring the tube in some sort of "Gammatron" configuration may yield better results - but that will have to wait until I get more free time...

* * *

Additional information about the 1J37B and the "Gammatron" mode of tube operation:

  • The 1J37B at the Radiomuseum - link (Includes discussions about operating the tube as a Gammatron.)
  •  Russian rod tubes at "Radicalvalves" - link (Information about the 1J37B and other "rod" tubes.)

[End]

This page stolen from ka7oei.blogspot.com