H: Safe route clearance on PCB?
Some chips with high input voltages have a very tiny package.
For instance, the power monitor INA226 can handle a bus voltage of up to 36V, yet it comes in an MSOP8 package; the distance between two pins in its recommended layout is only 0.22mm. Can such a short distance be safe for 36V?
Put another way, how much voltage can a clearance of 0.2mm handle?
Thanks.
AI: Arcing voltage thru air varies a lot since air varies a lot. However, about 1 kV/mm is a rough guide. Of course you want to be well below the arcing voltage on a circuit board. At half that, 220 µm would mean 110 V, so 36 V sounds safe enough.
Usually the PCB pads extend a little past the device pins, so the clearance between pads is the limiting factor.
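The rule of thumb above can be turned into a throwaway calculation. Note that the 1 kV/mm figure and the 2x derating are rough assumptions for illustration, not values from any standard:

```python
# Rough air-gap withstand estimate from the ~1 kV/mm rule of thumb.
KV_PER_MM = 1000.0   # very rough arcing field strength of air, in V/mm
DERATE = 0.5         # stay well below the arcing voltage

def max_safe_voltage(clearance_mm):
    """Crude 'comfortable' working voltage for an air gap of the given width."""
    return clearance_mm * KV_PER_MM * DERATE

print(max_safe_voltage(0.22))  # 0.22 mm MSOP-8 pin gap -> about 110 V
```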
Added
A comment by W5VO reminded me of something I should have mentioned.
What I said above is a very rough guide that you want to derate heavily. That may be good enough for personal projects. However, for real commercial products, there are likely regulatory requirements that apply. Some of these apply by law, while others are standards you have to meet in order to get third-party approvals. Most of the time there is little practical distinction: they are simply rules you have to follow.
There are different requirements depending on what the device is intended for, where it will be used, and what third-party certification the buyer or a reseller insists on. For example, patient-touching medical devices in use in the United States must meet certain creepage and clearance distances by law. These tend to be much more conservative than other rules. Intrinsic Safety (a standard for electrical equipment in hazardous locations) also uses much more conservative rules than those for something like office equipment.
W5VO mentioned IPC rules, but there are also rules from the IEC, local electrical codes, private certifying agencies like UL and FM, etc. Sometimes the hardest part of the design is figuring out what standards actually apply. |
H: Multiple signals connected to relay through optocoupler
Is it possible to use multiple optocoupler transistors as switches connected in parallel to the coil of the relay? I am worried about slight voltage differences across collector-emitter. Just like the image, only with multiple parallel optocouplers.
edit: I will just use a transistor, as somebody suggested, in the well-known OR configuration
AI: Opto-coupler outputs in parallel is a bad idea unless you include some means for them to share current properly. Just connecting them in parallel doesn't do that.
In this case, the reason seems to be to get higher current capability on the isolated side. That can be achieved with a transistor amplifying the current thru the opto instead of using more optos.
You also seem to be using the same ground on either side of the opto. In that case, lose the opto altogether and replace it with a transistor. Those have much higher gain than optos, so a single transistor can be driven directly from a microcontroller output on one side, and drive the relay directly on the other side. Remember that relays provide isolation too. This circuit would still be isolated between the microcontroller and the switched contacts. |
H: Is this electrically safe?
"earth" is connected to exposed touch-able parts on this coffee grinder. It does bite! It has the double insulated mark which per my standards is incorrect.
It has an unearthed input.
Schematic: https://i.stack.imgur.com/25FOs.jpg
AI: The tingle you feel is the current passing through the interference-suppression capacitor connected to the live leg of the mains input. The metal parts are effectively held at half the line voltage through a very high impedance (a small capacitance at 50/60 Hz).
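For a sense of scale, the current through such a capacitor can be estimated from I = 2·π·f·C·V. The 2.2 nF below is an assumed typical suppression-capacitor value, not taken from the schematic:

```python
import math

# RMS leakage current through a line-to-chassis suppression capacitor.
def cap_leakage_ma(c_farads, v_rms=230.0, f_hz=50.0):
    """Current through a capacitor across the mains: I = 2*pi*f*C*V, in mA."""
    return 2 * math.pi * f_hz * c_farads * v_rms * 1000

print(round(cap_leakage_ma(2.2e-9), 3))  # ~0.16 mA: enough to tingle
```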
It is unlikely to hurt you unless the caps were to fail. The double insulated symbol is probably optimistic. I believe there need to be two insulating barriers between live and any exposed metal parts before this symbol is appropriate, in this case you have one cap that needs to fail before you get full line voltage, a bit unsettling but not all that rare.
If you are up to the task, you could replace the capacitors with two modern 500V units if you suspect they are faulty; the tingling may be reduced but is unlikely to be eliminated.
EDIT:
I forgot to add that if you have an earthed plug (and, as Andy mentioned, a 3-conductor power cable) and it is in an earthed outlet, you should not be able to feel any tingle, and any such leakage should trip the residual current circuit interrupter if it exceeds about 10 to 30 mA.
Sadly in Europe you can plug an earthed Schuko plug into an old unearthed socket so the earth protection situation is a bit primitive. |
H: Big loop gain vs. big bandwidth
If one was looking for an audio amplifier of high quality, with very low distortion, which one of these two options would be better?
Large loop gain (i.e. \$af\$) but small open loop bandwidth
Small loop gain but large open loop bandwidth
I think that the first option would be the best one, because a large loop gain would mean that the bandwidth of the final circuit (namely, the closed loop bandwidth) would be increased by a factor of \$(1+af)\$, so the bandwidth would not be a problem. But also, distortion would decrease by the same factor, so, as I see it, the first option guarantees a large bandwidth and a decrease in distortion, while the second one only offers a large bandwidth.
However, this is a very general reasoning. I don't really know where that decrease in distortion comes from: does it depend on the position of the open loop poles? Does it depend on anything as well as on the loop gain?
So the specific question would be: which one of the two options would you choose for a very low distortion audio amplifier, and why?
AI: If you want an amp with low distortion, look at the distortion numbers. It shouldn't matter to you how that is achieved under the hood, only that it is.
There are all kinds of ways to trade off parameters in an audio amplifier. Taking one or two parameters in isolation doesn't mean anything. Look at the result, not the method.
That said, simply using large open loop gain, then global feedback to fix everything, isn't how it's generally done. The problem is that such a system tends to suffer from TIM (transient intermodulation distortion). The better systems tend to use some local feedback in each stage, with the overall open loop gain not wildly above the desired closed loop gain. Then moderate application of negative feedback keeps the closed loop gain predictable and the frequency response flat.
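The trade-off in the question can be sketched numerically: both the closed-loop gain and the forward-path distortion are divided by the loop-gain factor (1 + af). The example numbers below are illustrative only:

```python
# Negative feedback: closed-loop gain and residual distortion both
# scale by 1/(1 + a*f), where a*f is the loop gain.
def closed_loop(a, f, open_loop_thd_pct):
    acl = a / (1 + a * f)
    thd = open_loop_thd_pct / (1 + a * f)
    return acl, thd

# e.g. open-loop gain 100k, feedback fraction 1/20, 1% open-loop THD
gain, thd = closed_loop(a=100_000, f=1 / 20, open_loop_thd_pct=1.0)
print(f"closed-loop gain ~{gain:.2f}, residual THD ~{thd:.5f}%")
```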
Again though, measure results, not how they were achieved. There is more than one way to design good audio amplifiers. |
H: Heating up water with electrical current
Is it possible that (isolated) wires carrying high current, such as 200A, passing through water will heat up that water like a boiler?
I have been told by someone that a fuse panel in a building had its bottom immersed in water and apparently this created a lot of heat and steam. Does this make any sense?
I know that inductive heating COULD possibly occur, but we are speaking of 50Hz and no coils, just straight wires. Usually one would expect that with frequencies in the kHz range. Resistor-type heating would probably not have occurred either, since wires in a fuse panel would be highly conductive.
AI: There is no inductive heating, since water is not magnetic (in any working sense). So if only an insulated wire contacted the water, there would be no current flow and no heating. So your story (if no wire contacts the water) is incorrect; but since power wiring panels have lots of bare voltage-carrying conductors in them, I assume someone simply got the details wrong.
Providing the water is impure enough to conduct there will be current flow and therefore heat generated in the water if the wires contact the fluid. Depending on the voltage available there is likely to be lots of heat and steam. There are many (mostly Chinese) shower and water heaters that use this very method, although it is potentially hazardous. |
H: Why does the emitter follower clip?
The emitter follower below cannot follow the base voltage Vb after some point:
Vin input is DC sweeping from +10V down to -10V.
The output voltage Vout however stops following Vb at some particular voltage and settles as in the below plot:
Why is that happening?
edit:
Here is my latest understanding in step-by-step fashion (let me know if it is wrong):
Starting the scenario with the transistor in the active region...
As long as the current flows from Vcc to both "Vee and GND" the emitter voltage follows the base voltage.
But decreasing the base voltage gradually will decrease the base current.
This in turn will gradually decrease Ic = beta*Ib (the current passing through the transistor down to the -12V terminal and GND).
There is a moment where the base current approaches zero, so Ic approaches zero.
At this point the transistor starts going into the "cut-off region", i.e. it stops passing any current.
At that moment the transistor starts shutting off, leaving almost no current through collector to emitter.
Now the current starts to flow from GND to -12V, where Rload and Re form a voltage divider.
The emitter voltage starts to settle to around -2V.
After this point, lowering the base voltage even further, to below about -1.2V, will start to reverse-bias the base-emitter junction.
This is because the emitter voltage has settled at -2V and the transistor is off.
The emitter-follower rule doesn't hold anymore.
If the load weren't there, the base voltage wouldn't be limited to -2V.
AI: The current in the transistor cannot go below zero although it can go to a very high positive current.
When the input is going in the negative direction, the current in the transistor reduces until it reaches zero. If the input goes further negative, the base of the transistor becomes reverse-biased and the transistor current remains at zero - it cannot drive a negative current into the load.
When the current in the transistor reaches zero the only current driving the load is that provided by Re and the -12V supply. This will provide about 2mA into the load.
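With the transistor cut off, the output floor is just the Re/load divider from the -12 V rail. The component values below are assumptions chosen to reproduce the ~-2 V / ~2 mA figures above, not values from the actual schematic:

```python
# Output floor of the emitter follower once the NPN is cut off:
# Re and the -12 V rail form a simple divider with the load.
VEE = -12.0
RE = 5_000.0      # emitter resistor (assumed value)
RLOAD = 1_000.0   # load resistor (assumed value)

v_floor = VEE * RLOAD / (RE + RLOAD)        # transistor off -> plain divider
i_load_ma = abs(v_floor) / RLOAD * 1000     # current Re pulls through the load

print(f"output settles near {v_floor:.1f} V, ~{i_load_ma:.1f} mA into the load")
```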
In the positive direction the current in the transistor will keep increasing until the emitter of the transistor almost reaches the positive 12V rail (the input will be slightly above 12V at that point).
You can improve the output in the negative direction by reducing the value of Re - it may need to be significantly lower than the load.
Practical circuits with emitter follower outputs will usually have a complementary output stage, with an NPN to provide the positive side and a PNP to provide the negative side. That can then provide a symmetrical output.
For example this is the output stage of an LM358 opamp: |
H: The use of cyclic prefix in OFDM to mitigate Intercarrier Interference
I understand that a guard interval (GI) is required to eliminate ISI in OFDM-based systems. In this context, an empty GI could be used.
But why does an empty GI not remove Intercarrier Interference (ICI)?
AI: The discontinuity between symbols has spectral components higher than the channel width, and because individual channels are not bandpass filtered, it spreads into adjacent channels.
This still happens when the guard interval has a constant value (e.g. zero) -- you have two discontinuities then, one at the beginning and one at the end of the guard interval. The length of the guard interval isolates symbols on the same channel in time only.
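The cyclic-prefix construction itself is trivial: the tail of the symbol is copied to its front, so the waveform continues smoothly across the symbol boundary for any delay up to the prefix length. A minimal sketch:

```python
# A cyclic prefix copies the symbol's tail to its front, so there is no
# discontinuity at the end of the guard interval (an empty/zero GI would
# prepend zeros instead, creating discontinuities at both edges).
def add_cyclic_prefix(symbol, prefix_len):
    return symbol[-prefix_len:] + symbol

sym = [0, 1, 2, 3, 4, 5, 6, 7]   # one OFDM symbol (time-domain samples)
tx = add_cyclic_prefix(sym, 2)
print(tx)  # [6, 7, 0, 1, 2, 3, 4, 5, 6, 7]
```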
With cyclic prefix, you still get a discontinuity at the beginning of the guard interval (where it doesn't matter, because anything received there is ignored), but not at the end of it, where it would affect the following symbol. |
H: If I make the tungsten filament on a light bulb thinner with more twists will it make more heat?
If I make an incandescent light bulb with a sturdy and thicker filament, will it also increase the temperature or just make it more durable?
AI: In a tungsten bulb, the light output is related to both the surface area of the filament and its temperature, while the heat output is simply the input power minus the light power (usually around 5% of the input power, up to 10% for halogens). A thicker filament decreases the resistance, which causes more current to flow at the same voltage, resulting in more power and thus more heat; but a thicker filament at constant power won't necessarily produce more heat, since power out = power in.
Heat (watts) ~= 0.95 x input power (watts) for an incandescent globe.
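That relationship in numbers, with an assumed 529 Ω hot filament resistance (chosen so it gives a 100 W bulb at 230 V):

```python
# Thicker filament -> lower resistance -> more power at the same voltage.
def bulb_power_w(v_rms, r_filament_ohms):
    return v_rms ** 2 / r_filament_ohms

def heat_w(p_in, light_fraction=0.05):   # ~5% light for a plain incandescent
    return p_in * (1 - light_fraction)

p = bulb_power_w(230, 529)   # assumed 529 ohm hot resistance -> 100 W bulb
print(heat_w(p))             # ~95 W of that comes out as heat
```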
Note: A tungsten heat globe will put out marginally more heat and marginally less light (I say marginally because even in the light globe, 95% of the power is heat anyway) |
H: Does Op-Amp 741 boost the frequency of an input signal
I am trying to design an electronic circuit which does the job of a dog whistle. So basically when a human whistles using the mic of the circuit, the circuit boosts the frequency to at least 50kHz or so. I am thinking to use op-amps in the circuit to boost the frequency but I am not sure if it will boost the frequency. Can anyone help me with this please?
AI: There have been interesting frequency doublers that use fullwave rectification of a clipped signal to get the loudest signal component doubled in frequency.
With some work you may be able to do this. Use an ALC, a suitable band-pass filter (say 8 to 14 kHz), an amp and clipper, a fullwave rectifier, a level shift, an amp and a HiFi tweeter.
The other alternative in the modern world is to use a DSP or fast micro-controller and do continuous Fast Fourier Transform on an input signal and adjust gain to keep the calculations sane.
EDIT:
I forgot to add in the DSP option you would have to shift the spectrum in the frequency domain and then do an inverse FFT. The signal quality does suffer but that is normal for digital time<>frequency convolutions.
Another option was to see if any of the old pitch-shifting ICs or gadgets can still be found. They were popular as voice changers, and I seem to remember some could do simple pitch changes. I do not recall what method they used - perhaps a simple DSB or AM modulation and then back again at an offset frequency. The (voice) signals would be garbled pretty badly, as the upper and lower sidebands would overlap depending on where the frequency fold occurred.
A possible analogue-domain solution is to modulate the signal into SSB at some handy intermediate frequency with a high-quality sideband-reduction filter, then modulate it back down again to an offset frequency and high-pass filter to clean it up. The quality would be better than the DSB/AM option but still hard on the ears; perhaps the dog will not mind. This would be more involved but is sure to have been done before. This (and the previous) technique would allow for frequency shifting as opposed to doubling.
I just remembered that the fix for the very poor performance of the fullwave-rectified solution was to use an analogue squaring/self-multiplying circuit, as this would preserve the amplitude somewhat. This is sensitive to amplitude and signal noise, but a band-pass filter and AGC to precede it might get something out. This would be a frequency doubler like the fullwave-rectified system.
H: Why does the Collector-Emitter voltage need to be ≥ 0.3 V?
I have been going over transistors in class lately, and one of the "rules" is that the transistor needs a minimum voltage drop between C and E, i.e. VCE ≥ 0.3 V.
However, no derivation or much explanation was provided to understand why this is true. I have searched in some sites for this as well but haven't found an explanation. Is there something I am not seeing?
AI: If you are designing a transistor circuit to switch and to have low losses you would like Vce to be as low as possible. You would like the transistor to be well into saturation. For a relatively low current single silicon transistor something like 300mV is a reasonable measure of it being well into saturation.
For example, consider the 2N4401, a common NPN TO-92 transistor. Here is the typical behavior in saturation:
As you can see Vce(sat) of 300mV covers the useful range of the transistor.
You could equally well pick 100mV for collector currents of less than 100mA, it's just a reasonable choice based on how real transistors behave.
Neither may be realistic for power transistors. Here is the similar graph for the 15A 2N3055:
As you can see, even at 10A and a forced beta of 10, you'd be lucky to get under a few volts Vce. |
H: Running resistors above power rating, but with temperature within spec
I am building a circuit to heat a very small area to a relatively high (~100C) temperature. To achieve this, I'm using an SMD resistor controlled by a MOSFET, with an adjacent thermistor for temperature monitoring and control. To achieve faster heating I'm considering running a higher power through the resistors than what they are rated for, at least until they reach the needed temperature. As long as I control the temperature (via PWM control) of the resistors and maintain their temperature within what they are rated for (typically around 125C) are there any issues that would result from doing this?
AI: I'd suggest that small SMD resistors would be questionable, but you certainly could use very small power resistors to achieve your goal.
This type of power resistor might suit your need: http://www.vishay.com/docs/51055/d2to20.pdf
You have not been specific about your needs in terms of volume, but resistors like this have an SOA out to 140 degC. It would seem possible to drive this sort of device with a PWM to provide the temperature increase needed.
I've also used two TO-220 power transistors clamped around a crystal to make a crystal oven at 80 degC...worked well.
Depending on your application (and since you are already using a FET to drive the resistors) you could simply use the FET as the heat source. By holding VDD constant for an N-Channel FET at some voltage level you can dissipate all the heat required directly in the FET. All the same rules apply in terms of temperature, but most power FETs have an SOA out to at least 125 degC, so 100 degC sounds practical. You could sense the current through the FET using a small-value resistor.
You could even use a linear power regulator in anything from TO-92 to TO-220 or even SMD D2PAK to provide the heat. Just configure in a constant current circuit and modulate it from the feedback pin divider. |
H: Can a 230-volt device use 220-volt mains?
I have 230 V appliances; they are gifts from Germany. I live in Vietnam, where the mains voltage is 220 V.
So can I use those appliances safely in my country?
AI: In the EU, it was not always 230VAC; it used to be 220VAC. Some decades ago they changed to 230VAC, but the old 220VAC machines are all still working.
http://www.schneider-electric.co.uk/en/faqs/FA144717/
Currently, ALL Western European supplies are classified 230VAC. In
reality there is no 230VAC supply unless you create one locally.
230VAC was a “standard” created during European "harmonisation" to
give a single voltage standard across Western Europe, including UK and
Irish Republic.
Although the ideal would have been to have a single voltage there were
too many political, financial and technical obstacles to reduce UK
voltage to European levels or to increase European voltage to UK
levels, so a new standard was created to cover both. This was achieved
by changing the tolerances of previously existing supply standards. UK
voltage to 240VAC + 6% and - 10% and European to 220VAC +10% and -6%
(thereby creating a manageable overlap) and we would call these two
combined 230VAC, despite the fact that nobody was intentionally
generating at 230VAC!
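The re-tolerancing described in the quote can be checked with a few lines; the band limits are computed directly from the quoted percentages:

```python
# The "230 VAC" harmonisation in numbers: the old UK and European bands
# were re-toleranced so they overlap, and the overlap is called 230 V.
uk = (240 * 0.90, 240 * 1.06)   # 240 VAC +6% / -10%
eu = (220 * 0.94, 220 * 1.10)   # 220 VAC +10% / -6%

overlap = (max(uk[0], eu[0]), min(uk[1], eu[1]))   # common range ~216-242 V
print(f"UK band {uk}, EU band {eu}, common range {overlap}")
# A 220 V supply (e.g. Vietnam) sits comfortably inside this common range.
```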
You see, it's just politics. Some countries still have 220VAC and the UK still has 240VAC, but we can say it's 230VAC with a wide tolerance window. |
H: need to understand the difference in delay for rising and falling reference
I am rigging up a circuit which needs to generate two signals.
One output signal (R_B) is the same as the input, and the other signal (R_A) is supposed to have a short delay of 5 to 10 ms compared to the input.
The figure below has the waveform and the circuit.
Question:
Why is the delay different when the input rises versus when it falls?
Any hints toward matching the delays are appreciated.
Edit: after adding a resistor to the base pin and a little balancing, I was able to achieve 10 ms:
AI: For the ground to 3.3v swing the voltage across C1 changes from zero to about 0.7 volts where the transistor is turned on. That takes time. But for the 3.3v to ground swing, the voltage across C1 drops below 0.7 volts almost immediately on the way down to zero volts. So the transistor turns off almost immediately.
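The asymmetry, and the effect of the suggested ~1.4 V limit, can be sketched with the standard RC charging equation. The R and C values below are assumptions for illustration, not the actual circuit's:

```python
import math

# Time for an RC exponential from v_start toward v_final to cross v_th.
def t_cross(r, c, v_start, v_final, v_th):
    return r * c * math.log((v_final - v_start) / (v_final - v_th))

R, C, VBE = 10e3, 1e-6, 0.7   # assumed values

# Original circuit: cap charges 0 -> 3.3 V, transistor turns on at 0.7 V...
rise = t_cross(R, C, 0.0, 3.3, VBE)
# ...but on the falling edge it crosses 0.7 V almost immediately.

# Fixed circuit: limit the swing to ~1.4 V so 0.7 V is the midpoint.
up = t_cross(R, C, 0.0, 1.4, VBE)     # charge 0 -> 0.7 on the way to 1.4
down = t_cross(R, C, 1.4, 0.0, VBE)   # discharge 1.4 -> 0.7 on the way to 0
print(f"original rise delay {rise*1e3:.2f} ms (fall is near-instant)")
print(f"fixed circuit: rise {up*1e3:.2f} ms, fall {down*1e3:.2f} ms (both RC*ln 2)")
```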
Redesign the circuit such that the capacitor can charge beyond the Vbe 0.7 limit by adding a resistor in front of the base. Allow the capacitor to charge just to about 1.4 volts. That way it should take about as long for the capacitor to discharge from 1.4 to 0.7 volts on its way to zero volts as it does to charge from zero to 0.7 volts on its way to 1.4 volts. |
H: Relay data ambiguity
I found a relay with some characteristics written on it.
The coil should be powered at 12VDC.
The other data is unclear.
IEC 255 3A 150VAC
5A 240VAC - AC1
5A 28VDC -DC1
So, in AC, the contacts should work at 5A 240VAC - AC1.
What is the meaning of 3A 150VAC vs 5A 240VAC - AC1?
Shouldn't I be able to run 5A 150VAC through my relay?
AI: Different standards use different load conditions (load inductance, temperature), and different definitions of 'end of life' (contact resistance change or total failure). So you would expect different limiting use conditions depending on what standard it's tested to. Look up IEC255 and see what the conditions are.
In the solid state Tesla coiling community, a common user standard is 'turn it up until it explodes, then back off a bit', which results in higher use currents than the boring old data sheet tends to suggest. I wouldn't propose that standard for, say, safety critical flight electronics. |
H: How do GPS satellites refresh their clocks
How do GPS satellites keep their onboard clocks accurate? I assume that they need to get updates from a base station. But how do you make sure that after the update all the satellites are synchronized, i.e. there isn't any phase shift?
You have your base station on earth, and assume that all the satellites you want to update are in line of sight. You send an update command. But, each satellite is a different distance from the base station. There will also be a delay from receiving the command, to updating the internal clock. Some satellites may have newer hardware, which is faster.
If you update the satellites separately, you would need to ensure that your timings of the commands that you send are very accurate. This seems like a difficult thing to get right. Is there a better method that is used in practice?
I guess what I am interested in is say you have a clock at location A. How do you synchronize it with a clock at location B, which is far away from A? You have the message flight time delay, processing delay in B etc.
AI: Clock errors are not corrected, they are compensated in two steps.
1. Error determination
The GPS control segment uses reference receivers in well known locations to determine the actual orbital elements and the clock error of space vehicles. The reference for position is the WGS84 reference frame, for time it is the international atomic time. Even the smallest effects like continental drift and relativistic time dilatation are taken into account.
2. Error Compensation
The onboard clock (in fact, the SV Z-Count, see IS-GPS-200 3.3.4) is not tuned, slewed or reset to compensate for the error. Citing IS-GPS, 20.3.4.2:
Each SV operates on its own SV time
Instead, the offset between UTC and this spacecraft's clock ("GPS time") is broadcast in the navigation message (see IS-GPS 20.3.3.3.1.8). This includes not only the current offset, but also different forecasts ("fit intervals", 20.3.4.4). Normally only the highly precise short-term forecast is relevant; the others would be used if the control segment were inoperable and no uplink were possible.
Likewise, the position error (deviation from nominal orbit) is left uncorrected (this would deplete precious fuel), but is broadcast to receivers by uploading ephemeris data (orbital elements) to the spacecraft.
Time of flight is no issue for the uplink, as the new fit interval data has already been determined in the previous step.
The actual compensation is then done in the receiver (user segment). It applies corrections when relating the observed signal/code phase of different SV.
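The receiver-side compensation is a simple polynomial evaluation of the broadcast coefficients (IS-GPS-200, 20.3.3.3.3.1). The coefficient values below are invented for illustration, and the relativistic correction term is omitted:

```python
# SV clock offset at GPS time t, from the broadcast polynomial
# coefficients af0 (bias), af1 (drift), af2 (drift rate), referenced
# to the clock data reference time toc. Coefficients here are made up.
def sv_clock_offset(t, toc, af0, af1, af2):
    """Clock offset in seconds: af0 + af1*(t-toc) + af2*(t-toc)^2."""
    dt = t - toc
    return af0 + af1 * dt + af2 * dt * dt

# e.g. a few microseconds of bias plus a small drift, two hours after toc
offset = sv_clock_offset(t=7200.0, toc=0.0, af0=4.2e-6, af1=1.0e-11, af2=0.0)
print(f"{offset*1e6:.3f} microseconds")
```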
Exceptional situations
Sometimes, old spacecraft behave in unexpected ways, for example their clocks begin to drift unpredictably. AGI has a website with performance data for onboard clocks. You can see that USA-151's clock (sending PRN28) is a little bit shaky and needs frequent compensation.
If a clock goes wild, or a powered maneuver makes the SV unusable for navigation, the SV sends an "inoperable flag" in its navigation message and is ignored by end users' receivers. |
H: Making an ESC for an RC truck; max motor amps unknown
Thanks to Those Who Take the Time
The RC car I've been using died because its ESC was burnt out by a 3S LiPo battery, and I ultimately want to replace it. Suppose I were to build an electronic speed controller (ESC) for a specific motor, where specs such as the maximum current the motor can take under load without melting down do not appear on the motor and are not listed anywhere else. That leaves me stumped on planning the ESC. My understanding of how an ESC controls power is also missing a few pieces.
As for the motor, it is rated at 4000Kv (RPM per volt); I assume that sets the unloaded maximum speed. I am also under the impression it will eventually get really hot and burn up if it operates at that rate for a prolonged period. I do not want that to happen, but I do want to be able to go just as fast as with the stock ESC without burning out the motor or ESC under load. I think I need to find the maximum current the motor can take under load without burning, so that I can program the ESC and plan the hardware to fit this specific motor. Is that true, or is there a different way I should be looking at this?
As for the ESC itself, it's unclear to me what causes one to fry. If it cannot limit the load to the motor to keep things at a safe temperature, will certain parts fry as they did in my old one? Where does the hardware come into play to prevent this? How do I make an ESC that can manage both a 2S LiPo (7.4V) and a 3S LiPo (11.1V) without burning up, while keeping the motor at a safe temperature? It should have forward and reverse, a BEC for servos, an overheat failsafe, a low-voltage cutoff to save the batteries, and acceleration control. What other things would I need to consider?
To conclude: do I need to find the maximum current under load to keep my motor safe, and what else am I missing? When an ESC fries, what causes it, and how can I prevent it? How should I view the overall programming and hardware for controlling current, along with the other features, to optimize speed while keeping the ESC and motor from burning up?
Here is some info from the HPI Wesbite about the motor
"Upgrade to a zero-maintenance brushless Flux Vektor 4000Kv motor for the >ultimate in performance and ease of use! The Flux Vektor 4000Kv brushless motor >is standard equipment in the Savage XS Flux, making it the perfect partner for >any off-road vehicle for fun running. It can handle 7.4 volt (2S) or 11.1 (3S) >LiPo batteries as well as 7.2v NiMH batteries for a range of possibilities! " >Features:
- High torque/high temperature neodymium magnet
Oversized precision sealed bearings
3.2mm 1/8" hardened steel shaft
4 Pole rotor for more torque
Triple insulated windings for long life
Black anodized case
Easy to Install
Compatible with any sensorless ESC
-- P.S. If you can point me toward some useful texts or experience, that would also be appreciated.
AI: MUCH more information is needed.
With no motor spec the max allowed current is the current it is not damaged by. If you don't know that we can't tell you.
A 4000 Kv motor will nominally run at about 30,000 RPM on a 2S LiPo, about 44,000 RPM on a 3S LiPo, and so on.
You cannot tell from that what is safe. E.g. at 88,000 RPM on a 6S LiPo, it's a race between whether it or you will die first.
Wire thickness, body size for type (in/out runner) and comparison with similar with known specs will give some guide.
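The speed estimates above come straight from the Kv rating; a quick sketch, assuming the usual 3.7 V nominal per LiPo cell:

```python
# Rough no-load speed from the motor's Kv rating (RPM per volt).
# Loaded speed will be lower; this is a ballpark only.
def no_load_rpm(kv, cells, v_per_cell=3.7):   # 3.7 V nominal LiPo cell
    return kv * cells * v_per_cell

print(no_load_rpm(4000, 2))   # 2S: ~29600 RPM
print(no_load_rpm(4000, 3))   # 3S: ~44400 RPM
```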
ESC current is limited by PWM of the available voltage. Making your own implies enough knowledge, or a cookbook, or ...? So - how do you propose to go about making your own? Do you have a design or circuit or ...?
ESC current capability is limited by the ratings of the power FETs used and by the cooling provided. Thermal power loss is largely a factor of FET on resistance and maximum current.
Resistive thermal power per FET ~= (Imax)^2 x Rdson_FET / 3.
e.g. for a 60A ESC, Imax = 60A, which each FET handles for 1/3 of the time.
For a 10 milliohm Rdson FET, thermal power per FET ~=
(Imax)^2 x Rdson_FET / 3
= (60)^2 x 0.010 / 3 = 12 Watts
Some serious heatsinking is needed.
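That per-FET loss arithmetic, as a small reusable sketch:

```python
# Per-FET conduction loss in a 3-phase brushless ESC: each FET carries
# the full motor current for roughly 1/3 of the time.
def fet_loss_w(i_max_a, rds_on_ohms):
    return i_max_a ** 2 * rds_on_ohms / 3

print(fet_loss_w(60, 0.010))   # 10 mOhm at 60 A -> 12 W per FET
print(fet_loss_w(60, 0.002))   #  2 mOhm at 60 A -> ~2.4 W per FET
```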
If you can use FETs with Rdson = 2 milliohms hot at full load, the power per FET drops to 2.4 Watts and far more modest heatsinking suffices.
For more money you can get 1 milliohm (and lower) Rdson FETs, and you may get away with minimal heatsinking if 60A is drawn only occasionally in brief bursts. There are other losses (switching, gate drive capacitance loss, ...) but raw resistive loss tends to dominate until you get quite capable low-Rdson FETs. |
H: PCB Dimensions definition in Altium Designer
I am designing a PCB to be cascaded to this PCB shown in the following picture.
I wonder how can I define the dimensions in Altium designer. Thank you.
AI: Using the PCB Board Wizard with the following settings, 530 mm x 1470 mm:
This will create a 530 mm x 1470 mm board. Verify with the built-in measurement tool (Ctrl + M).
Can be changed later:
View » Board Planning Mode - commands that support interactively changing the shape are available in this mode:
- Redefine Board Shape - select this command to interactively draw a new shape.
- Move Board Vertices - select this command to interactively modify the shape of the board by moving vertices or sliding the edges of the shape.
- Move Board Shape - select this command to move the complete board shape to a new location in the workspace. Note that this command only moves the board shape; other objects that have been placed in the workspace are not moved. To move the board shape as well as all placed objects, select everything to be moved and use the Edit » Move » Move Selection command (switch to 2D Layout Mode first).
Taken from Altium Tech Doc. |
H: Bias voltage of non-inverting op amplifier drops to 0 when input signal connected
I am working on a non-inverting opamp amplifier, using an OPA2134, to amplify a 150mV-peak signal (a played-back 1 kHz sine wave) from a phone audio jack to a ~4V-peak sine wave that drives a LED through an N-channel MOSFET. My goal is to turn on the LED by playing music from the phone.
I have biased the non-inverting input of the opamp to 2.5V using a simple voltage divider from 5V. I want the opamp to work with a single supply (+5V).
List of signals:
- OPA2134 V+: +5V
- OPA2134 V-: 0V (GND)
- Input Vin: 150mV peak, 1kHz sine wave
- Desired output: ~4V peak, 1kHz sine wave
- Bias voltage: 2.5V
The input is ac-coupled, gain is set to unity at dc, and the output is also ac-coupled. I am currently not concerned with the noise from bias divider.
If the input is not connected to the phone audio jack, the input is floating, the non-inverting input of the opamp is at 2.5V, the output of the opamp is also at 2.5V, and the output after the output capacitor is 0V.
If, however, I connect the input to the audio jack, the bias voltage at the non-inverting input drops to 0V. If I then play audio from the phone, the non-inverting input carries a 1kHz 150mV peak signal instead of swinging from 2.35V to 2.65V, and there is nothing at the output, since the opamp cannot work with an input signal so close to its rail.
Here is my circuit, which, simulated in Falstad, operates as desired. For simplicity I set the opamp maximum voltage to 5V and minimum voltage to 0V in Falstad. I know that in real life the OPA2134 cannot operate rail-to-rail.
The signal on the left is the input (-150mV to 150mV), signal on the right is the output (0V - 5V)
AI: Are you sure that the 100nF cap isn't short circuit? Try replacing it. |
H: Calculating pitch and roll from gyro data
Does anyone know if it's possible to calculate pitch and roll angles from gyro data (without an accelerometer)? I'm trying to use an L3GD20 to measure pitch and roll. I can read the angular rates, but how should these values be converted to pitch and roll angles?
Any help appreciated.
AI: If you have the kind of gyro that tells you its rate of rotation, you have to integrate that to get rotational position. This means errors accumulate over time, so any such rotational position has a limited useful lifetime from the last known rotational position.
Cheap MEMS gyros can track rotational position for a few seconds. Systems costing tens of k$ can produce useful rotational position for hours. Mechanical gyros that inherently give you angles instead of rate of rotation don't have the problem of integrating errors, but have other long term effects. Still, such systems, called inertial navigation, have been used in spacecraft and airplanes.
The famous Korean Airlines disaster where a flight from Alaska strayed off course over Kamchatka and got shot down was due to improper initialization of the inertial navigation system on the ground before takeoff. Even very small errors can accumulate to many miles of error after a few hours. |
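To make the drift concrete, here is a minimal sketch (with made-up numbers) of why integrating a rate gyro accumulates error: even a tiny constant bias in the rate reading grows linearly into the angle estimate.

```python
# Integrate angular rate to get angle; a small constant bias drifts linearly.
dt = 0.01                  # 100 Hz sample rate (assumed)
bias = 0.05                # deg/s gyro bias (made-up, plausible for cheap MEMS)
true_rate = 0.0            # the sensor is actually stationary

angle = 0.0
for _ in range(6000):                  # integrate for 60 seconds at 100 Hz
    measured_rate = true_rate + bias   # what a rate gyro would report
    angle += measured_rate * dt        # simple rectangular integration

print(angle)   # ~3 degrees of error after one minute, from the bias alone
```

This is why gyro-only pitch/roll is only useful for short intervals; in practice the bias also wanders, which is worse than this constant-bias sketch.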
H: Problem in FSK demodulation
I have an FSK modulated signal of binary data, which I am trying to recover using a PLL.
The mark and space frequencies are in the range of 1.5MHz to 7MHz.
I am struggling because the PLL output is not stable, in the sense that the signal varies like a sine wave, and this makes the decision process difficult (I am not able to fix the threshold at 0 or some other value).
Here is the PLL LPF output:
I know I can't use this directly for decision making, so I am using a low-pass equiripple filter after the PLL, which gives a pretty good signal I can use. But again the problem is that the signal varies like a sine wave rather than holding flat horizontal levels, which is what I am looking for.
Here how the LPF output is:
All I want is that, to keep the center of the signal "0" at all time instant. How can I achieve that with simulink?
Here are the PLL, block parameters:
Filter: [1]/[1 2*pi*0.001]
VCO input sensitivity: 5 Hz/V
VCO Quiescent frequency: 10000 Hz
VCO initial Phase: 0
VCO output amplitude: 10V
Important note: I am looking at demodulation at different data rates, so I want to vary the sample time in the Bernoulli binary generator from 1 to 0.00001, i.e. from 1 bps to 100 kbps. What I have observed is that as the data rate changes, the character of the PLL filter output changes, which forces me to readjust the PLL for every significant change in this sample time (which I refer to as data rate). If someone can give me tips or a solution to demodulate the signal at every rate without re-adjusting the PLL, it would be a lot of help.
AI: Well, if you can stabilize the PLL a bit more this is going to provide better pay-back when it comes to recovering the data but, in the absence of any improvement, you could try passing the signal through a high pass filter that has a cut-off close to the data rate frequency.
It should remove the lower frequency up-and-down wobble (to a large extent) but will tend to turn your data into edges like this: -
And, to recover the "differentiated" data you could use a comparator with significant hysteresis, or use two thresholds (either side of 0 volts) that set and reset a flip-flop. You could even low pass filter the data signal to make a signal that tends to follow the undulations (coming from the PLL) and use this to modify the thresholds to counter any undulations remaining after high pass filtering.
I've had to do pretty much the same on one job I was involved with and was very happy with the results. The amount of undulation superimposed on the signal was very similar to your example but I would certainly start by trying to improve the PLL, because any gain in removing the unwanted undulation makes life easier on the detection (data slicer) circuit.
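The two-threshold set/reset slicer idea is easy to sketch in software; this is a minimal model, with arbitrary assumed thresholds of ±0.2, applied to an already-sampled signal:

```python
def slice_with_hysteresis(samples, high=0.2, low=-0.2):
    """Comparator with hysteresis: the output flips to 1 only above `high`
    and back to 0 only below `low`, ignoring wobble between the thresholds."""
    out, state = [], 0
    for s in samples:
        if state == 0 and s > high:
            state = 1
        elif state == 1 and s < low:
            state = 0
        out.append(state)
    return out

# Small excursions around zero no longer cause output chatter:
print(slice_with_hysteresis([-1.0, -0.1, 0.1, 1.0, 0.1, -0.1, -1.0]))
# [0, 0, 0, 1, 1, 1, 0]
```

In hardware the same behaviour comes from a comparator with positive feedback or from two comparators driving a flip-flop, as described above.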
Following on from the edit in the question I have a couple of observations about the PLL parameters: -
I think that your VCO quiescent frequency should be about 3 MHz and not 1 Hz
Given that the PLL output is up to 10 volts I think the VCO sensitivity is OK at 1 MHz per volt providing you make the centre (quiescent) frequency as I suggested.
What I have observed here is that as the data rate changes, the
property of the PLL filter output changes and which makes me to
readjust the PLL for every significant change in this sample time
When your data rate slows down and heads towards the low-pass frequency of the PLL loop filter you might get problems that are dependent on your application. Some applications allow the PLL VCO to track the modulated frequency (a fast loop filter) whilst other designs will want to keep the VCO at roughly half way between the two FSK frequencies (slow loop filter). It's unclear as to what your design is. |
H: Using wall wart as 9v Battery Backup
I have a CO/Natural Gas meter that requires a backup battery to be able to function (without beeping), even if it is plugged in. I find that I am replacing batteries every couple of months (I have 4 in my house). I am wondering if I can use a wall wart for the 9v supply. I know that the 'backup in case of power failure' function would no longer be present, but that is not a concern of mine.
Better yet, can I hook up the 9v supply that comes with the unit (9v 400mA) to the 9v battery connector, so that the supplied 9v adaptor effectively runs both the unit's main power and it's backup? I have opened up the unit and the soldering would be incredibly easy.
Thanks in advance for any options that may be forthcoming.
AI: Not knowing the circuit's sensitivity to overvoltage, and knowing that unregulated wall warts can sit about 50% above nominal voltage with no load (roughly 40% from the rectified RMS value and 10% from DCR loss in the transformer at full load), you can certainly choose your own appropriate supply if you understand the risk of overvoltage and the need for reasonable (~10%) regulation.
Why do you have 4 gas meters? You mean carbon monoxide/smoke alarms. How many hours of power loss do you get per 2 months? You could run a 9 V LDO at each alarm from a 12 V SLA battery and feed it over spare wires in a telephone jack if you are only using one line. The monitors may need low ripple, so measure both DC and AC voltage to ensure proper performance with the self-test.
Another battery solution is a 9 V lithium, which lasts 2-3 years or more, from online sources, approx. $10; or use 3x CR123A lithium cells in series at about $1 each in bulk online.
added
From rough analysis in the comments I calculate a 1.6 mA drain on the backup battery (for 8 months' life) while powered from the internal AC-DC supply, due to the automated battery test. A lithium 9 V (guess) has 3x to 5x the capacity of a Duracell alkaline depending on specs, available at Lowe's for about $11. I might consider a 1k to 330R resistor to the internal 9 V DC supply, but measure to null the current drain from the battery over a period.
H: Switches for PCB: Mounting and hole questions
I'm designing a pcb board and encountered a problem for switches. Is it common to follow the schematic and create square holes for mounting the switches?
Please help. Thank you.
AI: For this part to properly be mounted to the PCB, it needs to have holes slightly bigger than the pin sizes. In the datasheet, there is always a section that shows the required hole sizes as well as the measurements for each pin.
The two square holes do not necessarily need to be square. They can be round holes; round holes are the most common type of hole on a PCB for through-hole components. Just make sure the diameter of the round hole exceeds the diagonal of the square pin and you should be fine. You will need a bit more solder to fully mount the switch, but that's nothing to worry about.
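The diagonal of a square pin is its side length times √2, so the minimum round-hole diameter can be sketched as below; the 0.1 mm allowance is an assumed example margin, not a value from any standard:

```python
import math

def min_round_hole(square_pin_side_mm, allowance_mm=0.1):
    """A round hole must exceed the diagonal of a square pin's cross-section,
    plus some allowance for plating and fit (assumed 0.1 mm here)."""
    return square_pin_side_mm * math.sqrt(2) + allowance_mm

print(round(min_round_hole(0.8), 2))  # 0.8 mm square pin -> ~1.23 mm hole
```

Check the switch datasheet's recommended hole sizes first; this is only the geometric lower bound.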
H: Split power supply smoothing capacitor
In all split power supplies there will be at least two capacitors, one in between +V to ground and another in between -V to the ground. In an 300W (RMS output power) amplifier there are two 10 000uF 63V capacitor in each voltage lines.
I have one unit of 68 000uF 125V capacitor which I am planning to install straight between the +V and -V in that power supply.
This will improve or degrade the performance of the amplifier? This connection is obviously safe in electrical perspective, but such connection, at least in my knowledge is never been used in any power supply design.
Pls advice.
AI: @soosai steven, since your speaker is grounded and the amplifier is push-pull for each polarity of the signal, current is best supplied from each supply rail and ground to the speaker to reduce the source impedance, not between V+ and V-. Understand that the ESR of large caps may be significant, and consider that you want roughly 8 Ω × C = 10 × 10 ms for maximum output at the ripple frequency. The ESR of caps can be large and needs careful selection of quality parts plus additional shunt caps.
I will try to explain but it may be complicated.
Check the cap dissipation factor (D.F.), or measure it at 100 Hz. If you have a woofer for 25 Hz, the D.F. at 25 Hz is 4x worse. Compute the ESR and determine the impedance ratio for the woofer.
What damping factor do you want for the woofer? This makes the bass clear or muddy, due to back-EMF from the cone mass.
Normally 50 is weak, 100 is OK, and 1000 is used in the best PAs for bass clarity and punch, like a great bass drum with a damping blanket and tuned port. Thus the ESR must be << 8 Ω / damping factor, or 80 mΩ for DF = 100.
This dissipation factor matters for your 100 Hz (for me, 120 Hz) ripple-current heating (Ipk² × ESR), with around 10x the speaker current at <10% duty cycle for 10% ripple voltage at full load. You may want better than this 10% to prevent 100 Hz distortion.
The damping factor for bass step response (mechanical ringing from coil back-EMF) is related both to the ESR of the cap and to the output impedance of the power amp. The power supply ripple is in series with your PA and speaker, and ripple reduction depends on feedback. Too much open-loop gain and it will oscillate; too little and PSRR suffers. This is a design trade-off. Even with this, a 20 kHz snubber on the output is essential.
It gets more complicated to explain, but PAs have low voltage gain and very high current gain, so supply ripple rejection depends on the feedback and the low open-loop voltage gain, unlike a preamp.
Thus the impedance of the cap, 1/(2πfC), must be very low at the lowest bass frequency, 1 to 2% of the speaker impedance, and the ESR must be less. Since I know you can compute these, I'll let you decide. When I built my own amp in 1973, I used 100,000 µF 63 V caps that were "computer rated" for mainframes, with 100 A ripple current ratings. Then I added 470 µF solid tantalum caps.
68,000 µF will sound better as two caps, one for each rail, but verify the DF or ESR.
If you don't have a scope or spectrum analyzer, use Audacity to sample a scaled signal into the PC aux input port, using a sweep generator and (free) spectrum analysis, or measure the ESR of the caps and compute your distortion and heat loss. The caps are thermally insulated, so the RMS current rating must have good margin for low temperature rise and long life. You can also use simulators and add ESR to see the effect.
Good luck.
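A rough sketch of the arithmetic above, with assumed example values (8 Ω speaker, target damping factor of 100, a 68,000 µF cap, and 25 Hz as the lowest bass frequency):

```python
import math

speaker_ohms = 8.0
damping_factor = 100        # assumed target
C = 68e-3                   # 68,000 uF in farads
f_bass = 25.0               # lowest frequency of interest, Hz

# Source impedance (including cap ESR) should be << speaker / damping factor
esr_budget = speaker_ohms / damping_factor           # 80 mOhm
# Capacitor reactance at the lowest bass frequency
xc = 1 / (2 * math.pi * f_bass * C)                  # ~94 mOhm

print(f"ESR budget: {esr_budget * 1000:.0f} mOhm, Xc at 25 Hz: {xc * 1000:.0f} mOhm")
```

With these numbers the cap's reactance at 25 Hz is already of the same order as the ESR budget, which illustrates why very large, low-ESR caps (or paralleled caps) are needed for tight bass.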
H: Sources on designing a precise feedback loop for designing a linear power supply
I am a student of Electrical Engineering. I was tasked with building a 24V DC to 12V DC linear power supply. One of the requirements is high precision of line/load regulation (to the fourth decimal place). From what I have read, this is achievable by using a feedback loop; however, I cannot find any reliable sources on how to do that.
Where can I read about designing a precise feedback loop?
AI: Precision of a control loop under static conditions is mostly to do with the open-loop gain of the feedback controller and DC errors due to offset and bias currents. All can be made relatively small by using precision components.
That means that your voltage will match the reference within a component of error that is inversely proportional to the open-loop gain of the feedback controller. There are also errors arising from the offset voltage and changes in the offset voltage of the feedback amplifier, and from the amplifier bias currents.
Of course errors in your reference voltage are going to be directly reflected in the output voltage. If there is a voltage divider from the output voltage to your reference then any error in the ratio will be reflected in changes in the output voltage. Offset voltage errors in the amplifier will be multiplied by the inverse of the division ratio.
You can read more about the effect of loop gain in the Analog Devices Mini Tutorial "Op Amp Open-Loop Gain and Open-Loop Gain Nonlinearity". The schematic below shows some sources of error other than loop gain.
simulate this circuit – Schematic created using CircuitLab
Control loop stability and performance under dynamic conditions is not a small subject- you could easily spend a few semesters studying it and still only be scratching the surface. When the input changes or the load changes quickly you will generally see a transient error that is larger than the steady state error as the loop corrects. It may overshoot or undershoot, depending on how the control loop is tuned. If tuned to prevent overshoot or undershoot it will generally be more sluggish in response. |
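The steady-state error from finite open-loop gain can be put in numbers. A minimal sketch, assuming a non-inverting loop with feedback fraction β (here a unity-gain follower, β = 1, as an example):

```python
def closed_loop_gain(open_loop_gain, beta):
    """Non-inverting feedback loop: Acl = A / (1 + A*beta); ideal value is 1/beta."""
    return open_loop_gain / (1 + open_loop_gain * beta)

beta = 1.0   # unity-gain follower (assumed example)
for A in (1e3, 1e5, 1e7):
    ideal = 1 / beta
    err = (ideal - closed_loop_gain(A, beta)) / ideal
    print(f"A = {A:.0e}: relative error = {err:.2e}")
# The error scales roughly as 1/(A*beta): more loop gain, tighter regulation.
```

For a fourth-decimal regulation target (~1e-4 error), this says the loop gain A·β must be well above 10⁴, before even considering offset and bias-current errors.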
H: Program SPI flash on board after soldering
I'm developing a USB DAC using XMOS XHRA-2HPA as USB->I2S interface. XHRA-2HPA is a sort of highly specialized uC which requires a firmware and configuration data to be stored on external SPI flash chip.
I want to install a service connector on the board to be able to reprogram this flash. First, I need to be able to power up the flash chip without powering up the entire system; this can be done with a manual jumper. But I'm unsure whether an unpowered USB interface chip connected in parallel to the same SPI bus can affect proper bus operation.
AI: Why can't you still power the rest of the board up while you are programming the audio signal processor via the service connector? You could hold your processor in reset so it doesn't interfere with anything -- all pins should look like inputs. |
H: Signal Conditioning of the conductive Polymer?
I have a conductive piezoresistive polymer whose resistance changes according to the applied force and strain. What could be the first step in the signal conditioning? The goal of the signal conditioning is a signal output between 0-5 volts for the input force.
AI: The first step would be to determine what the magnitude of the resistance change is going to be in your application. You'll want to check whether that change is linear or highly nonlinear over the range of operation, and determine how much accuracy and resolution you're going to need. You'll also need to determine any additional constraints, such as how much voltage and/or current you can apply to the material.
Once you have that kind of information, you can start to think about how to convert the resistance change into a voltage signal you can measure. |
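One common first conversion, once the resistance range is known, is a simple divider against a fixed reference resistor. A sketch with placeholder values (5 V supply, 10 kΩ fixed resistor, sensor assumed to swing 5 kΩ to 50 kΩ):

```python
def divider_vout(vcc, r_fixed, r_sensor):
    """Sensor at the bottom of a divider: Vout = Vcc * Rs / (Rf + Rs)."""
    return vcc * r_sensor / (r_fixed + r_sensor)

# Assumed example range of the piezoresistive element:
for rs in (5e3, 10e3, 50e3):
    print(f"Rs = {rs / 1e3:.0f} k -> Vout = {divider_vout(5.0, 10e3, rs):.2f} V")
```

Note the divider output is inherently nonlinear in Rs, which is why checking linearity over the operating range, as suggested above, comes first.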
H: Powering CO2 sensor with Battery
I have a K-30 CO2 sensor connected to a Raspberry Pi 3 model B. When I connect the sensor to the Pi via serial and power the sensor with the Pi, my program works fine: the CO2 concentrations are displayed on the screen.
On the other hand, when I try to power the sensor with a portable battery, the program does not work: the Pi recognizes that the sensor is connected, but it doesn't read any values from the sensor.
I know the sensor is receiving power from the battery because there is a light blinking on the sensor and the Pi recognizes that the sensor is there. Also, when I measured the voltage across the wires from the battery, it read 5.3 V, which is sufficient for the sensor according to the datasheet.
I have the sensor ground and power ports connected to the ground and power wires of a USB cable, which then plugs into the battery.
Any ideas as to why this isn't working?
K-30 CO2 sensor
Battery
AI: Three possible reasons.
You didn't connect the ground of the battery/sensor to ground of the RPI.
The battery sees too low a draw and goes to sleep. This is a common power-save feature on newer power banks.
The switching supply in the power bank is too noisy for the sensor. Try adding a filter cap or two.
H: What does MKBS mean?
I found some WIMA MKBS capacitors, not a dielectric I have seem before. WIMA has not been helpful so far (nor Google). Any idea what MKBS means?
AI: Apparently, they are obsolete "metallized polycarbonate capacitors". Not sure how critical your application is, so you may or may not want to trust an eBay listing.
I also found this datasheet for MKB3 and MKB5 capacitors. May be useful, may be not. |
H: Second Order Bandpass Filter vs Second Order High Pass and Low Pass in Series
Is using a second order bandpass filter less effective than using a second order high pass filter and a second order low pass filter one after the other?
I tried a bandpass filter to filter out frequencies between 0.5Hz and 15Hz, and found that there was still quite a bit of noise.
Then I used a second order high pass filter and a second order low pass filter one after the other and found that that eliminated more noise than a bandpass filter.
Why would a bandpass be worse than a high pass and low pass one after the other?
AI: A second order bandpass filter has a first order roll off rate on each side of the pass band. A second order low pass filter in series with a second order high pass filter has second order roll off rates on both sides of the pass band. The two are not equivalent. |
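This difference in skirt steepness can be sketched numerically. Below, an assumed second-order band-pass (one resonant pole pair) is compared with a cascade of second-order Butterworth low-pass and high-pass sections, for the 0.5-15 Hz band from the question:

```python
import math

def mag_bandpass2(f, f_low=0.5, f_high=15.0):
    """Second-order band-pass: a single pole pair, ~20 dB/decade skirts."""
    w0 = 2 * math.pi * math.sqrt(f_low * f_high)    # geometric centre frequency
    bw = 2 * math.pi * (f_high - f_low)             # bandwidth term (w0/Q)
    s = 1j * 2 * math.pi * f
    return abs(bw * s / (s * s + bw * s + w0 * w0))

def mag_cascade(f, f_low=0.5, f_high=15.0):
    """2nd-order Butterworth LP times 2nd-order Butterworth HP: ~40 dB/decade skirts."""
    def lp2(f, fc):
        return 1 / math.sqrt(1 + (f / fc) ** 4)
    def hp2(f, fc):
        return 1 / math.sqrt(1 + (fc / f) ** 4)
    return lp2(f, f_high) * hp2(f, f_low)

for f in (150.0, 1500.0):   # one and two decades above the passband
    print(f, mag_bandpass2(f), mag_cascade(f))
# The cascade falls off roughly twice as fast (in dB) outside the band,
# which matches the extra noise rejection observed in the question.
```

The component values and Butterworth shape are assumptions for illustration; the roll-off-rate comparison is the point.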
H: Can I run custom firmware on a pre-certified RF module without needing to redo full FCC certification?
I need BLE connectivity in a product, so to save costs on FCC certification, Bluetooth SIG membership, etc. it will use a pre-certified module for this purpose.
Usually such modules work in one or multiple of the following ways:
Controlled through an external interface (e.g. AT commands over UART)
Programmable using some proprietary scripting language (e.g. Bluegiga modules)
Fully user programmable
The first two approaches seem logical, the manufacturer of the module has very tight control over what can be done with the hardware (especially RF parameters, etc.).
The third approach is very appealing - most BLE chips already contain powerful microcontrollers, so it make sense to do all the processing on them. However I'm not sure whether one still gets the simplified FCC certification. I was under the impression that firmware is part of FCC testing too, especially since RF parameters (usually) can be changed significantly by software. However I wasn't able to find any warnings/clarifiactions about that in the datasheets/app notes/other docs. Could anyone clarify this a bit?
I'm mainly looking at nRF52 based and Cypress EZ-BLE modules.
AI: It has been my experience that you would have to re-certify if you are replacing the firmware in the module.
Be aware that despite the pre-certified nature of certain modules that ultimately you or your company are responsible for meeting the legal requirements for emissions and immunity of the product as a whole. So even though a module may be pre-certified you still need to test your system with the module present.
The only place I know of where a pre-certified type module may have some decided advantages are as follows:
For an embedded module you at least know that the pre-certified module is much less likely to be the source of issues for your system.
A module like an analogue telephone line modem that probably carries several pre-certs may excuse you from testing the analogue phone line requirements whereas the emissions and immunity part may need to still be qualified with your system as a whole.
Externally applied attachments to your product such as power bricks and cords can come pre-certified and safety inspection labeled such that you do not have to deal with that over again for your product. (A very common reason so many products today have external power bricks or wall warts). |
H: Companion capacitor model in circuit simulation
I'm looking for a very simple example of an algorithm using the capacitor companion model. I have searched online with several different phrasings to try to find an example, but can't find one that gives me all of the information I need.
I just need to see how to model an RC circuit with a DC voltage input. I have found the equations, but I need to see an example of how to implement them; I am trying to implement this in MATLAB.
Equations I have so far are:
\$R_{eq} = \Delta t/(2C)\$ (trapezoidal) or \$\Delta t/C\$ (backward Euler)
\$I_{eq} = -2C V_0/\Delta t - I_0\$
Again, I keep finding these equations and terms like "trapezoidal method" and "Euler's method", but I cannot find a simple, easy-to-follow example of how to actually use these equations.
I understand that the equations use previous terms to calculate the next term, but I can't find an example where the problem is worked out.
I understand I could buy SPICE simulation books, but this is really just a specific question; I don't want to buy a $50 book just to find out how to do this one thing.
simulate this circuit – Schematic created using CircuitLab
Also, I am assuming that at t = 0 the voltage across the capacitor is 0.
AI: An ideal capacitor is modeled using the equation:
\begin{gather}
I(t) = C \frac{\partial (V_a(t) - V_b(t))}{\partial t}
\end{gather}
where \$V_a\$ and \$V_b\$ are the nodal voltages the capacitor is connected across.
Assuming we don't have an analytical expression for these nodal voltages, we can apply some simple approximations for the derivatives.
For example, the backward Euler approximation gives (if this looks familiar, it's because we're approximating the slope by a secant line!):
\begin{gather}
I(t_1) \approx C \frac{(V_a(t_1) - V_b(t_1)) - (V_a(t_0) - V_b(t_0))}{\Delta t}
\end{gather}
Since we are given the initial conditions, we know what \$V_a(t_0)\$ and \$V_b(t_0)\$ are. While everything else is unknown, we have a "quasi-static" problem which can be solved for at \$t_1\$ which no longer has any time derivatives. Re-arranging this approximation, we get:
\begin{gather}
\frac{\Delta t}{C} I(t_1) + (V_a(t_0) - V_b(t_0)) = V_a(t_1) - V_b(t_1)
\end{gather}
Notice that this equation models a system which looks like this:
simulate this circuit – Schematic created using CircuitLab
where
\begin{gather}
R_{th} = \frac{\Delta t}{C}\\
V_{th} = V_a(t_0) - V_b(t_0)
\end{gather}
Now we just need to replace all the capacitors in our circuit with this "quasi-static" capacitor model and we'll have a circuit which we can solve for using the standard techniques for static circuits (modified nodal analysis, mesh analysis, etc.).
Once we know the solution at \$t_1\$, we just rinse and repeat to solve for the solution at \$t_2\$, knowing the solution at \$t_1\$, etc.
More advanced approximations of the time derivative go through the same process, the only difference is the approximation made to get rid of the time derivative is more complicated.
As a last note, if you're using a nodal analysis-like static solver, notice that the approximation circuit introduces a new node. While in theory you could live with this and solve for the voltage at this superfluous node, recall that you can easily replace this Thevenin circuit with its equivalent Norton circuit. This removes the extra unknown, making solving the system of unknowns faster.
As an easy example, take your RC circuit, and replace the capacitor with this quasi-static model:
simulate this circuit
Assuming the capacitor is initially uncharged, then at time \$t_0\$, \$V_{th} = 0\$, so we find that at time \$t_1\$:
\begin{gather}
V_a(t_1) = \frac{R_{th}}{R + R_{th}} V_s(t_{1})
\end{gather}
To advance from \$t_1\$ to \$t_2\$, now \$V_{th}(t_1) = V_a(t_1)\$. So:
\begin{gather}
V_a(t_2) = \frac{R_{th}}{R + R_{th}} (V_s(t_{2}) - V_a(t_1))+ V_a(t_1)
\end{gather}
You can repeat this process indefinitely to find the voltage at \$V_a\$ at time \$t_{n+1}\$, which is given by:
\begin{gather}
V_a(t_{n+1}) = \frac{R_{th}}{R + R_{th}} (V_s(t_{n+1}) - V_{a}(t_n)) + V_a(t_n)
\end{gather}
Note that I manually solved the "static" circuits by hand using standard techniques. Explaining how to write one is outside the scope of this question, I refer you to these notes on modified nodal analysis if you want to learn how to do this. |
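The recurrence above ports directly to MATLAB; as a language-neutral sketch, here is the same loop in Python, with assumed component values (R = 1 kΩ, C = 1 µF, 5 V DC step, Δt = 10 µs):

```python
# Backward-Euler companion-model simulation of a series RC circuit.
import math

def simulate_rc(R, C, Vs, dt, n_steps):
    """Replace C with its companion model (R_th = dt/C in series with
    V_th = previous capacitor voltage) and solve the resulting
    resistive divider at every time step."""
    R_th = dt / C
    Va = 0.0                 # capacitor initially uncharged
    history = [Va]
    for _ in range(n_steps):
        # Quasi-static solve of: Vs -- R -- node a -- R_th -- V_th -- gnd
        Va = R_th / (R + R_th) * (Vs - Va) + Va
        history.append(Va)
    return history

R, C, Vs, dt = 1e3, 1e-6, 5.0, 1e-5
v = simulate_rc(R, C, Vs, dt, 1000)   # 10 ms, i.e. ~10 time constants
# Compare the last sample against the analytic answer Vs*(1 - exp(-t/RC)):
print(v[-1], Vs * (1 - math.exp(-1000 * dt / (R * C))))
```

Shrinking `dt` makes the result converge to the analytic exponential; the trapezoidal companion model does the same with a smaller error per step.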
H: Is there such a thing as filtered LEDs?
I'm trying to "re-engineer" (the original designer doesn't manufacture the board anymore but they were kind enough to provide me with the diagram) a circuit that replaces an incandescent bulb with a LED, and it specifies a white LED "filtered" to 3000K. My question is, does somebody out there make a LED with an integrated filter, or do I have to add an external filter to the LED?
EDIT:
I should have probably said this in the beginning, but the LED is going into a light meter, which is probably why it's filtered down to 3000K. The meter is this one; http://www.jollinger.com/photo/meters/meters/sei_photometer.html
EDIT2:
Now I found the site with the exact thing I'm trying to build, there's definitely a filter on top of that LED. http://www.huwswebthing.talktalk.net/seiled.htm
AI: There are many indicators of light quality which affect the use.
Intensity, to the eye in (milli)candela
density of light in luminous flux or lumens total, for all directions
Beamwidth in degrees to 50% intensity
x,y color coordinates referencing CIE 1931 standards for which neutral daylight white is 0.310,0.310
Correlated Color Temperature (CCT) in kelvin [K]: 4500 to 5000 K is preferred daylight, 3000 K is warm with a yellowish tint, while 6000 K is cool with a bluish tint
Color Rendering Index (CRI), a value up to 100, where white LEDs fall between 89 and 92.
Most LEDs use the same phosphor mix, but the delicate balance of the thin layer determines whether 10 - 20% of the light is converted from narrow blue to broader orange and red by secondary phosphor emission.
This consumes some of the blue energy to create the longer wavelengths.
However, to the critical eye there are many subtle shades of off-white that all have the same CCT of 3000 K, due to the broad tolerance of phosphors.
However, when I hear a spec that calls for colour filtering, I know the application needs to be reviewed to see whether the LED is an "indicator" or an "illuminator" for some area.
So, which is it, and any idea how it is used?
I have been in business for 11 years specifying custom LEDs for Autobahn and Swiss tunnels in a wide variety of traffic and emergency lighting applications.
Added
The luminance of the internal lamp is adjusted using the rheostat (N) until the electrical output of the photoelectric cell (H) matches the calibration mark on the microammeter (A). Turning the base of the instrument drives the two neutral density optical wedges to vary the luminance of the upper diffusion screen (F) image reflected by the mirror spot (C).
Good luck finding a bulb to match the old one. It must be identical in tungsten filament thickness and length and voltage and current vs radiated light. |
H: Making sense of NOR get output
I've been trying to figure out the various logic gates and am currently looking at a NOR gate, as in the picture above. After reading various sites, I still don't get how this works as far as the output is concerned.
Based on this picture and what I know, with A and B as shown: A goes to a pMOS, which gives a 1 since A is 0, and B goes to another pMOS, which gives a 0 since B is 1, so you've got a 1 and a 0 going to the output from that side. On the other side, the nMOS transistors just take A and B and give what they are, so you have another 1 and 0 going to the output from that end. What I don't see is how that translates to 0 for the output. I understand the meaning of NOR, that it's only true (1) if both A and B are zero, but I don't understand what's going on as far as the results of each input going through the MOSFETs to the output.
AI: If both PMOS transistors conduct, Vdd (logic 1) can reach the output. If either of them (or both) is off, the output is pulled to logic 0 instead.
A PMOS conducts when you apply a low voltage (logic 0) to its gate terminal. So if you apply logic 0 to both PMOS gates, you will see logic 1 at the output.
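Since the PMOS devices are in series (pull-up) and the NMOS devices are in parallel (pull-down), the gate can be sketched as two complementary switch networks; exactly one of them conducts for any input combination:

```python
def cmos_nor(a, b):
    """Model of a CMOS NOR gate: series PMOS pull-up, parallel NMOS pull-down.
    A PMOS conducts when its gate is 0; an NMOS conducts when its gate is 1."""
    pull_up = (a == 0) and (b == 0)    # both series PMOS must conduct
    pull_down = (a == 1) or (b == 1)   # either parallel NMOS suffices
    assert pull_up != pull_down        # exactly one network is on (no contention)
    return 1 if pull_up else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, cmos_nor(a, b))
# Only a=0, b=0 gives output 1 -- the NOR truth table.
```

The key point the question misses: the inputs don't "pass through" the transistors to the output; each transistor is a switch that either connects the output to Vdd or to ground.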
H: How does the dish size increase reception range of radio signals?
For example, if you want to transmit a song through FM, does the receiver's dish size actually increase the range at which the signal can be detected? If it does, how?
AI: A bigger dish is simply able to "catch" more of the power being sent through the air. A 100mm parabolic reflector could light a cigarette from sunlight. But it would take a 3m reflector to boil a liter of water.
As Tony Stewart cautioned, however, broadcast FM (down around 100MHz) is rather lower than most practical parabolic reflector dishes can operate at. Although there is a bit of reflection happening in many TV/FM antennas with those multiple rods in a long array, etc. |
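The "catching more power" point scales with aperture area, i.e. with diameter squared. A sketch using the usual ~1 kW/m² sunlight figure (an assumption, and ignoring reflector efficiency):

```python
import math

def captured_watts(dish_diameter_m, flux_w_per_m2=1000.0):
    """Received power scales with aperture area, hence with diameter squared."""
    area = math.pi * (dish_diameter_m / 2) ** 2
    return flux_w_per_m2 * area

print(captured_watts(0.1))   # ~8 W for the 100 mm reflector
print(captured_watts(3.0))   # ~7 kW for the 3 m reflector, 900x more
```

The same area scaling applies to RF: a bigger dish intercepts more of the transmitted wavefront, which is why dish size improves receive range (at frequencies where a dish is practical).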
H: Why can't stepper motors be powered off of 9V batteries?
According to Adafruit, "you can't run motors off of a 9V battery so don't waste your time/batteries!"
I've been searching around and it seems like most sources do use wall adapters, but without justifying why. What's the explanation for this?
AI: It isn't quite accurate to say that you "CAN'T" operate stepper motors from 9V batteries. They actually mean that most stepper motors draw so much current that they won't run very long from a 9V battery before draining it completely. Quite possibly in a matter of minutes. So it is an incredible waste of batteries to power something that needs that much power from a 9V battery. It has the LOWEST power density of ANY commonly available consumer battery. Even less than a AA cell. |
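A back-of-envelope runtime estimate makes the point; the figures are assumptions (a 9 V alkaline holds very roughly 500 mAh, and a small stepper can draw on the order of 1 A):

```python
def runtime_minutes(capacity_mah, load_ma):
    """Crude runtime estimate; ignores voltage sag and the severe capacity
    derating 9 V alkalines suffer at high discharge currents."""
    return capacity_mah / load_ma * 60

print(runtime_minutes(500, 1000))   # ~30 minutes at best; far less in practice
```

Because a 9 V battery's usable capacity collapses at amp-level loads, the real runtime is much worse than this naive division suggests.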
H: Frequency hopping of Si4463
I asked a question about this earlier, but it was closed; I just want to know how to implement simple frequency hopping.
I use RH_RF24 to control an Si4463, and have succeeded in sending some simple data. But I know there may be interference, so I want a way to avoid it.
First, is there a way to choose the best (clearest) frequency in some range, after scanning all frequencies in that range at startup? If so, what is it? A detailed example would be best.
Second, is there any simple and reliable way to do frequency hopping? I want to use this in my model airplane, so I hope it is interference-resistant. I really know nothing about frequency hopping.
Thanks for any answer.
AI: Most RF modules allow you to measure the strength of the incoming signal. They often call this "received signal strength indication" (RSSI). Look for that abbreviation in your datasheets.
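Building on that, the "scan then pick the clearest channel" step can be sketched as below. Note `read_rssi(channel)` is a hypothetical stand-in for whatever tune-and-read-RSSI call your radio library (e.g. RadioHead for the Si4463) actually provides; check its API for the real method:

```python
def quietest_channel(channels, read_rssi, samples=10):
    """Average several RSSI readings per channel and pick the lowest:
    the least ambient energy means the least interference."""
    def avg_rssi(ch):
        return sum(read_rssi(ch) for _ in range(samples)) / samples
    return min(channels, key=avg_rssi)

# Fake RSSI source (dBm) for demonstration: channel 3 is the quietest.
fake_noise = {0: -60, 1: -70, 2: -65, 3: -95, 4: -80}
print(quietest_channel(range(5), lambda ch: fake_noise[ch]))  # 3
```

For true frequency hopping, both ends would instead step through a shared pseudo-random channel list in lockstep, which requires time synchronization between transmitter and receiver.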
H: Are special antennas required for 802.11n connection?
I'm looking for a chip or PCB antenna to add to a module. I would like to know whether there are differences between antennas that make some suitable for an 802.11n WiFi connection and others for 802.11b/g.
AI: No.
Antennas do have a bandwidth; for example, an FM Radio antenna will only efficiently cover around 80 – 110 MHz. If you wanted to listen to a signal that is broader than those 30 MHz, you'd need a different antenna.
802.11n does have 40 MHz channels instead of only the 20 MHz of a/g, but devices for WiFi don't just cover a single 20 MHz channel – they need to work on all WiFi channels, and thus, they are far, far broader from the start. |
H: Measuring isolated AC signals with a single ADC that has common ground inputs
In my setup I have a power transformer (XFMR6) and some load connected to it.
Need to build a device that takes samples of U, I at the input and output of the transformer.
The measuring networks have to be galvanically isolated from the current-carrying networks. I plan to use an ACS712 for the current sensors and I have no problems with it.
To make voltage measurements I plan to use small transformers (XFMR3 and XFMR4) connected to ADC that has only single ended inputs with a common ground. Will there be problems if the two signals come in different phases? What if there will be no grounding?
AI: To make voltage measurements I plan to use small transformers (XFMR3
and XFMR4) connected to ADC that has only single ended inputs with a
common ground.
It's probably a good idea to use transformers to isolate what might be dangerous voltages. Make sure they are types that are safe to use i.e. are effectively double insulated or have an earthed screen.
You will need to bias the AC output to the mid-point of your ADC's input voltage range and take care to ensure you don't put excessive voltages on the ADC pins. This is usually achieved using components as simple as resistors.
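As a sketch of that mid-point biasing: two equal resistors from the ADC reference to ground set the bias, and you then check that the scaled secondary swing stays inside the ADC range. The reference voltage, secondary peak, and resistor values below are illustrative assumptions:

```python
# Sketch: bias a small transformer secondary to mid-scale of a 0..3.3 V ADC.
# All values here are illustrative assumptions, not a tested design.
V_ADC = 3.3          # ADC reference voltage (V)
V_SEC_PEAK = 1.4     # transformer secondary peak after scaling (V), assumed

# Equal resistors from Vref and ground place the bias at mid-scale.
R_TOP = R_BOT = 100e3
v_bias = V_ADC * R_BOT / (R_TOP + R_BOT)

v_max = v_bias + V_SEC_PEAK
v_min = v_bias - V_SEC_PEAK

print(f"bias {v_bias:.2f} V, swing {v_min:.2f}..{v_max:.2f} V")
assert 0 < v_min and v_max < V_ADC, "signal would clip at the ADC"
```

If the check fails, the transformer ratio or an extra divider has to bring the peak down first.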
Will there be problems if the two signals come in
different phases?
If you are only making voltage measurements then the phase doesn't matter but, if you plan to try and calculate power (by multiplying the output with the current sensor output) then phase is very important and needs to be correct.
What if there will be no grounding?
The secondary sides need to be connected to a common point on your ADC and, as said earlier, if the correct transformers are used, no grounding is necessary. |
H: 555 timer monostable operation not so stable
I'm experimenting with a bunch of Texas Instruments NE555P IC's that I bought recently from Amazon, I made a basic monostable operation circuit on a breadboard with the following schematics
simulate this circuit – Schematic created using CircuitLab
When I push the button the 555 gets triggered and the LED lights up for few seconds before going off, pushing it again would repeat the process. This is the expected behaviour indeed, however there are two issues I'm experiencing with this circuit.
The first issue is that closing the switch doesn't always trigger the timer; what happens instead is that the LED goes on for only a fraction of a second. I can sometimes overcome this by holding the button down for a second or so, and it then triggers normally after I release the button.
The second issue is that the trigger duration is larger than \$ 1.1R_{1}C_{1} \$. I tried several different resistance values for \$R_{1}\$ and found the duration to be about \$1.32R_{1}C_{1}\$ instead (in the circuit above the trigger lasted about 7.40 seconds where it was supposed to be 6.16 seconds).
What's causing this erratic behaviour? And why is the trigger duration not consistent with \$1.1R_{1}C_{1}\$?
Here's the breadboard layout:
The last thing I would like to mention is that while testing, a 555 IC got zapped (it stopped triggering altogether and started heating up). What could have caused that? I used insulated tweezers to avoid ESD damage.
AI: Among experienced EEs it is a well-known fact that electrolytic capacitors (like that 100 uF one) can have huge tolerances. These capacitors often have a 20 to 30 % higher value than their nominal value. As these caps are mostly used for supply decoupling, that is usually irrelevant.
You are trying to make a (somewhat) precise timer with a timing of a few seconds. You cannot expect much precision from this circuit, the NE555 is not very well suited for longer timings. Most EEs would use a faster running clock and a counter, the CD4060 (14-stage ripple carry binary counter) is a candidate for that. You can make it monostable if you play with the reset.
To solve both circuit problems I would add a small (10 nF) capacitor in parallel with R2, this will force the Trigger to be slightly longer when the button is pushed. You could try a different combination of R1, C1 like 1 Mohm and 5.6 uF. That way the capacitor is smaller making it easier for the discharge transistor in the NE555 to discharge it. |
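As a side check on the second issue: the expected pulse width is T = 1.1·R·C, and a capacitor running 20 % over its nominal value neatly explains the measured 7.4 s. The 56 kΩ value below is an assumption, back-calculated from the stated 6.16 s with a 100 µF capacitor:

```python
# 555 monostable pulse width T = 1.1 * R * C, and the effect of
# electrolytic capacitor tolerance. R1 is assumed from the stated 6.16 s.
def pulse_width(r_ohm: float, c_farad: float, k: float = 1.1) -> float:
    return k * r_ohm * c_farad

R1, C1 = 56e3, 100e-6                   # assumed nominal values
nominal = pulse_width(R1, C1)           # expected ~6.16 s
high_cap = pulse_width(R1, C1 * 1.20)   # same R, cap 20 % above nominal

print(f"nominal {nominal:.2f} s, with +20% cap {high_cap:.2f} s")
```

The +20 % case lands at about 7.4 s, matching the measurement, so the capacitor tolerance alone accounts for the discrepancy.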
H: I2C sensor not detected on Raspi3
I am trying to set up a pH sensor from Atlas Scientific using a Raspi3, following this Tutorial. I have set the sensor to I2C, as the LED is blue. The circuit is shown here. I am now trying to run the i2cdetect -y 1 command, but it prints - for all the I2C addresses instead of showing 63 at its position. I think I have connected my cables correctly since they look the same as in the tutorial.
Any idea as to why the I2C sensor is not detected?
Thanks.
AI: RX and TX are receiving and transmitting lines for UART communication. I2C lines are labeled SDA (Serial DAta) and SCL (Serial CLock).
Your sensor seems to be capable of UART and I2C communication, the first one being the default.
Following the instructions on the site you linked, you switched the sensor to I2C mode, and the meaning of the IO pins changes:
TX -> SDA
RX -> SCL
Your Raspberry Pi has UART as well as I2C on different IO pins.
But you connected the I2C sensor to the UART Port.
The I2C port is at about the same position in the lower row of the blue adapter board.
For UART communication, connect
Sensor TX <-> RPI RX
Sensor RX <-> RPI TX
And for I2C connect
Sensor TX (SDA) <-> RPI SDA
Sensor RX (SCL) <-> RPI SCL |
H: LED matrix design
I am looking to make 16(columns) x 14(rows) LED matrix. Cathodes are connected to the columns and anodes are connected to the rows.
I want to drive the rows (anodes) with an MCP23017 I/O expander, which is controlled via I2C. I am not using a shift register because I need the microcontroller's GPIO pins for other purposes. The MCP23017 has 16 I/O pins, of which I would use 14 as outputs for the LED matrix.
After the MCP I would connect a TLC59213 LED source driver. What's interesting about the TLC59213 is that it has a D flip-flop which requires a rising clock edge in order to update its outputs. I am not sure how to approach this; I need constant updates since this is an LED matrix. Should I write a small function in code which creates a short pulse, connect one I/O pin to the TLC59213 clock pin, and call that function all the time? Or should I build a small circuit (maybe with a 555 or similar) which constantly generates pulses and connect it to the clock pin? Which of these is best (best here meaning the update is quick enough that you cannot notice it just by looking)? Is there a third, better option?
On the column side (cathodes), I want to put a ULN2003 sink driver at the head of each column. Finally, there is the TLC5940 LED PWM sink driver. I am adding the ULN2003 because the TLC can only sink 120mA; taking into account that all LEDs of one column might be on at a certain moment (16 x 15mA = 240mA), I added the ULN2003.
Here I am having trouble understanding whether I need the ULN2003 or not. Maybe the TLC5940 is capable of sinking short bursts of current, so it won't be harmed?
Also, the system is powered from 5V, while the LED voltage is 3.3V. If all 16 LEDs were lit, they would need 240mA. According to Ohm's law: R = U / I = (5 - 3.3) / 0.24 = 7 ohm. So a 7 ohm resistor will drop 1.7V when the current is 240mA. But when only one LED is lit, the resistor drops only U = I * R = 0.02 * 7 = 0.14V, so the LED would need to drop 4.86V. Not so good! How should I approach this, since the current is constantly changing?
EDIT:
After receiving some answers and analysing again, I chose to replace the TLC59213 with just a regular PMOS and the ULN2003 with an NMOS. Simpler, widely available, even cheaper. The resistor problem still remains.
AI: Several suggestions here.
For the clocking of the TLC59213 use one of the output bits of the I2C port expander that is unused. Each time you output command on I2C bus to update the 14 bits of row data simply set that extra bit to '0'. Immediately follow that with a second I2C command with the same row data but with the extra bit set to '1'. This provides the "clock" for you which is only needed when the row data changes.
For the issue with the resistor and varying voltage drop the easiest solution is to put a suitable series resistor for each LED in the matrix. Have the row drivers just source current without current limiting and the column drivers sink current without current limiting. This way each LED has its own current limiting and you eliminate the problem that you cited.
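The per-LED resistor from that suggestion is a one-line calculation. This sketch ignores the driver output drops for simplicity, which is an assumption; in a real design you would subtract the source and sink driver drops from the supply first:

```python
# Per-LED series resistor for a 5 V matrix, ignoring driver voltage drops
# (an assumption for simplicity; subtract them from v_supply in practice).
def series_resistor(v_supply: float, v_led: float, i_led: float) -> float:
    return (v_supply - v_led) / i_led

r = series_resistor(5.0, 3.3, 0.015)   # one LED at 15 mA
print(f"R = {r:.0f} ohm per LED")
```

This lands around 113 Ω, so a standard 120 Ω part would give roughly 14 mA per LED, and the current no longer depends on how many LEDs in the row are lit.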
You mention running this all on a 5V supply. You may want to reconsider the supply voltage if you intend to keep using the TLC59213 as a row driver. Its outputs will drop as much as 2V or more when sourcing a lot of current, due to the Darlington-type output structure in the device. Discrete PNP or P-FET transistors may be a better choice, or search out arrays of such components in a single package if you want to minimize discretes; in the long run, though, discretes may even be cheaper. P-FETs are advantageous because they can be used without the additional resistors that would be required around PNP transistors. The discrete components should be able to operate as saturated drivers, with very little voltage drop from the supply rail to the row.
You should discard the use of the TLC5940 part. It is designed with adjustable current sink capabilities which have no applicability to matrix configured LEDs. I would recommend that you simply consider replacing that part with another of the MCP23017 port expanders to be the drive controls for the LED columns.
You may use the ULN2003 as the current sink for the columns of the LED matrix, but be advised that these are Darlington-type parts and will have a volt or more of drop across the output when sinking a lot of current. This will eat significantly into your 5V supply budget for the LED matrix. It may make sense to use discrete NPN or N-MOS transistors as the actual current sinks. N-MOS FETs are convenient because they can be applied in this circuit without the bias resistors that NPN bipolar transistors would need. Saturated discrete drivers will have very low voltage drop.
H: How to calculate amp hours when voltage and amperage changes
I have a Lithium battery pack. All I know about this battery is that its no-load voltage is exactly 20.0v when charged. I then connect a load to this battery; I have no direct information about the load, however, the load is connected through a power tester circuit which shows the following information (recorded at the specified time intervals below):
Time (initial): 18.9v, 5.6A
Time (after 1 hr): 16.2v, 5.2A
Time (after another 16 minutes, or at 1.27 hr): 12.0v, 4.4A
QUESTIONS:
What is the AH rating of this battery?
Is a voltage drop of 1.1V significant?
How many watts is the load drawing?
Thanks for your responses.
AI: Without knowing your pack model and chemistry, we have to make certain assumptions.
The standard Ah rating is specified at constant current (CC) over a 20-hour discharge, down to 3 V/cell; 1C is the current numerically equal to that rating.
This may be good for 500 charge cycles at the rated Ah, but
you are using up battery life quickly at 20 times this rate, or 20C.
Both the delivered Ah and the charge-cycle life decrease with faster discharge rates.
The voltage drops quickly at first with any load;
after that, the drop depends on the internal resistance, or effective series resistance (ESR):
ESR = ΔV / ΔA [Ohms], the differential slope of voltage versus current.
given;
Voc=20V
Time (initial): 18.9v, 5.6A
Time (after 1 hr): 16.2v, 5.2A
Time (after another 16 minutes, or at 1.27 hr): 12.0v, 4.4A
Std. LiPo Voltage ( with some load) =3.8V so 20/3.8V = 5.26 cells
rounding down we assume it is 5 in series or 5S
now we can normalize or curve-fit your performance with others
Voc/5 = 4V
V(t0) = 18.9/5 = 3.78V @5.6A, P(t0) = 106 watts (18.9V*5.6A)
V(1h) = 16.2/5 = 3.24V @5.2A, P(1h) = 84 watts (16.2V*5.2A)
V(1.27h) = 12.0/5 = 2.40V @4.4A, P(1.27h) = 53 watts (12.0V*4.4A)
any less than 3V per cell accelerates cell death
it seems to fit a profile of <20C after 1 hr or slightly above blue curve.
after 1h your V=3.24V/cell so from the curve your capacity remaining was ~5%
with the power reduced to 50% of the initial watts, you got an extra 27% of run time.
Amp-hour [Ah] capacity "delivered" is simply the sum of current every
minute averaged over time and minutes converted to hours.
This will be less than default 1C rating.
Watt-hour [Wh] capacity "delivered" is Ah times volts [V*Ah=Wh] every minute, averaged and converted to hours.
Never go beyond knee of the curve and if you buy more packs and use only to 50 % discharge, and never go above 4.1V during charge, you can expect up to 5x the number of life cycles but less Wh capacity in each cycle.
Now you do the math. |
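The "sum every minute" bookkeeping described above can be sketched as a trapezoidal estimate over the three logged points from the question. This is a coarse approximation, not a real per-minute log, so treat the numbers as rough:

```python
# Trapezoidal estimate of charge (Ah) and energy (Wh) delivered,
# using only the three logged points from the question.
t = [0.0, 1.0, 1.27]    # hours
v = [18.9, 16.2, 12.0]  # volts
i = [5.6, 5.2, 4.4]     # amps

ah = sum((i[k] + i[k + 1]) / 2 * (t[k + 1] - t[k]) for k in range(len(t) - 1))
wh = sum((v[k] * i[k] + v[k + 1] * i[k + 1]) / 2 * (t[k + 1] - t[k])
         for k in range(len(t) - 1))

print(f"delivered ~{ah:.2f} Ah, ~{wh:.1f} Wh")
```

With only three samples this comes out near 6.7 Ah and 114 Wh delivered; finer logging would sharpen both figures.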
H: Italic or upright font for circuit components
Should one italicize symbols used to represent electronic components in a circuit diagram?
Some context:
According to NIST, typefaces for symbols should be italic if they represent a quantity or variable. The problem I see is that sometimes this is the case and sometimes it isn't.
For example, a resistor could be represented as "R1", which represents a quantity, namely the resistance of that particular resistor, and hence it should be italicized. In an accompanying text you could write "R1 = 1 kOhm". However, a transistor could be represented as "Q1", which does not represent a single quantity, but rather the type of component used, or multiple quantities. The quantities associated with it could be printed alongside it, such as "W/L = 360/2000 nm" in the case of a MOSFET or "Beta = 400" in the case of a BJT. Hence, I feel that it shouldn't be italicized, since you wouldn't write "M1 = 360/2000 nm". But I also feel that it is inconsistent to italicize some types of components and not others.
So in summary, I can think of 3 ways to do it: (feel free to suggest other ways)
Italicize everything -- Problem: "wrong" style for transistors
Italicize nothing -- Problem: "wrong" style for resistors
Only italicize the components that can be represented by a single quantity -- Problem: inconsistent style between components
So which is the "correct" way to do it for example in scientific publications? (or in any situation)
AI: These rules, ISO 31-13:1992 apply to scientific papers, journals, datasheets and books.
Detailed rules for schematic, logic, and circuit diagrams, and for other document types used in electrotechnology, are provided in the international standard IEC 61082-1:2015, which does not use the same rules.
Learning to adopt standards makes documents look more professional and readable by others, but there are many styles and many existing standards for different document content types. |
H: LDO selection based on power dissipation
Perhaps you could help me confirm my thermal reasoning for choosing an appropriate LDO?
The MIC5225 is an LDO rated for 16 V input and a guaranteed 150 mA output current.
So, parameter-wise, I would guess that I could use this SOT23-5 package and run it from, for instance, 14 V in to 3.3 V out at 100 mA output.
I checked some relevant Q/As about the 'LDO power dissipation' topics:
(can't provide links, since not enough rep)
"LDO selection in Thermal point of view"
"SOT-223 Thermal Pad and Vias" (many references also)
According to a potentiometer model [http://www.ti.com/lit/an/slva118a/slva118a.pdf], the power dissipated is equivalent to the input and output voltage difference and output current. So
(14Vin - 3.3Vout) * 100mAout = 1.1W. (getting suspicious)
Based on my readings, SOT23-5 (with 235C/W of thermal ambient resistance) can't handle it with such input voltage.
(125C - 25C)/235[C/W] = 0.426 W (So when ambient is 25C, the package could dissipate 426mW) no-airflow
I conclude that it's wrong to expect the regulator to work at 16Vin, 3.3V out and 150mA simultaneously, even though it's advertised as such (I couldn't find any other package for this either). Why the false advertising? xD
I then chose LD1117S33TR with SOT-223 which should manage 1.1W with some heat sinking. Space is also quite scarce. Package thermal resistance is ~110C/W. So perhaps package-only could dissipate:
(125C-25C)/110[C/W] = 0.91W
and with some heatsinking on PCB and airflow it should be fine?
I wonder if this train-of-thought is correct?
AI: Your reasoning is fine, and the calculations are correct. You're perfectly right. And 1.1W can be dissipated by a SOT-223 package, provided that you have sufficient copper area under the package. There are some resources on the internet giving hints on the minimum copper area you need for a given power with this package:
TI AN-1028
mbedded.ninja
The only thing that needs clarification is that it's not false advertising. MIC5225 can handle 16V input. It can also handle 150mA. The thing is: you can't have both at the same time with this package, indeed. Either you use it with high input voltage but low current (some applications just need a few tens of mA), or, if you need higher current, you need to have lower input voltage.
Note that you can probably have both conditions true for a very short time, however.
A trick that you can use with linear regulators in such a situation is to use a low-value, high-power resistor in series with the input. This resistor will dissipate most of the heat when the current rises. You just have to calculate its value so that the voltage drop across this resistor, when the current is at its maximum value, is lower than the difference between your input voltage and the minimum required voltage at the regulator input (use Ohm's law). The only drawback would be slightly worse load regulation.
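The junction-temperature check from the question generalizes to a small helper. The thermal resistances are the package figures quoted above; the 60 °C/W case is an assumed improvement from a copper pour, not a datasheet number:

```python
# Junction-temperature check for a linear regulator.
# theta_ja values vary strongly with layout; 60 C/W below is assumed.
def check(v_in, v_out, i_out, theta_ja, t_amb=25.0, t_j_max=125.0):
    p = (v_in - v_out) * i_out        # dissipated power (W)
    t_j = t_amb + p * theta_ja        # estimated junction temperature (C)
    return p, t_j, t_j <= t_j_max

print(check(14, 3.3, 0.1, 235))   # SOT23-5, bare
print(check(14, 3.3, 0.1, 110))   # SOT-223, bare
print(check(14, 3.3, 0.1, 60))    # SOT-223 with copper pour (assumed)
```

Note the bare SOT-223 still overshoots 125 °C at 1.07 W, which is why the PCB copper area (or the series-resistor trick) matters.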
H: High side regulation of current for proportional valve
I'm working with a circuit which is regulating the current over a generic, proportional solenoid valve (L1 in the schematic below). The characteristics of the valve coil is unknown/will vary and the task of the application is to compensate for resistance/inductance variations in the coil caused by temperature changes etc, by means of a PID regulator. Currents are roughly between 200mA to 1A.
We are using high side current sense and a high side driver to achieve this. The current is obtained from a 5V 500Hz PWM signal, which controls a "smart" n-channel high side driver MOSFET. On-resistance shouldn't matter much since the task for the circuit is to compensate for resistance.
The driver uses a raw, non-regulated 24VDC from a vehicle to control the valve. The free wheel diode used is this one.
The high side current is measured over R1 with a high side current sense amp which gives its output as a current between 0 and 1mA. This current is in turn measured over the resistor R4 by the microcontroller, so that 1mA equals the maximum ADC value. (For various reasons ADC ref has to be lower than 5V - in this case it is obtained from an accurate voltage reference.)
simulate this circuit – Schematic created using CircuitLab
The PID regulator is implemented in software and works as it should, compensating for changes in resistance. It does however assume that the inputs and outputs are linear, so the current is only measured once per period, at a fixed point. We've tested that this works by connecting resistors in series with the valve. Everything works fine as long as the valve supply is kept constant.
However, when we change the supply voltage, the regulator will attempt to compensate, but we still notice a non-linear pattern in the output current, much higher than the <1% that can be expected from the LT6106. It can vary as much as 10-20% between 20V and 30V supply.
After much investigation, we came to the conclusion that this is because of some non-linear phenomenon in the coil. At the high-side, the PWM always looks pretty much like a digital square wave, so there's not much to tell from that.
But on the low-side, the curve looks very different depending on supply voltage. We managed to measure this by adding a shunt resistor on the low side, it looks something like this:
22VDC supply
30VDC supply
The above pictures are for the same output, but with the regulator trying to compensate for the change of current caused by the change of supply voltage, hence the different duty cycles.
I'm a software guy, so by no means an expert at electronics, let alone at magnetic fields in coils, please bear with me.
Q1: Is this a known phenomenon and is there a formula I can use in software to compensate for the non-linearity? It is possible for the MCU to measure the supply voltage if necessary.
Since the current is measured on the high-side, everything looks fine there. I can calculate peak or average current, but that's of little use, since apparently the square wave looks nothing like the actual current flowing through the coil.
Q2: How much, if any, impact does the free wheel diode have on the output? Can I change the curve by picking another diode with a different forward voltage, adding series resistors etc?
General feedback on the design is also welcome - I know that the driver IC is obsolete.
AI: The flyback current should be inside the current sense loop. This is a legitimate part of the solenoid current. Not measuring it gives bad input to the controller.
Another source of non-linearity with respect to the supply voltage is due to the D1 voltage drop. This fixed voltage introduces a non-linear element since it doesn't scale with supply voltage. At higher supply voltage, the PWM off time will be less, so the diode on more, making even a larger difference.
D1 should be a Schottky diode. These have about half the voltage drop of a normal silicon diode. The non-linearity will still be there, but less prominent.
The FET switch makes no sense as shown. You don't want to drive the coil from a source follower, as you are doing now. The coil common mode voltage can float arbitrarily, so it makes more sense to use a low side switch between the coil and ground. This gives you real PWM, with the full supply voltage across the coil when the switch is on.
A little feed-forward would probably help. Have the control loop compute a normalized PWM at a fixed supply voltage. Then do the divide using the actual measured supply voltage to determine the actual PWM duty cycle to load into the hardware. This removes supply voltage compensation from the control loop, which then sees a more consistent plant response. The control loop then just handles the little non-linearity that making the duty cycle inversely proportional to the supply voltage doesn't take care of.
I did this with a proportional valve once that had to run from 8 to 20 V supply and it worked very well.
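The feed-forward idea can be sketched like this; the nominal supply voltage is an assumption, and a real implementation would read the measured supply from the ADC each cycle:

```python
# Feed-forward: the control loop works with a duty cycle normalized to a
# nominal supply; the hardware duty is rescaled by the measured supply.
V_NOM = 24.0  # assumed nominal vehicle supply (V)

def actual_duty(duty_norm: float, v_supply: float) -> float:
    d = duty_norm * V_NOM / v_supply
    return min(max(d, 0.0), 1.0)   # clamp to the valid PWM range

# The same normalized command yields the same average coil voltage:
for v in (20.0, 24.0, 30.0):
    d = actual_duty(0.50, v)
    print(f"{v:4.1f} V supply -> duty {d:.3f}, avg {d * v:.2f} V")
```

With this rescaling, the average applied voltage stays constant across the 20-30 V supply range, so the PID loop only has to correct the residual non-linearities (diode drop, coil resistance drift).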
500 Hz seems very low for the PWM frequency. You want the solenoid current to not decay much during the PWM off time. Think of the solenoid current as the average DC with AC PWM ripple on it. Only the former moves the solenoid. The latter just causes wasted power and heating, and adds noise to your current measurement.
The R3 and C1 filter don't make sense considering the 500 Hz PWM frequency you are using. You want to measure the average solenoid current, so the PWM frequency should be strongly attenuated. Your filter rolls off at 4.1 kHz, so isn't going to do much of anything to reduce the 500 Hz PWM ripple. A higher PWM frequency also helps with this.
The only way that what you have would be acceptable is if you're sampling the current at well beyond 1 kHz rate and then doing significant filtering in the firmware. 1 kHz is the absolute minimum just to keep the PWM fundamental frequency from aliasing. However, the PWM signal will have significant harmonics, which is why you would need to sample significantly higher than 1 kHz to get a reasonable picture of the current.
Again, use much faster PWM and much lower filter rolloff. Switching to a Schottky diode as already noted reduces the magnitude of the PWM frequency in the current signal. Even with all that, you may still need to sample at high speed, then digitally filter and decimate. |
H: Is a load needed to measure the voltage drop of a long cable?
Should be a simple yes or no question.
If I attach 20ft of 16AWG cable to a power supply, in theory I should see some voltage drop. Do I need to place a load at the end of the cable to read the actual drop or can I just connect my multimeter to read the end and take that as the voltage available.
AI: To expand on the comments, a long wire obeys Ohms law, just like any other resistance, so V = IR.
With no load, no current flows. I = 0, so V = 0 and there's no voltage drop.
If you add a load, then a current flows, and the voltage drop becomes non-zero and proportional to the current. It's possible to calculate the drop if you know the ohms-per-metre of the cable and the expected current. Don't forget the cable goes both ways, so the drop is double that of one length of cable.
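That calculation is short enough to sketch. The resistance figure for 16 AWG copper (about 4.0 mΩ/ft) is a typical wire-table value and varies slightly between tables; the 5 A load current is an arbitrary example:

```python
# Round-trip voltage drop for 16 AWG copper cable.
# ~4.0 mOhm/ft is a typical wire-table value; exact figures vary.
OHMS_PER_FT_16AWG = 0.004016

def cable_drop(length_ft: float, current_a: float) -> float:
    r_round_trip = 2 * length_ft * OHMS_PER_FT_16AWG  # out and back
    return current_a * r_round_trip

print(f"no load: {cable_drop(20, 0.0):.3f} V")  # zero, as stated above
print(f"at 5 A:  {cable_drop(20, 5.0):.3f} V")
```

So 20 ft of 16 AWG drops nothing with no load, but about 0.8 V at a 5 A load.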
On AC, you might see a small voltage drop in a long enough cable because there will be some capacitive leakage between the cores, and hence a small current. |
H: Coil switching for resonance with an arduino controlled TRIAC
I have a capacitive system which I need to power via resonnance using a coil. Resonance occurs at several frequencies, therefore I need a different coil for each frequency. So far I've manually switched the coil whenever I needed to use the other resonance frequency. However I feel like this process can be automated, so I've designed a shield for my arduino that allows me to do just that, to switch coils using a button or later an automated command coming from the computer to the arduino's serial. (The arduino here is an UNO).
Since it's switching alternating current, I needed to use a TRIAC. The design I've come up with so far looks like this
It works this way: the Signal + connector at the top goes to a function generator with an amplifier, which outputs an AC voltage that can range from 5V to 20V in higher-voltage uses. The Signal - connector at the bottom goes to my capacitive circuit, which is itself connected to a shunt resistor and eventually back to the function generator.
On the shield, as can be seen, is a small 2-pole connector onto which the coil is plugged. There's another coil switcher right next to this one, as I'll be using 2 coils on this system, but I haven't shown it since it's exactly the same as the first one. There's a small opto-TRIAC (the MOC3031M) to isolate the Arduino from the rest and to trigger the actual TRIAC. I drive the opto-TRIAC using a simple MOSFET from the Arduino.
My question now is the following : Is that a good way to control such a system?
What I really only want to do is to be able to select between one coil or the other without having to physically change the component.
AI: One problem with this circuit is to turn off the TRIAC after it has been turned on. To turn off the TRIAC, the voltage between MT1 and MT2 needs to be zero. This circuit has the voltage from the amplifier, which only goes down to 5 V, and whatever voltage is left on the capacitor.
I would use an analog switch instead. |
H: Deriving 5 V and 3.3 V from 12 V using single IC
I am making a college project which requires me to use a 5 V LCD display and a 3.3 V microcontroller.
I am using a 12 V power adapter to power to complete circuit.
I am using LM7805 and LM3940 to generate 5 V, and 3.3V, respectively. Is there any IC that takes in 12 V and spits out 2 or more voltage levels like 3.3 V, 5 V, 9 V etc?
If such ICs exist then what is the IC classified as? (like 7805 is a linear voltage regulator).
The IC I require should do something like this:-
AI: Using a linear regulator of any kind to get 5V from 12V is very inefficient. 3.3V is even worse. To get 0.15A at 5V (0.75W) will waste more than 1W in the regulator and will likely require at least a small heatsink. You could consider using a switching regulator.
One method that is sometimes useful is to derive 5V from the 12V with a switching regulator (or use a 5V source to begin with) and then derive the 3.3 from the 5V with a linear regulator. This works best if the current from the 5V is relatively high (for example for an LCD with backlight which might require 100mA) and the current from the 3.3V supply is relatively low (for example a typical 8-bit micro which might only require 10mA). In that case, an SOT-23 regulator could probably be used. |
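The efficiency argument above can be put into numbers. A linear regulator's efficiency is essentially Vout/Vin, and the currents below are the illustrative figures from the answer (backlit LCD around 150 mA, small micro around 10 mA):

```python
# Dissipation and efficiency of linear regulation: 12 V -> 5 V directly,
# then 5 V -> 3.3 V from that rail. Currents are illustrative figures.
def linear(v_in: float, v_out: float, i_out: float):
    p_load = v_out * i_out
    p_loss = (v_in - v_out) * i_out
    return p_loss, p_load / (p_load + p_loss)

loss_5v, eff_5v = linear(12, 5.0, 0.15)   # 7805 straight from 12 V
loss_33, eff_33 = linear(5, 3.3, 0.01)    # LDO fed from the 5 V rail

print(f"12->5 V:  {loss_5v:.2f} W lost, eff {eff_5v:.0%}")
print(f"5->3.3 V: {loss_33:.3f} W lost, eff {eff_33:.0%}")
```

The 12-to-5 V stage wastes about 1 W at only 42 % efficiency, which is the case for making that first step a switcher; the low-current 3.3 V LDO stage wastes only milliwatts.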
H: Y to Delta transform with inductors?
Does a Y to delta transform for resistors work the same way when inductors a connected in a similar fashion?
simulate this circuit – Schematic created using CircuitLab
I have a load bank that is wired in a Y configuration. I'm given the specs for loading in a delta configuration. I'm trying to figure out if these loads will work.
AI: Y to Δ conversion formula remains the same for all impedances (\$Z\$). In case the impedance is just a resistor, replace \$Z\$ with \$R\$. In the case of an inductor, replace with \$Z=j L \omega\$. In the case of a capacitor, it must be \$Z=1/(j C \omega)\$. In case of combinations of these elements, add them and replace \$Z\$. |
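Since the formula is identical for complex impedances, it can be sketched directly with Python's complex arithmetic. The example impedance (1 Ω resistive in series with j0.5 Ω inductive per leg) is arbitrary:

```python
# Y (star) -> delta conversion with complex impedances.
# Same formula as for resistors, with Z = R + jX.
def y_to_delta(za: complex, zb: complex, zc: complex):
    """za, zb, zc are the Y branch impedances; returns (Zab, Zbc, Zca)."""
    n = za * zb + zb * zc + zc * za
    return n / zc, n / za, n / zb

# Balanced example: 1 ohm resistive + j0.5 ohm inductive in each leg.
zab, zbc, zca = y_to_delta(1 + 0.5j, 1 + 0.5j, 1 + 0.5j)
print(zab, zbc, zca)
```

For a balanced Y, each delta impedance comes out as exactly three times the Y impedance, which is a handy sanity check.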
H: Anti-static home setup
As suggested in an answer to an earlier question of mine, bonding to a laptop can be done by connecting an anti-static wrist strap to the ground of an USB-port. (After disconnecting the AC-to-DC adapter.)
At which locations should there be a 1E6 ohm resistor? My guess: at A and B.
AI: The purpose of the 1M resistor is to protect you from being wired to a solid earth, in the event of another piece of equipment having a live to chassis fault.
Point A will protect you, if you are grounded nowhere else.
Point C would be good as well, as it protects you even if you are touching the (fairly conductive) mat.
The laptop ought to be isolated by its PSU, but do you trust those cheap small hot running things? Put another one at B, they're only a penny each.
Basically one in every lead you own or use will offer you the best protection against shock, however you configure the leads, and will not degrade the protection against ESD. |
H: Software for drawing circuit schematics and SPICE simulations on Mac?
Problem:
I'm trying to find a robust, user friendly, schematic design and SPICE simulation software that will run on a Mac. I have used Multisim for design and simulation in the past.
I'm wondering if there are any equivalent programs out there that I'm not aware of.
Thanks in advance
AI: LTSpice is now available as a native Mac application.
The schematics are not as pretty as in other applications, but the simulation works well.
H: Does the diameter of the coil in a motor determine the amount of volts needed to run it?
I am making a DC motor. I got the wire from a transformer coil, and it is a lot thicker than the wire in your usual DC motor. I want to ensure that the motor runs on 9 volts; the wire is about 1 mm in diameter.
So my question is:
Will it run on more than 9 volts?
AI: Does the diameter of the coil in a motor determine the amount of volts needed to run it?
No. It's a mechanical design consideration; indirectly, it does affect field strength, but that only indirectly links with voltage...
So, Motor design is a complex topic full of tradeoffs.
There are literally dozens of different kinds of electrical motor designs, and every one of them has a different relation between coil diameter, wire resistance/diameter, coil currents, forces and moments, speed, and voltages.
I'm afraid you'll have to dig through a bit of literature on your specific motor type if you need info on how to build a motor to operate at a certain voltage giving you certain operational characteristics (current, torque, speed, reluctance, temperature…). |
H: pulse width modulation versus amplification
If I want to make a sine-wave inverter where the output AC voltage is lower than the DC input voltage, how can I see that using pulse-width modulation will be more efficient than just using a transistor as an amplifier, if I have a sinusoidal gate driving signal at the desired frequency?
I've heard that transistors generally operate efficiently when "on" or "off", but not in an intermediate regime. My impression is that this is true for both BJTs and FETs, but I'm not sure. This rule of thumb is consistent with other things I've learned, like the fact that CMOS integrated circuits tended to be lower power than equivalent TTL chips, and also the fact that switched mode power supplies are efficient and popular. I've never really challenged this idea before (that switching a MOSFET with a given duty cycle is more efficient than using it as an amplifier). I spent about an hour trying to fact check it earlier and didn't manage to get through it.
What I've done so far: Firstly, I decided to focus on MOSFETs. Secondly, as an example I decided to look at a specific n-channel MOSFET, a Toshiba K3767, because I have a LD7550-based switched mode power supply that uses a K3767 as its power transistor. Thirdly, a bit of internet reading tells me the two main losses will be switching losses and conduction losses. So I guess I could do two calculations, one where I switch with a square wave at a high frequency, like 65 kHz as described in the LD7550 datasheet, and another scenario where I drive the K3767 with a sine wave at a low frequency like 60 Hz.
Am I on the right track here? Is there some really obvious answer, like I2R losses will be huge if I use the MOSFET as an amplifier with a sine wave on the gate?
How can I show that rapidly switching a transistor between on and off at a given duty cycle is more efficient than operating it as an amplifier to achieve the same average output?
AI: The power dissipated in the transistor is P=IV where I is the current through the transistor and V is the voltage across the transistor.
When the transistor is fully on, V is small and therefore P is small.
When the transistor is fully off, I is very small and therefore P is very small.
When the transistor is partially on, both V and I are significant, so P is much larger.
Note though that to make a PWM-based system efficient you need the right kind of filter. You don't want resistors in your filter because they waste power, so you would normally use an LC filter. You also need to make sure that when the transistor is off the inductor has a path to discharge, through either a diode or a second transistor.
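The P = IV argument can be put into rough numbers. This is a sketch with assumed values (supply voltage, load current and on-resistance are illustrative, not K3767 datasheet figures), and it deliberately ignores switching losses:

```python
# Rough numbers behind "fully on / fully off is efficient, in between is not".
# All values are illustrative assumptions, not K3767 datasheet figures.
V_supply = 12.0   # supply voltage, V
I_load = 2.0      # load current, A
R_ds_on = 0.1     # assumed MOSFET on-resistance, ohms

# Linear (amplifier) mode, worst case: about half the supply is dropped
# across the transistor while it carries the full load current.
P_linear = (V_supply / 2) * I_load          # 12 W in the transistor

# Switched (PWM) mode at 50% duty: when on, only I^2 * R_ds_on is dissipated;
# when off, essentially nothing. Switching losses are ignored in this sketch.
duty = 0.5
P_switched = duty * I_load**2 * R_ds_on     # 0.2 W in the transistor

print(P_linear, P_switched)
```

Even before adding switching losses back in, the difference of nearly two orders of magnitude shows why the intermediate (linear) regime is avoided in power stages.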
H: Would this circuit work as a binary calculator?
Forgive me if this is a stupid question with a poorly drawn circuit diagram. I've never really made anything electrical before, so this is all very new to me.
I've recently been wondering how transistors can be used to add numbers together, and learned that it is done using logic gates. I thought I'd try to understand how this could be done, by drawing a circuit diagram for the half adder of an ALU. The below image is what I came up with...
For those unaware of how binary addition (for a single digit) works, the intention of this circuit is for neither LED to turn on if neither switch is closed, for only the top LED to turn on if only 1 switch is closed, and for only the bottom LED to turn on if both switches are closed.
This can be represented by the following truth table:
Would this circuit work correctly, in the way I have described? If not, how can I change it so that it would work?
AI: Since you have no carry-in to worry about, what you need here is just a half-adder. The LED for the less significant bit will connect to its normal output, and the LED for the more significant bit will connect to its "carry".
Using normal logic gates, a half adder consists of an XOR to produce the normal output, and an AND gate to produce the carry:
simulate this circuit – Schematic created using CircuitLab
Now, that leaves only building an XOR and AND gates. If you're building this out of discrete parts so you don't care a lot about performance, and do care about simplicity, you might want to use DTL. An extremely simplified DTL AND gate might look something like this:
simulate this circuit
Note: in practical use, you rarely want to use DTL, because it's pretty slow. Also note that this excludes a couple of resistors that are normally included for level shifting to ensure that the output of one gate can drive the input of another gate. Since you're never cascading gates in this particular case, you can probably get away without that, but if you look up what a DTL AND gate looks like, chances are it'll be more complex than what I've shown here.
I'll leave the XOR for you, but the basic idea is pretty simple: start by designing at the logical level, then design the individual gates (or just use some pre-built logic ICs, of course).
Making things work tends to work in the reverse direction: once you've designed a gate, make it work in isolation. When you have all your gates working individually, connect them together and get a larger circuit to work. Repeat as needed until your whole circuit works. |
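Before wiring anything, the logic level of the half adder can be sanity-checked in a few lines: the sum output is XOR, the carry is AND. This matches the truth table described in the question:

```python
# Half adder at the logic level: sum = A XOR B, carry = A AND B.
def half_adder(a, b):
    return a ^ b, a & b

# Print the full truth table: (A, B) -> (sum LED, carry LED)
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(a, b, "->", s, c)
```

Note that carry is 1 (and sum is 0) only when both inputs are 1, exactly the "both switches closed lights only the bottom LED" behaviour the question asks for.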
H: LF-RFID demodulation
I'm trying to read an LF (low frequency) RFID transponder. It works in the following manner: it uses load modulation. The receiver (the base station) generates the carrier, and the transponder modulates it through magnetic coupling (like an electrical transformer). The transponder (or tag) being read switches a load (resistance), which changes the carrier's amplitude, as illustrated in the schematic below (it is only an illustration; the actual circuit is a bit more complex).
simulate this circuit – Schematic created using CircuitLab
The generated signal was obtained by an oscilloscope and is presented in the following explicative picture:
The information being modulated, as can be seen, is very weak compared to carrier's amplitude. The carrier frequency is 125kHz and the base band signal has a 20kHz bandwidth.
I assume that amplitude demodulation would be the right approach here. At first, I tried sampling and filtering to obtain the modulating signal from the transponder. I tried to sample at different rates (250 kHz, 500 kHz and 900 kHz), designed the low-pass filter as a windowed sinc with a Kaiser-Bessel window and a -3 dB frequency of 30 kHz, and tried different numbers of taps (coefficients): 8, 12, 16 and 32.
But the sample rate and the number of taps made almost no difference.
I read a suggestion to perform downconversion. So I tried to perform downconversion by sampling, based on the fact that sampling generates replicas in the frequency domain. It is illustrated in pictures "a" to "c" below.
The carrier frequency (Fc) is 125 kHz, so I used a 62.5 kHz sampling frequency, which is Fc/2. But it doesn't work: when sampling is performed at 62.5 kHz, a kind of bouncing appears in the signal that is not present when I sample at higher rates, as can be seen in the figure above.
I do not know why this bouncing appears. I know what aliasing is, but I thought aliasing would only appear in the analysis of the data collected by the A/D converter, whereas this picture was obtained by an oscilloscope. In other words, I thought the data collected by the oscilloscope would not be affected by the sample rate of the A/D converter of another circuit that has nothing to do with the oscilloscope.
Anyway, this bouncing is affecting the filtered signal and the filter doesn't remove it. So I gave up on downconversion via sampling and tried to perform downconversion another way. I know that I can achieve this by multiplying the modulated signal by a generated wave with the same frequency and phase. This way, there will be a replica near DC, so I can perform filtering at a lower frequency.
But again I'm having problems. I'm using a dsPIC33EP; its sampling rate can reach up to 1 MHz and the DSP core can reach up to 70 MIPS. But it doesn't have a mixer or a PLL, so I can't generate a sine wave with the same frequency and phase as the carrier to multiply with the signal in order to perform downconversion.
I saw some ways to generate a tuned sine wave using Taylor series or look-up tables. But what caught my attention was a trick I found in another forum: instead of generating the whole sine wave, they use only 4 points {1, 0, -1, 0} to downconvert to an intermediate frequency. They call this "DSP Trick: Complex Downconverters for Signals at Fs/4 or 3Fs/4"; it is briefly explained at dspguru.com.
Anyway, a question arises: to perform downconversion this way, I need to multiply the modulated signal by a generated signal, or by the 4 values representing the points of that signal (as explained in the page referenced above), and only after that (the multiplication) perform sampling. How can I sample after multiplication, since I need to sample the modulated signal first in order to perform the multiplication?
I do not know what to do anymore. Any help will be very appreciated.
AI: Your root problem is likely "First, I tried to apply the sampling and filtering to obtain the modulating signal."
Your first step to obtain the modulating signal should be envelope detection.
You should also be aware that your ratio of carrier frequency to data rate (125k/20k) is fairly low. This is a key issue because the resonant circuits in RFID often have narrow bandwidth. (Due to their high Q-factor)
So check that your resonant circuits (TX and RX) have sufficient bandwidth to pass your data signals without too much distortion! You can do that by modulating switch SW1 in your diagram, increasing the modulation rate from a low value and noticing where the received signal power drops to 1/2 (or the amplitude drops by a factor of 1.414).
If at all possible, start with a very low data rate, e.g. 1k bits/s. That will make your life a lot easier and quickly provide valuable insight. Then start tweaking up the data rate! |
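The envelope-detection step recommended above can be sketched in a few lines: rectify the AM signal, then low-pass filter it. The sample rate, modulation depth and test data rate below are assumed values for illustration, not measurements from the question:

```python
import math

# Envelope-detection sketch for a load-modulated 125 kHz carrier.
# fs, fm and the 5% modulation depth are assumed test values.
fs = 1_000_000   # sample rate, Hz
fc = 125_000     # carrier frequency, Hz
fm = 1_000       # assumed slow test modulation, Hz
n = 2000

# AM signal with a small modulation depth, as seen with load modulation
sig = [(1.0 + 0.05 * math.cos(2 * math.pi * fm * i / fs))
       * math.cos(2 * math.pi * fc * i / fs) for i in range(n)]

# Envelope detector: rectify (abs), then a simple one-pole low-pass filter
alpha = 0.05
env = 0.0
envelope = []
for s in sig:
    env += alpha * (abs(s) - env)
    envelope.append(env)

# After settling, the envelope tracks roughly (2/pi)*(1 + 0.05*cos(2*pi*fm*t)),
# i.e. the slow modulation is recovered without any carrier-phase reference.
```

The point of doing it this way is that no mixer or carrier-locked sine generator is needed, which sidesteps the dsPIC's lack of a PLL.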
H: Fixing electrical controller (with joystick) on Pride Quantum 600 power wheelchair
The momentary switch is bad on a Pride Quantum 600 power wheelchair. The picture is below. It turns on, but you have to hold the rubber button/switch in the correct spot, at the right temperature, before it will turn on. What could be the problem, or the cheapest way to fix this? The part to replace the entire controller (what you see in the picture) is $1750. The part number says J6, but the Jazzy and Quantum use the same controller.
Pride J6 Joystick Controller with Flying Leads (Part #: ELEASMB5009)
AI: After a lot of investigation and internet searching, I came across this YouTube video, which was very helpful in understanding what was inside.
https://www.youtube.com/watch?v=5UYn138cKCI
This did not solve the issue, but it helped me understand what to look for. In the video, he mentions you need a T10 security torx screwdriver. It actually requires a T10 torx (normal) screwdriver before removing the shell of the controller.
To solve the issue, I ended up buying a new controller keypad on Ebay for $33.90. MAKE SURE TO UNPLUG THE WIRES IN THE BACK OF THE WHEELCHAIR BEFORE REPLACING.
Pride part # RECPART1061
Merits part # P75736
Shoprider part # P75736
Keypad for 6 Key Button VSI Joystick Controller Pride, Merits,
Shoprider. NEW
As you can see, the on/off switch was worn down to the nub! See top switch/button in the first image.
Finished product!
If you need to buy a new controller, purchase a used one with the following name, not the part number shown in the question, as suggested by my durable medical provider. Pride (the manufacturer) will not let you ask questions unless you have an account with them, so you have to trust the staff at a distributor, who in my case were wrong and would have charged me the copay on a $1750 controller with the wrong part number. The serial number of the chair did not give them the correct part number, so never trust that. Always look at the number on the actual component!
Correct part name:
6 Key 50 Amp VSI Joystick Controller with Flying Leads
Here's some detail if the keypad is bad, and you need to replace the entire controller.
Pride Mobility: CTLDC1419
PG Drives Technology: D50693.01
Compatible Models :
Jazzy 600
Jazzy 610
Jazzy 1103
Jazzy 1103 Ultra
Jazzy 1143 Ultra
Jazzy J6
Pride J6 |
H: What is the rating current of zener 1N702
I'm trying to solve a problem which requires the rated current of the 1N702 diode. In the book, the author uses a 1N961 zener for the examples and states that the rated current \$I_z = 12.5\$ mA. In the exercises, however, the author specifies a 1N702. The question is
Design a 7.5-V standard voltage source using a zener diode if the
supply voltage is 20V. Use a 1N702 diode.
The solution is
$$
R = \frac{V_{dc}-Vz}{I_z}
$$
I can also determine the dissipating power but I need the current first.
AI: The 1N702 was a 400mW zener diode. Izt is 5mA for 1N702~1N707.
This is a particularly horrible example, by the way. A 2.6V zener operated at 5mA will make a really lousy regulator.
Distinguish between Izt (the current at which the zener voltage is guaranteed) and the maximum zener current, which will be limited by power dissipation. It will be a bit hard to estimate for the 1N702 because the voltage will be much higher than 2.6V at the maximum current, so the latter will be well under the 80mA you might expect were the voltage to be constant with changing current.
P.S. I really have to wonder whether the part number is a typo and they really intended to specify a 1N711 7.5V zener, which would make a lot more sense. |
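Taking the suggestion above that a 1N711 (7.5 V, Izt = 5 mA) was probably intended, the book's formula works out as follows. This is a worked illustration only (no load current is assumed), not a complete design:

```python
# Worked numbers for R = (Vdc - Vz) / Iz, using the 1N711 values
# suggested in the answer (7.5 V, Izt = 5 mA). Load current ignored.
V_dc = 20.0     # supply voltage, V
V_z = 7.5       # zener voltage, V
I_z = 0.005     # zener test current, A

R = (V_dc - V_z) / I_z           # series resistor, ~2500 ohms
P_zener = V_z * I_z              # dissipation in the zener, ~37.5 mW
P_resistor = (V_dc - V_z) * I_z  # dissipation in the resistor, ~62.5 mW
print(R, P_zener, P_resistor)
```

Note that 37.5 mW is comfortably below the 400 mW rating, which is consistent with operating the diode at its test current rather than near its power limit.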
H: How to find the resistor using the equivalent resistance?
How would I simplify this circuit so I can find the missing resistors using equivalent resistance?
AI: Break the circuit into simpler components. As an initial step, replace the 3 R's in parallel with R/3 resistor. Re-draw it and then simplify and then proceed in similar fashion. |
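That first step can be checked numerically. The 300 Ω value below is just an assumed example; three equal resistors in parallel always combine to R/3:

```python
# Equivalent resistance of resistors in parallel. Three equal resistors
# give R/3; the 300-ohm value is an assumed example.
def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

print(parallel(300, 300, 300))  # three equal 300-ohm resistors -> ~100 ohms
```

The same helper then applies at each simplification stage as you collapse series and parallel groups.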
H: Why is the current in an X-ray tube in milliamps?
If we look at an X-ray generating tube, the voltage applied to the tube is in kV, but the tube current is in milliamps. Why is this current in mA, not in amps?
AI: An X-ray tube might require 100kV. A current of only 1A would be 100kW, which is a fairly large amount of power even for a water cooled tube with rotating target.
The efficiency using a tungsten target would be less than 1%, so most of the power goes into heat.
There's no need to produce more X-rays than required; however, CT scans may require a power input on the order of 100 kW. They can also expose the patient to a relatively large amount of radiation.
Along similar lines, a particle accelerator beam of just a few tens of nA can carry a great deal of power if the particles are energetic enough (say hundreds of MeV). |
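The scale of the numbers is easy to check: beam power is simply P = V·I. With assumed round values:

```python
# Power delivered to the anode is simply P = V * I.
tube_voltage = 100e3   # 100 kV
tube_current = 1e-3    # 1 mA, the order of magnitude actually used
power = tube_voltage * tube_current
print(power)  # 100 W at 1 mA; at a full 1 A it would be 100 kW
```

So even at milliamp currents the anode absorbs substantial power, which is why rotating targets and active cooling are needed.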
H: Why are there sensors that encode their readings as serial sequence of PWMs?
This is a comment by @curious_cat, but I think it deserves the attention of an actual question:
The sensor used in the original question outputs a serial PWM-style signal like this:
What's the advantage for a sensor to encode its output in this way rather than the more conventional 4 to 20 mA (I'm used to seeing process sensors use this all the time) converted to digital at the receiving end?
Is this more accurate?
Alternatively why not go HART / PROFIBUS etc. ?
Follow-up questions moved to separate question: Are there standards for digital sensor links?
AI: the more conventional 4 to 20 mA
More conventional is a very relative term, and it seems you might be coming from a process control background, where sensor signals are often processed and converted to a current internally in the sensor. Let me assure you that it's not the most common thing in the world.
What's the advantage […] rather than […] converted to digital at the receiving end?
Well, a high-resolution ADC in the microcontroller is a luxury. The sensor in question has a 14 bit output; finding a microcontroller with a 14 bit ADC would increase your material cost.
Is this more accurate?
Yes. 14 bit means \$2^{14}\$ possible values. Let's say the voltage signal would have a full amplitude range of 0 V to 5 V. In that case, your voltage step would be \$5\cdot2^{-14}\,\text V\approx0.3\,\text{mV}\$. That's very little! Interference, temperature variations and noise in your ADC will be a multiple of that, unless you can very closely control a lot of things, which will make your system very, very complex and expensive.
So: Whenever you need digital values at the end, convert analog to digital as early as technically feasible.
Alternatively why not go HART / PROFIBUS etc. ?
Because buses like these are really complicated to implement on both the sensor and the controller, and if you're really just attaching the sensor to a microcontroller on the same PCB, why make things complicated, and costly?
The four-states state machine I showed that is able to receive this kind of signal might seem complicated to someone not overly used to embedded development, but imagine you'd have to write a full PROFIBUS endpoint. Good luck! |
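The resolution argument above, as a quick check. The 0 V to 5 V full-scale range is the assumed example from the answer:

```python
# Step size (LSB) of a 14-bit conversion over an assumed 0-5 V range
full_scale = 5.0   # assumed full-scale voltage, V
bits = 14
lsb = full_scale / 2**bits
print(lsb)  # about 0.0003 V, i.e. roughly 0.3 mV per step
```

Keeping analog noise and drift below a fraction of that 0.3 mV step is what makes an analog 14-bit link so much harder than a digital one.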
H: KiCad pin name to net name
Is there a method in KiCad to quickly map a pin name to a net label?
Example.
On the picture below, I want to name the wires with the pin names, as has already been done with GND. I'm currently doing this manually for each pin and am wondering if there is a smarter way.
AI: There is a patch floating around for that, but it's not finished, so it is doubtful it will be included in version 5.
Normally, I just label those pins where I'm not going to draw a wire, then I attach the label directly to the pin.
For GND, just attach the GND power symbol, which is a lot cleaner than actually connecting everything with wires -- same for the supply voltages. All power symbols with the same name are connected, same as all labels with the same name. |
H: Is 'hot-wiring' two contacts a proper way to bypass a broken button?
I have a broken cell phone, and its power button is not working at all.
Examining the device's motherboard I have discovered the button has five pins. As seen on the manufacturer's service manual, the connections look like this:
My theory is that it could be possible to join the four ground pins into one and create an equivalent circuit by placing two wires out of the motherboard: the first coming from GND and the second from the positive side of the chip (the one on the right of the picture).
With this setup, would connecting the two wires one to each other bypass the broken button action and simulate a press to power on the device?
AI: You can bypass the switch in the way you suggest.
You would need to temporarily connect the switch's Pin 1 to a GND node. Any ground connection should work; you don't need to use any of the specific connections soldered to the switch. Also, no need to connect the grounds together.
On the switch, Pin 1 and Pin 2 get connected when the button is pressed (obviously). Pins 3, 4, and 5 are connections to the switch's chassis, and are provided to ground the switch case. This acts as a shield against EMI.
Be warned: soldering to a cell phone motherboard can be very tricky. It's easy to make it worse instead of better. Also, as mentioned in the comments, the short wires may act like antennas at cell frequencies. I expect this won't be a problem, but it could cause strange stuff to happen.
If you decide to proceed, I recommend placing the phone in Airplane Mode as soon as you can. |
H: Are there standards for digital sensor links?
Following this question, @curious_cat had more questions in comments that deserve a proper Q here:
Is there any standard encoding that has evolved on the digital encoding side? Just as 4 to 20 mA is de facto standard on the analog side especially in process control?
So I wonder, in applications where rapid change is crucial, say airplane control, avionics etc. what sort of protocols have become the convention? Do they use 4 to 20 mA or PWM or dedicated buses?
AI: As @PlasmaHH commented:
XKCD: Standards by Randall Munroe, CC-BY-NC 2.5-licensed
So, no. There's not a standard, mostly because everyone has a different idea of what's the best way to do something specific with a lot of freedom in implementation. Part of the problem certainly is a lack of willingness to sit together to standardize on the side of semiconductor manufacturers – which is certainly why saying "SPI" isn't enough, but you still have to specify clock polarity and signal/clock phase – giving you four non-interchangeable, yet easily convertible SPI "versions".
Note that being the creator of a specific bus for a specific system might have enormous commercial advantages; for example, if you hold the patent describing a unique feature of that bus, you can both get rich by licensing that technology to other companies, and exclude competitors from your market. So, inventing a bus is often done either to exclude a competitor, or in an attempt to enter a market without using someone else's patented technology. |
H: Calculating the core area for a low frequency transformer part 2
I posted a question a while ago about calculating the core size of a transformer when the frequency is relatively low (<50Hz). Thank you to everyone for the comments and feedback. I have done some more research and would like to post a follow up question outlining my thinking. I would really appreciate any comments.
I am interested in calculating the core size for a 20 Hz transformer. Single phase, step down. This is a theoretical question - I understand that low frequency transformers require core sizes that make them impractical, but I am just trying to understand how the core size would be calculated. For the purpose of understanding this, I have treated the transformer as a “perfect” transformer with no core losses - the thinking being that core losses could be factored in after I understand how the core size is worked out. (if this is a mistake, please let me know)
My primary coil has 400 turns and 8.05429 V.
My secondary coil has 50 turns
The formula
(Voltage in primary / turns on primary coil) = (Induced voltage in secondary / turns on secondary coil)
gives the induced voltage in the secondary as 1.00678625 V, with a turns ratio of 400:50 = 8:1
I wanted to then use the following formula-
induced voltage = 4.44*f*N*A*B
Where
f= frequency in Hz
N= number of turns in the coil
A= cross-sectional area of the core in square metres (m²)
B= flux density in the core in Tesla
(I have done some FEMM simulations that give me a value for flux density in the core)
My questions:
1.
I know the saturation point of the core material, so can I switch the formula
induced voltage = 4.44*f*N*A*B
to
B = induced voltage/(4.44*f*N*A)
and then play around with plugging in values so that B stays safely below the saturation point of the core material (e.g. 1.8 T)?
2.
How does the turns ratio of 8:1 influence the values for the induced voltage and the number of turns that I need to put into the above formula? At first I thought I should just use the primary coil values, then I thought I should add the primary and secondary together. I have chased my own logic around like a dog chasing its tail and now lie in a dizzy heap!
If anyone has any advice here, I surely would appreciate it.
Thanks
AI: I know the saturation point of the core material
Core saturation has nothing to do with secondary current, or even the extra current drawn by the primary when the secondary is supplying current to a load. This is because the H fields generated in the core by the load currents in the primary and secondary totally cancel each other. It doesn't sound intuitive, but it's true!
Core saturation has to do with the small magnetizing current that flows into the primary when the secondary is, in effect, removed.
So you don't need to switch the formula because whatever you do, and under whatever load conditions, the flux in the core remains the same for no-load and fully loaded conditions.
In fact, due to leakage inductance and copper losses, the core flux reduces on higher secondary load currents.
How does the turns ratio of 8:1 influence the values for the Induced
Voltage and the Number of Turns that I need to put in to the above
formula?
It doesn't affect core saturation
At first I thought I should just use the primary coil values,
then I thought I should add the primary and secondary together.
It's just the primary (driven winding) that produces flux and hopefully, that should be clear by now. |
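Since only the driven (primary) winding matters for flux, the rearranged EMF equation uses primary-side values only. The core area A below is an assumed example value, chosen just to show the calculation:

```python
# B = V / (4.44 * f * N * A), rearranged from the transformer EMF equation.
# Only primary-side values are used; A is an assumed example core area.
V = 8.05429   # primary voltage from the question, V
f = 20.0      # frequency, Hz
N = 400       # primary turns
A = 2.5e-4    # assumed core cross-section, m^2

B = V / (4.44 * f * N * A)
print(B)  # peak flux density in tesla; keep it safely below saturation (e.g. 1.8 T)
```

In practice you pick A (or N) so that B lands comfortably under the material's saturation point; with these example numbers B comes out around 0.9 T.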
H: MSP430 LaunchPad Keypad
I tried to use my MSP430G2553 LaunchPad with a keypad using
http://www.pjrc.com/teensy/td_libs_Keypad.html
http://bildr.org/2011/05/arduino-keypad/
http://www.instructables.com/id/Using-the-sparkfun-12-Button-keypad-with-the-ardu/
http://www.ti.com/lit/an/slaa139/slaa139.pdf (I just tried the schematic
with Energia code)
http://little-scale.blogspot.com/2011/10/how-to-connect-keypad-to-arduino-simple.html
(prints blank lines in the same row; buttons don't work except 2, 5, 8 and 0,
and they print "y")
#include <Keypad.h>
const byte ROWS = 4; //four rows
const byte COLS = 3; //three columns
char keys[ROWS][COLS] = {
  {'1','2','3'},
  {'4','5','6'},
  {'7','8','9'},
  {'#','0','*'}
};
//byte rowPins[ROWS] = {5, 4, 3, 2}; //connect to the row pinouts of the keypad
//byte colPins[COLS] = {8, 7, 6}; //connect to the column pinouts of the keypad
byte rowPins[ROWS] = { P1_5, P1_4, P1_3, P1_2 };
// Connect keypad COL0, COL1 and COL2 to these Arduino pins.
byte colPins[COLS] = { P1_0, P1_7, P1_6 };
Keypad keypad = Keypad( makeKeymap(keys), rowPins, colPins, ROWS, COLS );
void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.print("Serial test\n");
  delay(100);
  char key = keypad.getKey();
  if (key != NO_KEY) {
    Serial.println(key);
  }
}
None of them works; they only print "Serial test", and that's only when I press RESET. I tried using the latest Keypad library from the Arduino site as well as the libraries from those tutorials.
Do I need a different way to wire up the keypad to the LaunchPad, or a different library?
AI: P1_2 is the RX pin for Serial.Since you call Serial after you
instantiate the Keypad things will get messed up.
Source
Using another pin instead of that fixes the problem. |
H: BJT H-bridge going in reverse but not forward, LL transistor overheating
I've built one of these BJT H-bridges to control a DC motor in an old R/C car of mine, but I'm having problems with it and wanted a few tips to help me debug the board.
The motor is a 2-wire DC with about 1.2A stall current.
Below is my schematics.
Note that I've used TIP102 and TIP107 in the actual board, as presented in the original design. I've replaced them with TIP122 and TIP125 in the schematics because those were the closest parts I could find in the Eagle CAD libraries. All other components I used match the schematics exactly.
Below is my board.
The truth table for the inputs FWA, REV and ENA is shown in the link I provided above, a bit down in the page. In summary, assuming ENA (ENAble, which is active low) is always low (active) we have the following situations:
FWA=HIGH, REV=LOW - The motor gets positive voltage and current, thus going forward.
FWA=LOW, REV=HIGH - The motor gets negative voltage and current, thus going in reverse.
I omitted the other 2 cases (braking and coasting) for brevity.
The board works fine in reverse (situation 2 in the list above). It runs at full speed as it should. The upper-left (UL) and lower-right (LR) transistors heat up a little while the lower-left (LL) and upper-right (UR) transistors remain cold, just as expected.
The problem I'm having is that it doesn't run forward. In that configuration of inputs (situation 1 in the list above), the motor doesn't run at all and the LL transistor alone heats up quickly.
Here's what I know and measured:
Vin = 12 V; the HIGH logic level is 5 V, while LOW is 0 V.
With the motor terminals disconnected, I get +Vin at the terminals with the inputs configured to move the motor forward. With the inputs in reverse, I get -Vin at the terminals, which is expected.
What could be wrong with my board? What other measurements should I make to determine if there are faulty components, short circuits or bad solders or contacts somewhere?
Edit: I found the problem, thanks to Dave Tweed's comment about double-checking the resistor values. The answer was that I switched a pair of the 1K and 10K resistors when routing the board and assembled them incorrectly, thinking they all followed two lines of resistors of equal values. The 1K resistor is highlighted in the image above.
AI: Double-check that you have installed the correct value of resistor in each location.
BTW, the traces that are carrying the heavy currents between the two lower connectors and the four transistors look awfully wimpy to me. In the next revision of the board, beef up (and shorten) those traces as much as possible. |
H: Common Base Amplifier
I am thoroughly confused on this topic. How does "common base" even make sense? If the base is grounded then the transistor is turned off, so how does the input reach the output?
AI: The fact that the base is common to the input and output signals does not imply that the base is at the lowest voltage (assuming npn). Compare with common collector (aka emitter follower): the collector is at the highest voltage, not at the lowest!
In the common base configuration the base is held at a fixed voltage, the input is applied to the emitter, the output is taken from the collector. The input signal will be loaded heavily (input impedance is very low), output impedance is high, current amplification is ~ 1, voltage amplification is very high.
This configuration is sometimes used in HF stages.
With some hand-waving the long-tailed pair (input stage of an opamp) with one input fixed can be seen as an emitter follower + a common base. |
H: How can USB extension cords affect of USB charging?
I want to use USB extension cords to extend the USB power cords for various devices (specifically, an iPad and Android phones). When adding a USB extension cord to a closed system, where the manufacturer's power cord/adapter connects a device to a 120 V outlet, how might charging behavior change, and what factors affect it? (The "charging behavior" I am most interested in is charging speed and long-term effects on the device battery.)
Sidenote:
I thought this question might be too "end-user" for this site. But I think it adheres to the site guidelines, as it concerns "modifying [consumer] electronics for other uses" and the answer would concern their design.
AI: There will be a voltage drop across the cable. Let's make a test case.
length of extension cable: len = 2 m
gauge of power wires in the cable: 24 AWG, which is typical for USB cables
worst case charge current: I = 2 A, however this can vary a lot between various portable devices. The range is between 0.5 A and 2.5 A.
24 AWG has a resistance of 25.67 Ω per 1000 feet, or about 0.084 Ω/m. Both the power and the ground return leads need to be taken into account, so the total resistance is R = 2 × 2 m × 0.084 Ω/m ≈ 0.337 Ω. The voltage drop is V = IR ≈ 0.67 V per Ohm's law.
Nominally, the spec for the USB supply voltage is 5.00±0.25 V. Lower supply voltage would probably not degrade your battery, but it may prevent your charger from working in the first place. Try to get the extension cable with a heavier wire, if your charging current is high. |
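Recomputing the drop directly (25.67 Ω per 1000 ft works out to roughly 0.0842 Ω/m) makes it easy to try other cable lengths or currents:

```python
# Voltage drop over a 2 m USB extension at 2 A, 24 AWG conductors.
# 25.67 ohms per 1000 ft is roughly 0.0842 ohms per metre.
length_m = 2.0
r_per_m = 0.0842   # 24 AWG resistance, ohms/m
current = 2.0      # worst-case charge current, A

r_total = 2 * length_m * r_per_m   # power lead plus ground return
v_drop = current * r_total
print(r_total, v_drop)  # ~0.34 ohms total, ~0.67 V dropped
```

That 0.67 V drop eats the entire 5.00 ± 0.25 V tolerance budget, which is why heavier-gauge extensions matter at high charge currents.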
H: Power line voltage out of phase with ambient 60Hz electrical noise - why?
I just bought my first oscilloscope and have been tinkering with the measurement of different signals just to learn the ropes and familiarize myself with the scope.
One of the first things I noticed was the ambient 60Hz "noise" that is picked up when a probe is connected to the scope. It is obviously originating from the electrical lines in the building, and I can verify this by holding the probe in my hand and placing my other hand nearer or farther from an electrically powered object or cable - the amplitude of the noise increases or decreases respectively.
Then I probed the mains voltage to see how clean the incoming power is. Next, I viewed the aforementioned 'noise' signal on one channel and the mains power on the other channel.
What I saw was surprising and I don't understand the reason:
The mains voltage shows a clean 60Hz sine wave. The 'noise' signal is a jagged sine wave, still obviously 60Hz but with a lot of static. However, the two signals are out of phase, by what appears to be approximately 90 degrees if my measurement is correct. I would have expected the phase of the mains power to match the phase of the noise.
I was concerned my scope had some sort of delay between sampling the two channels, so I switched the inputs and still saw the exact same phase difference. Attached is a screen capture showing the two signals (red = mains, yellow = noise).
A little more research shows that transformers can cause phase shift between the input and output sides. (TI pdf)
Is this what I'm observing? or is something else in play?
If I'm missing a basic point, please help me understand it or at least guide me in the right direction.
AI: That looks like an almost perfect 90 degree phase shift. The difference between the two channels:
the mains measurement is directly coupled, therefore no phase shift;
the 'noise' signal is coupled through the capacitance between the mains wires and your body, which together with the high (but not infinite!) input impedance of your scope forms an RC phase-shift (high-pass) filter. Well below its corner frequency, such a filter shifts the phase by nearly 90 degrees.
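You can check that near-90° figure numerically. A sketch with illustrative values (the 1 MΩ scope input is typical; the few picofarads of body-to-mains coupling capacitance is an assumption, but any value this small gives essentially the same answer):

```python
import cmath
import math

# Phase of an RC high-pass filter: Vout/Vin = jwRC / (1 + jwRC).
# Well below the corner frequency f_c = 1/(2*pi*R*C), the phase
# approaches +90 degrees.
R = 1e6       # scope input resistance, ohms
C = 5e-12     # assumed body-to-mains coupling capacitance, farads
f = 60.0      # mains frequency, Hz

w = 2 * math.pi * f
h = (1j * w * R * C) / (1 + 1j * w * R * C)
print(f"phase shift: {math.degrees(cmath.phase(h)):.1f} degrees")  # ~90
```

With these values the corner frequency is around 32 kHz, so 60 Hz sits far below it, which is exactly the regime where the phase lead saturates near 90°.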
H: How to monitor data exchange on single wire?
I have two modules that communicate over a single wire. Where would I even start about monitoring the data without introducing any kind of interference to both modules?
One of the modules has to be deleted/bypassed and my own controller will be simulating that module. However, the second module does not work unless it communicates with first module.
AI: You should be able to use a USB-based logic analyzer just fine. They are designed to 'snoop' in on signals without influencing them.
If you look at the specifications of, for example, the Saleae Logic, you can see the input impedance is noted as: 1Mohm || 10pF
This is a fairly high impedance, and that's good. A low input impedance would load down the signal you are trying to examine. The resistive component of the impedance will not change with frequency, but the capacitive component is frequency dependent. It probably doesn't matter in your case, but it's something to keep in mind. As the frequency of the input signal goes up, the logic analyzer will load the signal more and more.
Another way to describe this behavior is provided here, which comes from an Agilent app note.
The logic analyzer probe has a high input impedance. The probe-tip
circuitry consists of a tip resistor on the order of 20 kΩ. At low
frequencies, the probe impedance will look like this resistance. As
the frequency rises, parasitic capacitance in the probe will start to
lower its impedance. The impedance will roll off following a standard
RC response. This could present problems for the target system; as the
probe impedance begins to approach the system impedance, the voltage
divider formed with the probe becomes substantial. A low impedance
will absorb the majority of the signal and cause system failure. |
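To see when that roll-off starts to matter, you can compute the magnitude of the probe's input impedance at a few frequencies. A sketch using the Saleae figures quoted above (1 MΩ in parallel with 10 pF):

```python
import math

# |Z| of a probe modeled as R in parallel with C.
def probe_impedance(f, r=1e6, c=10e-12):
    zc = 1 / (2j * math.pi * f * c)   # capacitor impedance, -j/(wC)
    return abs((r * zc) / (r + zc))   # parallel combination

for f in (1e3, 100e3, 1e6, 10e6):
    print(f"{f/1e6:7.3f} MHz: {probe_impedance(f)/1e3:9.1f} kOhm")
```

At 1 kHz the probe still looks like nearly 1 MΩ, but by 1 MHz it is down to roughly 16 kΩ, and it keeps falling by a decade per decade of frequency, which is the loading effect the app note warns about.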
H: Datasheet or thermal resistance for this heatsink
Can someone please help me find a datasheet, or at least the thermal resistance, for the following TO-220 heatsink?
My local supplier doesn't have more info about it. They just marked it with these numbers, whose meaning I don't know: K6439 and 183001/15 (15 is the heatsink height in mm). It looks like it's made of aluminium.
AI: Going to Farnell's heatsink search page and entering your dimensions and "TO220" as search parameters, I can't find the exact model, but there are bunches with thermal resistances in the order of 20-25C/Watt. Yours is a reasonable quality aluminium extrusion so better than some; I would use 20C/W as a reasonable figure unless you can find the actual data.
Which means at 7.5W from yesterday's question ... 150C expected temperature rise...
The other option is airflow, of course...
Theoretical approach: you could re-run those numbers (get a datasheet from a proper heatsink!) and see what airflow would make it work, and if a small fan (another datasheet) could push air that fast.
Experimental approach: attach a thermocouple (I got one with a £10/$15 multimeter - no longer sold but it came from Maplin) to the heatsink, and measure the temperature as you draw more current. (Start at 0.25 A on one regulator, given your other question.) Point a CPU fan at it, add ducts, re-measure the temperature. Repeat until satisfactory.
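The estimate behind those numbers is a thermal "Ohm's law": temperature rise = power × thermal resistance. A sketch of the full junction-temperature calculation (the junction-to-case and interface figures below are hypothetical placeholders; substitute your regulator's datasheet values):

```python
# Thermal "Ohm's law": delta_T = P * R_theta.
# Junction temperature = ambient + P * (Rjc + Rcs + Rsa).
P     = 7.5    # dissipated power, W (from the earlier question)
R_sa  = 20.0   # heatsink-to-ambient, C/W (estimate from this answer)
R_jc  = 4.0    # junction-to-case, C/W (hypothetical datasheet value)
R_cs  = 0.5    # case-to-sink interface, C/W (hypothetical)
T_amb = 25.0   # ambient temperature, C

T_junction = T_amb + P * (R_jc + R_cs + R_sa)
print(f"heatsink rise: {P * R_sa:.0f} C")     # 150 C, as in the answer
print(f"junction temp: {T_junction:.0f} C")   # far beyond safe limits
```

Forced airflow works by slashing R_sa, often by a factor of three or more, which is why the fan-and-thermocouple experiment above is worth doing.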
H: Difference between natural response and forced response?
Reference
Second post on EdaBoard.com
Time response of a system is the time evolution of the variables. In
circuits, this would be the waveforms of voltage and current versus
time.
Natural response is the system's response to initial conditions with
all external forces set to zero. In circuits, this would be the
response of the circuit with initial conditions (initial currents on
inductors and initial voltage on capacitors for example) with all the
independent voltages set to zero volts (short circuit) and current
sources set to zero amps (open circuit). The natural response of the
circuit will be dictated by the time constants of the circuit and in
general roots of the characteristics equation (poles).
Forced response is the system's response to an external stimulus with
zero initial conditions. In circuits, this would just be the response
of the circuit to external voltage and current source forcing
function... continue reading
Questions
How can there even be a natural response? Something has to be inputted to create an output? The way I see it is like turning off the main water line and then turning on your faucet and expecting water to come out.
How can v(t) (from the link above) be solved for if we don't know dv/dt in order to find the natural response?
If you can please expand on the 2 concepts (natural response and forced response) by explaining their differences in Layman's terms, it would be lovely.
@Felipe_Ribas Can you please confirm this and answer some of the questions? (you can just edit this directly if you want)
Given an equation 10dy/dt + 24y = 48, meaning: rate of change of output + 24 * output = 48. The initial conditions are y(0)=5 and dy/dt=0.
That would mean that the input is 48/(24*5). Is that a correct assumption? The solution to that is 0.4, which is the constant input?
AI: Think about a simple mechanical system like an elastic bar or a block attached to a spring against gravity, in real world. Whenever you give the system a pulse (to the block or to the bar), they will begin an oscillation and soon they will stop moving.
There are ways that you can analyze a system like this. The two most common ways are:
Complete solution = homogeneous solution + particular solution
Complete response = Natural response (zero input) + forced response (zero state)
As the system is the same, both should result in the same final equation representing the same behavior. But you can separate them to better understand what each part means physically (especially the second method).
In the first method, you think more from the point of view of a LTI system or a mathematical equation (differential equation) where you can find its homogeneous solution and then its particular solution. The homogeneous solution can be viewed as a transient response of your system to that input (plus its initial conditions) and the particular solution can be viewed as the permanent state of your system after/with that input.
The second method is more intuitive: natural response means what is the system response to its initial condition. And forced response is what is the system response to that given input but with no initial conditions. Thinking in terms of that bar or block example I gave, you can imagine that at some point you pushed the bar with your hands and you are holding it there. This can be your initial state. If you just let it go, it will oscillate and then stop. This is the natural response of your system to that condition.
Also you can let it go but still keeps giving some extra energy to the system by hitting it repeatedly. The system will have its natural response as before but will also show some extra behavior due to your extra hits. When you find your system complete response by the second method, you can see clearly what is the system natural behavior due to those initial conditions and what is the system response if it had only the input (with no initial conditions). They both together will represent all the system's behavior.
And note that the Zero State response (Forced response) also may consist of a "natural" portion and a "particular" portion. That is because even with no initial conditions, if you give an input to the system, it will have a transient response + permanent state response.
Example response:
imagine that your equation represent the following circuit:
Which your output y(t) is the circuit current. And imagine your source is a DC source of +48v. This way, making the summation of element's voltage in this closed path, you get:
\$\epsilon=V_L+V_R\$
We can rewrite the inductor voltage and resistor voltage in terms of current:
\$\epsilon=L\frac{di}{dt} + Ri\$
If we have a power source of +48VDC and L = 10H and R = 24Ohms, then:
\$48=10\frac{di}{dt}+24i\$
which is exactly the equation you used. So, clearly your input to the system (RL circuit) is your power supply of +48v only. So your input = 48.
The initial conditions you have are y(0) = 5 and y'(0) = 0. Physically this represents that at the moment t=0, the circuit current is 5 A but it is not varying. You may think that something happened previously in the circuit which left a current of 5 A in the inductor. So at that given (initial) moment it still has those 5 A (y(0)=5), but the current is not increasing or decreasing (y'(0) = 0).
Solving it:
we first assume the natural response in the format: \$Ae^{st}\$
and then we will find the system behavior due to its initial condition, just as if we had no power supply (\$\epsilon=0\$) which is the Zero-Input response:
\$10sAe^{st} + 24Ae^{st} = 0\$
\$Ae^{st}(10s + 24)=0\$
\$s=-2.4\$
So,
\$i_{ZI}(t)=Ae^{-2.4t}\$
Since we know that i(0) = 5:
\$i(0)=5=Ae^{-2.4(0)}\$
\$A=5\$
\$i_{ZI}(t)=5e^{-2.4t}\$
Note that until now everything is consistent. This last equation represents the system response with no input. If I put t=0, I find i=5 which correspond to the initial condition. And if I put \$t=+\infty\$ I will find i=0 which also makes sense if I do not have any source.
Now we may find the particular solution to the equation which will represent the permanent state due to the power supply presence (input):
we assume now that \$i(t)=c\$ where \$c\$ is a constant value which represents the system output in the permanent state since the input is also a constant. For each system, the output format depends on the input format: if the input is a sinusoidal signal, the output also will be. In this case we have only constant values which makes things easier.
So,
\$\frac{di}{dt}=0\$
then,
\$48 = 10\cdot0 + 24c\$ (using the differential equation)
\$c=2\$
\$i(\infty)=2\$
which also makes sense because we have a DC power supply. So after the transient response of turning the DC power supply ON, the inductor will behave as a wire and we will have a resistive circuit with R=24Ohms. Then we should have 2A of current since the power supply has 48V across it.
But note that if I just add both results to find the complete response, we will have:
\$i(t) = 2 + 5e^{-2.4t}\$
Now I have messed things up in the transient state, because if I put \$t=0\$ we will no longer find \$i=5\$ as before. And we must find \$i=5\$ when \$t=0\$ because it is a given initial condition. This is because the Zero-State response also has a natural term, which is not there yet and has the same format as the one we found before. Adding it:
\$i(t) = 2 + 5e^{-2.4t} + Be^{st}\$
The time constant is the same so it only left us B:
\$i(t) = 2 + 5e^{-2.4t} + Be^{-2.4t}\$
And we know that:
\$i(t) = 2 + 5 + B = 5\$ (t=0)
So,
\$B=-2\$
Then, your complete solution is:
\$i(t) = 2 + 5e^{-2.4t} - 2e^{-2.4t}\$
You may think of this last term we found as a correction term of the forced response to match the initial conditions. Another way to find it is imagining the same system but now with no initial conditions. Then, solving all the way through again, we would have:
\$i_{ZS}(t) = 2 + Ae^{-2.4t}\$
But since we are now not considering the initial conditions, i(0) = 0, so at t = 0:
\$i_{ZS}(0) = 2 + Ae^{0} = 0\$
\$A=-2\$
so the forced (Zero-State) response of your system is:
\$i_{ZS}(t) = 2 - 2e^{-2.4t}\$
It is a bit confusing but now you can view things from different perspectives.
-Homogeneous/Particular solutions:
\$i(t) = i_p(t)+i_n(t) = 2 + 3e^{-2.4t}\$
The first term (2) is the particular solution and represents the permanent state. The rest of the right side is the transient response, also called the homogeneous solution of the equation. Some books also call these the Forced response and Natural response, since the first part is the forced part (due to the power supply) and the second part is the transient or natural part (the system's characteristic). This is the fastest way to find the complete response, I think, because you only have to find the permanent state and a natural response once. But it may not be clear what is representing what.
-Zero input / zero state:
\$i(t) = i_{ZS}(t)+i_{ZI}(t) = 2 - 2e^{-2.4t} + 5e^{-2.4t}\$
Note that it is the same equation, but with the second term split in two. Now the first two terms (\$2 - 2e^{-2.4t}\$) represent the Zero-State response. In other words, what would happen to the system if there were no initial current and you turned ON the +48V power source.
The second part (\$5e^{-2.4t}\$) represent the Zero-Input response. It shows you what would happen to the system if no input was given (power source remained in 0v). It is only an exponential term which would go to zero since it has no input.
Some people also call this the Natural/Forced response format. The natural part would be the Zero-Input response and the forced part would be the Zero-State response, which by the way is composed of a natural term and a particular term.
Again, they all will give you the same result which represents the whole situation behavior including the power source and initial conditions. Just note that in some cases it might be useful to use the second method. One good example is when you are using convolutions and you may find the impulse response to your system with Zero-State. So breaking those terms might help you to see things clearly and also using an adequate term to convolve. |
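The decomposition above is easy to verify numerically: the zero-input and zero-state responses should sum to the complete response at every instant, and the complete response should satisfy the original differential equation. A quick check of the formulas derived in this answer:

```python
import math

TAU = 2.4  # from the characteristic root s = -24/10 = -2.4

def i_zero_input(t):   # natural response to i(0) = 5 A, source off
    return 5 * math.exp(-TAU * t)

def i_zero_state(t):   # response to the 48 V source with i(0) = 0
    return 2 - 2 * math.exp(-TAU * t)

def i_complete(t):     # homogeneous + particular form: 2 + 3e^(-2.4t)
    return 2 + 3 * math.exp(-TAU * t)

for t in (0.0, 0.5, 1.0, 3.0):
    total = i_zero_input(t) + i_zero_state(t)
    assert abs(total - i_complete(t)) < 1e-12
    # Check the ODE 10*di/dt + 24*i = 48 using the analytic derivative.
    didt = -3 * TAU * math.exp(-TAU * t)
    assert abs(10 * didt + 24 * i_complete(t) - 48) < 1e-9

print("zero-input + zero-state matches the complete response")
```

At t = 0 the sum gives the required 5 A, and as t grows it settles to the 2 A permanent state, matching both boundary conditions discussed above.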
H: What is a Nokia FAID?
Nokia is a mobile phone brand, but what is an FAID? I tried searching but no definition came up. I'm guessing it's a security feature checked by the hardware in order to function properly.
AI: I searched for it in the context of firmware updates for Nokia phones and came up with this:
The reference states that when the firmware has been altered this happens:
Your phone should power on at this point but it will keep on restarting and will not receive a signal.
And further explains why and how to solve it:
[...] when you alter the flash memory within your phone there are certain security checksums that need to be updated. The checksum that we need to update is called the FAID (Flash Authority ID) [...] it is quite a simple process! |
H: Looking for suggestion on microcontroller
I took a course on using microcontroller of Intel 8051 long times ago. Recently, I am going to develop a module to control precise-timed TTL output and my instructor suggests to use microcontroller (MCU) instead of something like 555 timer. I am planning to get one for trial but I really don't have much experience on that.
My project is to create a time sequence with intervals of about 0.01 to 0.5 microseconds with a function generator, which requires the sampling frequency of the function generator to be about 5 to 100 MHz. But using a function generator is not portable, and I cannot program the function beyond creating the timed sequence. I cannot generate that sequence with a PC because the minimum time interval for an interrupt is 1 ms, which is too long for generating such short pulses. Also, more importantly, even if I generate 1 ms-interval pulses with a PC, since I don't have a real-time system, it doesn't generate the pulses at the expected interval precisely. Do you think an MCU's interrupts are real-time, i.e. how precise would they be if I wanted to generate pulses at, say, 0.1 ms intervals?
I am also thinking of an extreme case with a super short pulse (1 ns); does any sort of MCU support that short an interval?
My last question is about programming the chip. I remember needing a flash burner and a compiler to feed the code into the 8051 while I was taking my course, but the lab had a development board so everything went in one step. I wonder how this works in practical development. Do I have to buy something like a flash writer to write the code to the chip? The layouts of the chips differ from manufacturer to manufacturer, so does that mean I have to buy a different flash writer for each model? My budget for the chip (and writer if needed) is about 20 to 40 bucks. Any model suggestions are highly welcome.
AI: Modern dev or evaluation kits tend to come with all the bits needed to get the MCU programmed and running. Some are more generous than others, working with many different microcontrollers, while others will only work with the specific MCU or line of MCUs they come with. And they are a lot cheaper than they used to be, thanks to a larger pool of potential buyers and USB being, well, universal and very cheap to design for.
One specific line that I like is the Texas Instruments ecosystem of Launchpads. They have the ValueLine MSP430s, small general purpose microcontrollers. There is the USB F5xxx Launchpad, the bigger brother of the Valueline, and allows for USB Peripheral programming. They have the Tiva C and Hercules ARM Launchpads, if you want to move more towards low end computing microcontrollers. A variety of them, and all come with the FET debuggers for programming and debugging, and between 5 and 30 dollars depending on which one you get. |
H: Soldering a VGA cable - number of wires doesn't match
I decided to open up a VGA cable which started to produce only red / blue / green colored image. I thought that inside I will find something like shown on this scheme:
Instead I found that my VGA cable is a more complex one. Also, the wire colors do not match the ones in the scheme above:
It has 11 wires instead of just 6. My cable has yellow and cyan cables for V-sync and H-sync but then there are these colours (white, gray, brown, black, dark red and bright red) which I don't know where to solder. I need an advice.
AI: You actually have 14 wires if you count the ground return for each colour; it may use one of those on pin 5 (GND) as in your diagram to complete the set of 15. The other pins are often used for communications. For example, from VGA connector on Wikipedia, some of the ones your diagram shows as not connected are:
Pin 4 - ID2/RES - formerly Monitor ID bit 2, reserved since E-DDC
Pin 9 - KEY/PWR - formerly key, now +5V DC
Pin 11 - ID0/RES - formerly Monitor ID bit 0, reserved since E-DDC
Pin 12 - ID1/SDA - formerly Monitor ID bit 1, I²C data since DDC2
Pin 15 - ID3/SCL - formerly Monitor ID bit 3, I²C clock since DDC2
The reference to DDC refers to the Display Data Channel that the host can use to query the monitor for its capabilities such as supported resolutions and refresh frequencies. So while I believe in many cases they could be left disconnected with the loss of that functionality they won't all be additional ground signals as you mentioned in a comment.
If you're feeling experimental (and there are no guarantees this won't cause damage) because I'm not sure if the colours are 100% standard I'd try the following:
Connect the RGB center wires to pins 1/2/3 with the shields going to pins 6/7/8 respectively.
Short pin 8 to pin 5 to get the additional ground.
Connect the other wires as per your diagram leaving unknown ones disconnected
Now see if you can get an image on the projector by forcing a resolution you know that it supports because you won't have the DDC channel. If that works OK and it's not a problem leave it at that. Otherwise you could probably identify the remaining pins by measuring between ground and the DDC2 pins 12 / 15 using a scope during initial negotiation to check for the square clock versus data. The 5V to go to pin 9 should be easy enough to spot and is presumably present the whole time. |
H: Maximum distance for AVR UART on 3.3V
I need to connect two boards using the UART of an AVR. I wanted to know the maximum length of the cable between the two boards if VCC is 3.3 V. I don't want to use RS-232 here.
This question didn't help
AI: I can't answer as to the maximum length, but at 9600 baud I wouldn't expect problems at a metre or two, let alone 10cm. Just ensure there is a good ground wire connecting the two boards.
And that assumes it's permanently wired, and working in a typical lab environment, or your home, rather than in the engine compartment, or in orbit or on a mountain top.
Even in the lab, if you plan to unplug and reconnect it regularly, add that info to the question and someone will advise on suitable ESD protection. |
H: What is this go-no go function in some oscilloscopes and how can I use it?
I have seen some oscilloscopes with this option (go-no go function), but I don't know what it is.
I couldn't find anything about it in my searches.
What is it?
AI: See Mask Limit Testing
Capture a signal from a known working system, and [the oscilloscope] will draw a mask around it with your specified tolerance. Connect the system under test, and [the oscilloscope] will highlight any parts of the waveform that fall outside the mask area.
This can be used to check for intermittent problems.
From Agilent
Some scopes have a pass/fail output that can be used in automated testing. |
H: Design an amplifier with maximum voltage gain
Problem description is as follows: Design an amplifier with maximum voltage gain without output saturation.
The following diagram was given:
I've tried doing a T-model for the first part that goes up to C2, and I got the following:
Vo=gmVgs(RE1)
Vin=gmVgs(1/gm)
Resulting in the following gain:
Av = vo / vi = gm*RE1
I have a feeling this result doesn't seem right.
For the sake of honesty, to say that I'm not sure what direction to take or how to attack this problem would be an extremely mild way to put it.
Another issue (?) I'm having is that I'm looking at the manufacturer's datasheet for the MOSFET I'm using, and there is no value for the transconductance parameter kn*(W/L), which I need when calculating the drain current in DC analysis, and 'gm' in small-signal analysis.
AI: If the real question is "fill in the component values to get maximum output voltage without clipping for the given input signal.", then this question can't be answered except in the specific case of making C4 zero (removing it). Can you see why? Also, you don't have any choice over the DC gain. You can make the AC gain higher than 1, but if you're stuck with this circuit topology you really can't make it all that predictable. Can you see why that is?
If I was given this problem, I'd probably write an answer explaining why this topology is unreasonable for achieving the goal, which is also too vague to be a real spec. I'd back that up with a detailed analysis and critique of the circuit, explaining exactly what the problems are, what important specifications are missing, and why a predictable result is not possible. Maybe I'd flunk the question, or maybe I'd get extra credit. However, I'm not going to engage in irresponsible engineering just to tell the instructor what most will assume he wants to hear.
I think what the instructor wants you to realize is first what gain you actually need, then how to adjust the component values to achieve that gain.
First let's look at the gain required. The input is only specified to be "10 mV". That leaves a lot unsaid. I'd probably take that as being a AC 10 mV RMS signal. When looking at amplifier clipping, it's the peaks that matter. The first thing you should think about is what peak to peak voltage is implied by "10 mV" in this case. I'd consider that to be at least a sine wave, but if it's something like a audio signal you will need some headroom past what a sine wave would peak at for the given RMS voltage. Think about this carefully. What peak to peak input voltage are you going to assume?
Next, look at what the output can do. Q2 is biased by ground, so the extent of the collector voltage will be roughly from ground to the 10 V supply. For good design, I'd probably leave a little room so that the worst case peak-peak input resulted in about 9Vpp out.
From the above two analyses, you should be able to decide the voltage gain this amplifier needs. As others have noted already, the first stage is an emitter follower, which will have a voltage gain of basically 1. It can be used to lower the impedance of the input signal, but it's not going to provide any voltage amplification. Do all the parts really need to be there? Put another way, consider that you could specify part values as 0 (short) or infinite (open). Consider what exactly RC1 is doing for you, for example.
All the voltage gain is going to come from the second stage (the circuit around Q2). C2 helps in that it decouples the DC operating points of the first and second stages. Can you see how you don't really have much choice over the DC voltage gain of this stage, and how you don't have any choice at all over the DC gain of the whole amplifier? In fact, you're not going to get more than a DC gain of about 1 from this stage. This may not be so obvious. Take a look at it carefully and tell us why.
C4 can affect the AC voltage gain, but can you make that predictable and not a function of the frequency?
Overall, this is a much more tricky problem than others seem to be taking it as. I suspect that the instructor messed up, or there are additional specifications you haven't given us. Or possibly the instructor wants you to find the problems in what looks like a straightforward assignment at first glance.
H: Simplest way to measure 98 to 140 Ohm range with an ADC?
I have a thermistor that should vary between 98 and 140 Ohm and am looking for the simplest circuit that will convert this into something meaningful for a 10 bit ADC in the 0-3.3V range.
Precision wise, 0.4 Ohm or so on average would be great, and let's just ignore the nonlinearity issue altogether. I'd rather save some circuitry than shoot for optimal results (and there will be a lookup table).
I have (only) +3.3V of input voltage, plenty of current, a dual op-amp that is hopefully useful somehow, and a bunch of resistors and some other crap. Suggestions?
AI: The simplest way is to use a resistor pullup (or pulldown) matched to the thermistor range to achieve the maximum voltage range output. 140Ω / 98Ω is a ratio of 1.43. To get the maximum response with this being one of the resistors of a voltage divider, we want to divide that range in half, which means taking the square root of the ratio. Sqrt(1.43) = 1.20. This means the center value of the voltage divider should be when the thermistor is 1.20 times its minimum, which is also its maximum divided by 1.20, which is 117 Ω. The nearest common value of 120 Ω will be close enough to still give you basically the maximum possible output.
So now we have:
The R2-R1 voltage divider divide ratio will change as a function of temperature as R2 changes. C1 is there only to reduce noise. You know a thermistor just can't change that fast, so it will reduce some of the high frequency content that you know can't be real signal. In this case, it will start attenuating above around 250 Hz, which is well above what any ordinary thermistor can do.
The next step is to figure out what voltage range you will get. This is just solving the divider for the two extreme cases, which are 120/(120 + 98) and 120/(120 + 140). Multiplying these by the 3.3V input, we get 1.82 V and 1.52 V, for a total range of 293 mV.
If you just run the voltage divider output straight into the A/D input, then you will be using 8.9% of the range, or about 91 counts. If 1 part in 91 is good enough, then you don't need to do anything further.
To get better resolution, you can amplify this signal about the midpoint of half the supply voltage. To bring it to a full scale signal, you'd need a gain of 3.3V / 293mV = 11. It's good to leave some headroom and not force the opamp to go completely rail to rail, so a gain of 8 or so would be good. That would give you lots more A/D counts of the temperature range than the accuracy of the parts can support. |
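The numbers in this answer can be reproduced in a few lines (which resistance extreme maps to which temperature depends on your thermistor; the 98 Ω and 140 Ω endpoints are simply plugged into the divider):

```python
V_IN  = 3.3     # supply voltage, V
R_FIX = 120.0   # fixed divider resistor, ohms

def v_out(r_therm):
    # Output taken across the fixed resistor, thermistor on top.
    return V_IN * R_FIX / (R_FIX + r_therm)

v_lo, v_hi = v_out(140.0), v_out(98.0)
span = v_hi - v_lo
counts = round(span / V_IN * 1023)   # counts used on a 10-bit ADC

print(f"range: {v_lo:.2f} .. {v_hi:.2f} V  (span {span*1000:.0f} mV)")  # ~293 mV
print(f"ADC counts used without amplification: {counts}")               # ~91
print(f"gain needed for full scale: {V_IN / span:.1f}")                 # ~11
```

This confirms the answer's figures: about 1.52 V to 1.82 V out of the divider, roughly 91 raw counts, and a gain of around 11 to fill the converter range (with 8 being a comfortable choice that leaves headroom).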
H: Is an instrumentation amplifier design same with a standard op amp?
Is an instrumentation amplifier nothing but a standard op-amp in a different configuration? I wonder if the complicated inner circuit of an instrumentation amplifier is the same as a standard op-amp's.
At one forum I read: "An instrumentation amplifier is nothing more than a standard op-Amp configured in a differential mode with isolation buffering
to the inverting input and non-inverting input." Is that true? I mean, is the only difference the configuration, or the inner circuitry? Does that mean any op-amp can be used as an instrumentation amplifier?
But here at page 9 of pdf document it says: "In contrast, if a standard op amp amplifier circuit were used in this application, it would simply amplify both the signal voltage and any dc, noise, or other common-mode voltages. As a result, the signal would remain buried under the dc offset and noise" : here is the link: http://www.analog.com/static/imported-files/design_handbooks/5812756674312778737Complete_In_Amp.pdf
I'm asking about the IC part of the amplifier, i.e. whether it is manufactured the same way.
AI: An instrumentation amplifier is more than just an opamp hooked up as a differential amplifier. The inamp puts buffers in front of each of the diff amp inputs. This presents a high impedance to the outside, and also eliminates the cross impedance between the inputs of a bare diff amp.
In theory you could make an inamp with three opamps, but in practice that still won't be as good. A real inamp benefits from all the components being on a single die, which helps in matching thermal characteristics and balancing the gain of each input to get better common mode rejection. Precision inamps go through a laser trimming process in production to get the absolute and relative gains just right.
H: Why do we need the MOESI/MESIF protocols?
As I understand it, those two protocols add an extra state to identify which cache should respond to a miss request from another cache for a particular cache line.
But, in the MESI protocol, only one cache can have a cache-line 'A' in the modified state. The other caches can have 'A' in the invalid state or not at all in the cache.
Instead of adding an extra state, can we not just make the cache which has the requested cache-line in the modified state respond to a miss request generated by another cache for said cache-line?
Additionally, is there another motive for introducing the MOESI/MESIF protocols?
AI: MOESI allows more than one cache to have a copy of a block that is dirty (relative to main memory). MOESI provides all four states one would expect for the two binary traits present-in-only-one-cache/possibly-present-in-multiple-caches (M,E/O,S) and needs-to-be-written-to-memory-on-eviction/can-be-invalidated-without-writeback (M,O/E,S). (The invalid state excludes all four of those states.)
Under traditional use of MESI, if a cache miss hits a Modified block in another cache, the cache block would be written to memory, marked as Invalid in the providing cache, and marked as Exclusive in the requesting cache. Obviously, the M state could be transferred to the requesting cache (avoiding the writeback to memory), but the providing cache would still need to invalidate the cache block so multiple caches could not hold the block at the same time.
Avoiding writebacks can reduce the use of main memory bandwidth. (Alternatively, more flexible scheduling of writebacks which can distribute spurts of memory activity or exploit the lower cost of multiple accesses within the same DRAM row.)
The addition of the Forward state is intended to reduce the amount of coherence traffic by having at most one cache respond with data. (The Owned state provides a similar advantage but only if the "shared" block is dirty relative to memory.) On a cache miss, a request can be broadcast and any cache with the block in M, E, or F state can provide the data. With MESI, every cache with the block in Shared state would respond with the data.
(If the interconnect between caches was sufficiently loaded that there was buffering delay, this delay might be sufficient to check if the cache[s] at a network node had a block in a single-provider state [M, E, F], allowing a read request broadcast not to be sent further. Other, more practical optimizations are likely possible. "MESIF: A Two-Hop Cache Coherency Protocol for Point-to-Point Interconnects" might provide more information. [I have not read that paper, and my understanding of MESIF is even less than my understanding of MOESI.])
H: Calculating the voltage generated by a 40 kHz Ultrasonic Transceiver
In particular, this device: http://www.hexamite.com/hx40str.htm
Electrical Specifications HX40STR
----------------------------------------------
Parameters Values Units
Operating Frequency 40 kHz
Input Voltage 42 (max) Volts (p-p)
Output (SPL) @20V 112 db
Receive (Sensitivity) -60 db/V/Ubar
Impedance 300 ohm
Beam Angle ±12 degrees
Bandwidth 1 kHz
Settling Time 5 mS
Temperature -40 to 100 °C
Assume the scenario where two transceivers are used, T1 as transmitter and T2 as receiver, 125 mm apart. (Assume no attenuation over distance.)
Output SPL at 20V => 112 dB.
Converting pressure to uBar (because sensitivity is in uBar):
P = pressure in uBar
112 = 20 log (P/0.2e-3)
P = 0.2e-3 * 10 ^ (112/20)
P = 79.621 uBar
Input Sensitivity => -60 dB re 1 V/uBar
S = Sensitivity in V/uBar
-60 = 20 log (S/1)
S = 10 ^ (-60/20)
S = 1e-3 V/uBar
Final Voltage V = P * S
V = 79.621 uBar * 1e-3 V/uBar
V = 0.079621 V
However, looking at the graph "G3, Amplitude vs Excitation"
"Above: A continuous excitation voltage oscillating at 40 kHz is applied to T1. The resulting amplitude at T2 is plotted above."
We see that the output voltage at 20 V drive is 0.6 V. This also excludes the attenuation over the 125 mm spacing, which I've calculated would decrease the output from 0.079 V to ~0.075 V.
What am I missing in my generated voltage calculation?
Why is there a discrepancy?
AI: I don't know for sure, but one possibility might be that the quoted SPL measurement has a condition attached, such as a measurement distance. It ought to, and a common distance for SPL measurements is 1 metre.
In which case the voltage measured at only 125mm distance could be expected to be greater.
If acoustic power varies as the inverse square of distance, then voltage will vary as the inverse of distance, and this would lead to 8x the expected voltage at 1/8 the distance.
H: DC-DC 12V converter using switching voltage regulators: heatsink dimensioning and board design
To replace my overheating linear regulator DC-DC converter, I've designed this new board. This time I used a couple of switching 12V regulators (LM2575-12V). I'm using two 1 amp regulators because my local supplier doesn't have a 2 amp version.
Here are the schematics.
Here's the board.
There are no pictures of the board because it doesn't exist yet. This time, I did the math first and didn't blindly build the thing.
The red arrows highlight the single-point grounding I tried to implement, as recommended in the datasheet. Is that design correct?
Here is my math for calculating the temperature rise from ambient temperature. According to the LM2575 datasheet, the power dissipation is calculated as follows:
$$P_D = V_{in} \cdot I_q + \frac{V_o}{V_{in}} \cdot I_{load} \cdot V_{sat}$$
$$V_o = 12V, V_{in} = 18V, V_{sat} = 1.4V, I_q = 10mA, I_{load} = 1A $$
$$P_D = 0.18\,\text{W} + 0.93\,\text{W} = 1.11\,\text{W}$$
Again, from the LM2575 datasheet, its thermal resistance (junction to ambient, worst case) is 65 °C/W, which would keep the regulators just below 100 °C at an ambient temperature of 25 °C. If I use my heatsinks (20 °C/W) together with the regulator's junction-to-case thermal resistance of 2 °C/W, that would keep the regulators under 50 °C.
My questions are:
1. Are my heat dissipation calculations correct?
2. Is my board designed correctly, especially regarding the required single-point grounding for the regulator pins? These points are marked by the red arrows on the board image.
AI: That 65 °C/W is for a SOCKETED part with no PCB heatsinking copper. If soldered, with an appropriate copper layout, it goes down to 45 °C/W or less, junction to ambient.
As for your heatsinks, they are ~20 °C/W, and adding the junction-to-case rating of 2 °C/W gives 22 °C/W total. At 1.2 W that's a rise of about 26 °C, so 25 °C ambient + 26 °C ≈ 51 °C junction temperature.
Key points, Look at notes 8 - 11 on page 7 of the pdf, and consider the board layout (you have tons of empty board spacing no need to have everything so close together).
Page 19 also has good information:
HEAT SINK/THERMAL CONSIDERATIONS
In many cases, no heat sink is required to keep the LM2575 junction temperature within the allowed operating range. For each application, to determine whether or not a heat sink will be required, the following must be identified:
1. Maximum ambient temperature (in the application).
2. Maximum regulator power dissipation (in application).
3. Maximum allowed junction temperature (150°C for the LM1575 or 125°C for the LM2575). For a safe, conservative design, a temperature approximately 15°C cooler than the maximum temperature should be selected.
4. LM2575 package thermal resistances θJA and θJC.
But then you realize the LM2575 is characterized for operation over the virtual junction temperature range of -40°C to 125°C. At 1.2W (I'm rounding up a bit) and worst-case 65°C/W junction to ambient, that's a 78°C rise, or about 103°C junction temperature at 25°C ambient, still comfortably below its maximum operating temperature. Worst case (socket, no proper PCB copper sizing, no heatsink) and you're still good to go. **Rearrange the traces and throw on your heatsink, and you're golden. You might need to move the L1/L2 inductors or the heatsink won't attach right. Ideally, you would have the Ground Pin 3 connected directly to the large ground plane.**
Just bear in mind, I hope you have selected the right layout for the 2575 you are getting, as it comes in multiple versions.
Finally, TI has the Switchers Made Simple software here: http://www.ti.com/ww/en/simple_switcher_dc_dc_converters/index.html?DCMP=simple_switcher&HQS=switcher that can help (though the LM2575 is not included). Also, this article http://store.curiousinventor.com/blog/pcb-as-a-heat-sink-calculating-trace-width-for-given-current can help give you some ideas.
H: Flash Memory on an ASIC
Are flash memory and ROM also considered integrated circuits? And so, is it possible to embed a flash memory or a ROM into an ASIC?
AI: A ROM which is constructed on a monolithic device would be an integrated circuit. A ROM which is constructed using discrete decoders, multiplexers, and diode matrices (such things existed long before integrated circuits were ever constructed) would not be.
It is extremely common to embed ROM within an ASIC; it is somewhat less common to embed flash memory. Flash memory is very useful in things like microcontrollers, since it allows a single kind of stock microcontroller part to be sold to many different customers for many different applications. Many ASICs are made for sale to a particular customer for particular requirements, and in many cases that customer will know what is required. Consequently, it is far more common for an ASIC to be paired with a "masked" ROM whose contents are permanently established when the chip is manufactured. If a device isn't going to be upgradable after it is sold, the only advantage of a flash device over a masked ROM would be the ability of the manufacturer to make firmware revisions between the time a chip is manufactured and the time the containing device is sold. In many cases, it will be more cost-effective for the manufacturer to invest extra time and effort making sure the final firmware is ready before the device is built than to spend the extra manufacturing costs of making the chip upgradable after the fact.
H: How to understand the functions of a Therm-O-Disc 12S20 H24V?
I am an electrical novice and have a Therm-O-Disc 12S20 H24V (original image) relay that I'm attempting to understand.
The only meaningful documentation I've been able to find is this, 12S, 14S, 15S Series Time Delay Relays and Sequencers. Page 3 of the PDF (recorded as page 7 in the footer) of the documentation shows a table which indicates on and off timings in seconds. Could someone explain what this means?
Also, how would I determine what the control voltage is to activate the relay?
AI: You did your homework! Thank you for getting this datasheet out.
This is not a 'normal' relay. It is of a time delay type.
I believe your relay will perform an action after a certain time delay. Internally the device seems to compensate for different operating temperatures so that you get consistent delays. You can determine the pinout from the part number, 12S20 H24V, on page 9.
The control voltage in your case is the 'standard' (as per datasheet) 24VAC. They have data for 120VAC, 240VAC and 277VAC as well for those different models.
The time delay is specified somewhere in the part number. You really need to contact the manufacturer because this datasheet is somewhat vague. The only hint I saw was
A variety of standard timings are available for general time delay applications.
You appear to have a relay that delays between 22 and 55 seconds to turn on and between 15 and 45 seconds to turn off. All of the information appears to be marked on the device.
For more details please contact the manufacturer.
Their information |
H: Bringing compiled device to market (and how to power)
(Preemptive apology if this is the wrong place to ask; pointers to the right place appreciated)
I've thrown together a device using a Raspberry Pi, a touchscreen (plugs into the Pi) and a small monitor-type thing which has its own power cord and brick. For the time being I've placed everything into a small RadioShack project box.
Right now, there is an extension cord with 2 outlets on it that runs into the box, which the Pi and the other device plug into. The idea is that only 1 cord needs to be plugged in for power; I would much prefer if multiple power cables weren't needed.
This device is not so much to sell for now, but as an adjunct to an existing (very small; read: no budget) business. I have no idea where to go from here. Several google searches for "selling electronic device" etc have provided no help.
I'm trying to determine if there are regulatory or certification requirements that this would fall under. FCC? UL? CE?
NB: I'm in the US with no immediate plans to use the device overseas.
Where would be a good place to ask for details? If the answer is lawyer up, what kind of lawyer should I look to talk to?
Also, are there resources I could look to to try and find a better way to power the devices? The Pi is powered via mini-usb and a transformer that came with it (think iDevice power brick). The block did not fit into the project box as it was, so I had to disassemble it. This makes me nervous because fire. What would be the recommended way to break out power from a single cable from wall to multiple devices?
AI: Definitely DO NOT dismember the AC adaptor and embed it in your device. Use a UL-approved wall-wart power supply (or an iDevice power block) as-is to provide power to your Pi, and provide power to the second device from the Pi box.
If you run 120VAC into your box, you will need regulatory approvals. The reason so many things use external power supplies these days is so that the device manufacturer doesn't need to worry about UL or similar regulations - the power supply manufacturer takes care of that.
H: How to modify a router's antenna?
I have tried to extend the antenna of my router in order to increase its range. I added a wire a few cm long. There was, however, no change in the range. The power (in dBm) was the same. Why is that?
AI: You can replace the antenna with one that has higher gain. There are several types, but in order to get high gain the antennas will have a pretty narrow beam pattern (they will be directional).
A simple way to get some additional gain without changing your antenna is to use a reflector, for example http://www.binarywolf.com/249/diy-parabolic-reflector.htm
The reflector I am talking about looks like this:
H: Is it possible to buy high pass filters that work in the 100 kHz range?
I want a device that rejects 100 kHz but lets through 150 kHz. I tried building a simple passive first-order circuit but the roll-off wasn't sharp enough. Most standard electronics vendors don't seem to sell anything below 1 MHz.
Alternatively, I could also use a frequency tripler that accepts input at 50 kHz - again, most vendors didn't seem to have any that worked below 1 MHz.
AI: Use a parallel LC tuned circuit in series with your input. Theoretically it has infinite impedance and if tuned correctly it should dramatically reduce signals at 100kHz. Follow this with a series tuned circuit at 150kHz to further enhance your signal with respect to all other sources of noise or interference: -
Here's one I quickly put through the simulator. Notice that at 100 kHz there is almost 50 dB of rejection, whilst at and around 150 kHz the attenuation is about 14 millidecibels.
H: Connecting strain gauge output terminals to an instrumentation amplifier
Before posing the question, the figures are as follows:
In Figure 1 a strain gauge's outputs v1 and v2 are connected to an instrumentation amplifier's inputs. For simplicity the buffering part is not drawn, and I'm assuming the signals are balanced with equal line impedance. Let's say I measure v1 and v2 with respect to ground: v2 = 0.390 mV and v1 = 0.400 mV, so v1 - v2 = 0.010 mV. If the inamp is not ideal, the common-mode voltage vcm = (v1+v2)/2 = 0.395 mV will affect the output voltage, since every non-ideal amplifier has common-mode gain.
In Figure 2 the same outputs from the same strain gauge go into a mysterious circuit called M which converts v1 and v2 to v1' and v2' such that v1' = (v1-v2)/2 and v2' = -(v1-v2)/2. So v1' = 0.005 mV and v2' = -0.005 mV, and again v1' - v2' = 0.010 mV. But in this case the common-mode voltage vcm = (v1'+v2')/2 = 0 mV will not affect the output, since it is zero.
Now my question is: Which way is used in real life? If it exists, what is the mysterious circuit M in my Figure 2 which gets rid of the common-mode voltage? I'm asking this question because we often use instrumentation amplifiers for strain gauges.
AI: I design strain gauge amplifiers and use a fair amount of gain, and my observation is that imbalances in the bridge resistors are the biggest source of error. Having said that, I go for amps that have good CMRR because the gauges are wired up to 10 m distant and can pick up a fair amount of noise. I don't use anything between the bridge and the IA.
If you're using quarter-bridge active devices, as per your diagram, I use constant-current feed for excitation because it's twice as linear, i.e. fewer theoretical errors.
I also use auto balance mechanisms on the REF pin of the IA fed from a DAC.
H: Compilers and MSB identification for Data types
I would like to know two things that is making my head quite confusing these days.
If I allocate an int with the C18 compiler, I know it will take two bytes, and if I initialize a pointer to the variable, say:
int x,*xptr;
xptr = &x;
then I have to access *xptr for first byte and *xptr+1 for the second byte?
Well, what factor determines that which one is MSB and LSB?
AI: The order is defined by the compiler. If you look at the MPLAB C18 C Compiler User's Guide and search for "endianness" on page 12 you'll find:
Endianness refers to the ordering of bytes in a multi-byte value. MPLAB C18 stores data in little-endian format. Bytes at lower addresses have lower significance (the value is stored "little-end-first").
So in your example the low byte sits at the lower address: reading through a byte pointer, *(unsigned char *)xptr gives the LSB and *((unsigned char *)xptr + 1) gives the MSB.
H: Why would I use a thermistor instead of a LM35/36?
Is there any advantage over the other in using a thermistor versus a temp sensor like a LM36 or LM35?
The only major difference I can see is that the LM35/36 have voltage limits of 5.5V, while the thermistor is really just a temperature-sensitive resistor that I could use with any voltage I want. Maybe the temperature ranges?
Is that about it?
AI: Thermistors are cheaper and smaller, which is useful for applications where many points have to be sensed and little room is available. They can cover a wider temperature range than the LM35. They only have 2 leads and require less power per sensor, since many sensors can share a readout circuit. They can also provide higher accuracy if needed. For example, there are oceanographic thermistors designed just for measuring the temperature of seawater that are accurate to better than 0.1 °C over the -5 °C to 35 °C range. They are also useful in circuits where a temperature-variable resistance is needed instead of a voltage proportional to temperature.
H: Kicad trace thickness and autorouter
Using Kicad, how do I go about specifying a trace thickness for power rails larger than the default in such a way that this info can be used by freerouter?
A section or page reference into one of the manuals would be fine, I really just need to know where to start reading.
AI: In the PCB-New manual, see NetClass. Open the Design Rules menu and choose Design Rules to open the Design Rules Editor dialog. In the list at the top there is one defined NetClass called Default. Create a new NetClass called Power with the track width etc. that you need.
Below this is a pair of lists surmounted by combo boxes, both defaulted to "* (Any)". Set the right-hand one to your new Power NetClass and then move the power nets to this NetClass by selecting them and clicking the >>> button.
H: Combined chip with LDO and Buck, what use?
I am searching for a Li-polymer 1A charger with a 3V regulated output. I have found the Linear LTC355x series but, after reading many "LDO vs Switching" debates, can't understand the use of both on the same chip. The LTC3553, for example, offers a 200mA buck and a 150mA LDO.
I need to drop the USB 5V and battery 3.3-4.2V down to 3.0V with a 170mA maximum draw. Space on the PCB is at a premium so I'd prefer a combined chip. The PCB also contains 868 MHz and 1575 MHz antennas, so a linear converter might be better?
Should I choose between the LDO and the buck (I couldn't find an LDO-only or buck-only chip) and disable the other? Should the buck be the main source and the LDO the backup/standby source? Or is it only used to output different voltages, for example 1.2V and 3.0V?
EDIT: Thank you for your answers. They got me thinking about the VINLDO pin, and I was wondering if feeding the LDO from the buck to obtain a clean output is worth a try? Assuming:
Battery safe voltage range is 3.0-4.2V and VUSB is 5.0V.
All devices on the PCB could use 2.8V.
Buck is needed to obtain an efficient conversion from > 4.0V.
LDO is needed to obtain a clean output.
=> The buck could output 3.0V to the LDO which in turn would produce a clean 2.8V ?
AI: Looking at the application circuit, it seems that they have made one channel an LDO and the other a buck to save cost and component count.
The core needs higher current and lower voltage than the I/O. A buck works better for that than an LDO.
The I/O needs somewhat higher voltage and less current (usually), so an LDO is adequate.
Why not 2x bucks? Because a buck requires more components (inductor and better capacitor) than LDO. In this case, the combination of buck and LDO is a compromise of efficiency vs. cost and size.
It's not uncommon to use an LDO to power RF communication and analog front ends, because these can be susceptible to EMI. At the same time, the digital section can be powered from a buck.
H: Charging phone battery with a high current source
I am working on a project in which the goal is to use some Android tablets, each showing a piece of a video file, to form a multiscreen. One electronics problem I have faced is:
The tablets should be always on. Currently I am using their own wall adapter, which is rated at 5V 2A, but when the tablets' screens are at 100% brightness their batteries eventually deplete. I guess they are not designed for this kind of usage (Google Nexus 7). The only solution I could find so far is to reduce the brightness to 30%, but this kills the whole idea.
So do you suggest I try another power source? For example, I have some LED drivers rated at 5V 4A. Is using this rating with the tablet safe (considering the USB cable and port, the charging circuitry and the battery itself)? Or is the battery charging circuitry of the tablet limited to the specified 2A? I really need your opinion before I try to do something stupid!
AI: I'll assume a single LiIon or LiPo battery - nominally 3.6V, actually <= 4.2V and as low as about 3V. Some devices MAY use, e.g., 2 cells at 7.2V and a boost converter for charging, but this is rare.
If you are willing to access the tablet battery connections and to possibly use an external charger or even an external battery or power supply then you will be able to run it as 100% brightness continually, subject to cooling issues. See "The easy way" below.
If not, then:
There are two most likely internal circuit arrangements.
Also some variants, but the following two should cover most cases.
In one case - power input goes to device always and only via the battery, a higher current power supply will not help. This is because the internal charger is limited to 2A and the path to the device is via the battery so 2A is your limit.
In the other case - power supply goes to the device proper and the battery is charged from this point - a higher current supply may help. This is because the battery is backfed at 2A max by the supply, but there is also a direct path from the external supply to the device.
In either case it is probable that NO damage will be caused by a higher current supply - but this is not 100% certain.
The easy way
If you can disconnect the battery and feed the battery terminals with a 'virtual battery' with enough current capability then you should be able to operate the device continually at 100% brightness. Say for discussion that Imax is 3A. If you connect a power supply capable of providing >= 3A at anywhere from about 3.2V to 4.0 V to the battery terminals (~ 3.6V best voltage if selectable) then the device will see this as a battery and operate correctly. The supply should be able to provide any startup or other transient peaks (if any) and MUST NEVER provide voltages outside the usual battery range.
The external supply could be a battery plus a charger capable of sourcing >= 3A.
As long as you do not connect power to the device charging input it will see the externally connected power source as a discharging battery.
H: Boat; How to charge Leisure batteries the first time?
I will have 4 x 12V 100Ah leisure batteries for my boat. They will be wired in parallel (using staggered wiring). My alternator is 12V 50A. The batteries will be charged while the boat is moving. Therefore, the best-case scenario dictates it will take at least 8 hours for the bank to charge (400 Ah / 50 A).
The problem is I will not have the luxury of leaving the motor on for 8 hours when the batteries first arrive. At best I will have about 3 hours a day. They will also be used during that time.
The question is will I be seriously affecting the lifetime of my batteries by not charging them fully the first time they are used? Do brand new Marine Leisure batteries ever arrive fully charged?
AI: Lead-Acid batteries are normally delivered in a fully charged state. They are freshly filled with full-strength acid.
If in doubt, charge it onshore using a charger designed for the purpose.
H: XBee Packetization problem
I have an ATmega328, currently working on top of an Arduino Uno, whose final purpose will be to process some sensor information and send it through an XBee (configured as a router) to the coordinator.
So, basically I have an Arduino whose Tx pin is connected to pin 3 on the XBee (configured as ROUTER AT). All information I write through the Serial.println() method is received perfectly on the coordinator in API ZigBee Receive Packet frames (0x90).
Now, considering that the API 0x90 packet is capable of holding 6 bytes of data, I was expecting that if I limited the output on the Arduino to 6 bytes, I would get only 1 frame with the full 6 bytes. But what happens is that I normally get the full message sent from the Arduino broken into several frames.
I think it is mostly due to the packetization timeout (set by RO) while data waits in the buffer. But considering I'm only sending data every 5 seconds or so, the packetization period always times out.
An option could be to configure the XBee as a ROUTER API, and build up the frame myself.
Any thoughts?
AI: Yes, the only reliable option I have found to avoid packets being split is to use the API mode. There is an Arduino library to help you with that already: xbee-arduino from Andrew Rapp. Don't forget to configure the escaped API mode; otherwise the frames will be considered malformed, nothing will be received, and you won't understand why. On the computer side, the xbee-api library (Java) from the same author is great for debugging.
I have put together a page with the XBee tips and resources I found most useful when I was doing something similar with an XBee.
H: Physical Implementation of RSA on Embedded Systems
RSA can be used to prevent unwanted software modifications on an embedded system. How can this encryption algorithm be implemented physically? I mean, which peripheral/main units can be used to check the digital signature, and how do they do this?
AI: RSA is a software algorithm. You can probably implement it on most microcontrollers, in FPGAs and similar, or using dedicated crypto ICs.
Googling for RSA will quickly find you the details of RSA and implementations for various platforms.
What you use depends on how fast you need it to be, which depends on volumes.
RSA isn't a magic security dust you can add to a product. Creating a secure product is a significant undertaking. You need to consider all the attack vectors.
H: Which chip to use for automation using Wifi?
I am working on a project where I will be controlling a device using an Android mobile phone (WiFi as the medium). Which chip currently on the market is low-cost and effective for my application? Size is of no concern (it can be big).
AI: I am currently evaluating the CC3000 from Texas Instruments. It comes at a rather low price, can be used via its SPI interface and a few demo boards are available as well. My current platform is the Atmel AVR32 architecture.
H: Programming CC430F5137 using another microcontroller
I have a CC430F5137 and the board schematic all ready for my application, which requires capturing data over Bluetooth. I wish to program the CC430F5137 using another platform instead of the conventional programmer, since I'm short of money for the project. Other platforms I have are: Tiva series LaunchPad, Stellaris LaunchPad, MSP430 LaunchPad, Arduino, ATmega32 development board. Please help me by advising which to use and how.
AI: If the F5137 supports SBW, then you can use the MSP430 Launchpad as a programmer. Just bring out the two SBW signals plus power and ground to a connector on your board, and connect to J4 on the edge of the Launchpad.
I don't bother with a connector on mine, I use pogo pins to connect to pads underneath the board.
H: Can't get a precise reading with PT100 temperature sensor
I am trying to get a more precise temperature value from a PT100 (2-wire) sensor with an Arduino.
Here's what my circuit looks like:
When the sensor is at about 20 °C I get a value of 432 on the A5 pin, and when it is at about 30 °C I get 440. As you can see this is a very small range (432-440).
I want a larger analog signal range, like from 100 to 1000.
Maybe I can somehow obtain the resistance of the PT100 and, with some math formula, get the temperature.
What can I do with my circuit to get a more precise temperature reading?
AI: You can use an external IC that takes care of reading the sensor and then get the result digitally, for example the MAX31865.
Maybe I could somehow obtain the resistance of PT100 and with some math formula get the temperature?
You can find resistance tables on the internet; you can create a table in flash and use interpolation to calculate the temperature based on the resistance you get.