H: Is the small signal model of a BJT amplifier valid for any sinusoidal signal? According to the small signal model, a CE amplifier amplifies only the signal component of the input voltage that causes small perturbations around the operating point. My question is: if we put, say, a > 1 V sinusoidal signal on the base terminal, then wouldn't the perturbations around the Q point be much larger than the mV range the small signal model requires? How does the transistor amplify this if this is the case? AI: How does the transistor amplify this if this is the case? The transistor doesn't care much about human ideas of what it ought to do. It just does it. The models are ideas in human brains that make it easier for humans to understand nature. When the signal is large, the small signal model doesn't apply, because it is - as the name implies - a small signal model :) The "small signal" name is convenient but not fully descriptive: we can definitely set up the transistor in operating conditions such that the small-signal model will work for signals much larger than the "mV range". You'll need either high operating voltages and fairly large resistances, or external current sources or cascode configurations; negative feedback or feed-forward correction helps too. Discrete power audio amplifiers do their best to operate the transistors such that they are quite linear even with signal amplitudes of tens of volts. With global negative feedback, these transistors end up operating almost exactly how the small signal model predicts, at power levels of tens, hundreds, or even thousands of watts. Those circuits aren't very simple anymore, but the model applies - because the desired behavior is linear in spite of high signal amplitudes and currents. Perhaps you're actually asking: what model will work for signals larger than the small signal model can handle in a given circuit?
The next "better" model that is meant to work with larger signals in usual amplifying transistor configurations found in textbooks is the Ebers-Moll model. As the conditions of use of the transistor become more "complex", i.e. stray farther away from textbooks and simple circuits, or if we need higher accuracy that the simplest models provide, even the Ebers-Moll model becomes inadequate, and more advanced/higher-fidelity models are needed. The modern models of transistors used in circuit simulation ("SPICE" and similar tools) are complex enough that sizable monographs describe their derivation and use, and many tens of parameters are needed to model the real device. The state of the art 50 years ago (mid-1970s) was already advanced enough that Ian Getreu's "Modeling The Bipolar Transistor" book is about 250 pages long. The same depth of coverage of a modern model would be easily 4x as long. That's pretty much the case with any sort of an engineering model. In mechanics of materials, linear elasticity is the "small deflection model", beyond that you end up with various models of non-linear elasticity, plasticity, fracture mechanics, and so on. Even those have finite application ranges. If you were to slam a relativistic (very, very fast or energetic) baseball into a chunk of steel, none of the usual engineering models will work - you'll need quantum-electrodynamical models :)
H: How does this step-up converter work? I just bought an adapter to use an ATX PSU as a replacement for an HP proprietary one. It has a step-up module which converts the ATX 5VSB to 12VSB. I just want to know what the extra 12 V input is for and if it can be removed. It seems like it passes the 12 V rail together with the step-up output through a diode. Thank you AI: Looks like it's a diode-OR function. I would assume the connection is in case there's already a 12 V standby present in the system. Assuming there are no new connections on the backside, of course. But the real answer is: read the manual. If there's no manual, guide, or spec sheet... then it's anyone's guess how it's supposed to work.
H: How to control the amplitude of a Wien-bridge oscillator? I am trying to design a Wien-bridge oscillator and understand its basic working principle. Now I have got to the point where I want to add a system which would make the gain >3 during startup and <3 when exceeding the target amplitude range. Apart from inaccessible textbooks, I could not find a proper explanation of the JFET variant. I understand the incandescent bulb one, but I feel it is not practical enough and it is difficult for me to simulate. Can anyone provide a simple circuit with adjustable output amplitude (and possibly an explanation)? EDIT: This is a working schematic of what I found online: from what I understand, the JFET stays open initially (really small resistance), then once the output voltage starts swinging below -0.7 V (or the threshold voltage of the diode), capacitor C3 starts being negatively charged due to the diode conducting. At that point the JFET starts turning off (increasing its resistance), thus reducing the overall gain, until a stable point is reached. What I still don't get is: The function of R8. The function of R7. Is there any way to calculate the values of the components for a required amplitude? Or is it trial and error? AI: If you search for 'Wien bridge oscillator schematic' (without the quotes) and go to images, you will get hundreds of schematics using light bulbs, bipolar transistors, JFETs, MOSFETs, photo-conductive cells, etc. Many of these images are from pages with design tutorials. Here is an example from this site: Great Voltage Stable Oscillator Other examples: https://www.industrial-electronics.com/electrnc-dvcs-9e_16.html https://www.eleccircuit.com/wien-bridge-oscillator-circuit/ UPDATE for the additional questions. You are correct about how and when C3 is charged. However, you don't want the JFET to track the sine wave instant by instant. You want the control loop to respond to the value of the wave amplitude averaged over many cycles.
R7 and C3 form a lowpass filter that does this averaging. A DC voltage (with some sine-frequency ripple) accumulates on C3. R7 limits the current into C3, setting how quickly the stabilization loop responds to changes in the output amplitude due to, for example, changes in the load. If the output increases (again due to a change in the load), something has to discharge C3 and reduce its voltage, to increase the JFET's channel resistance and decrease the output amplitude. R7 cannot do that because D1 is reverse-biased when the output is too large. Enter R8. It is the discharge path for C3. How quickly the loop responds to an overvoltage and an undervoltage are called the attack and release times. They are semi-independent. R8 and R7 form a voltage divider, so they interact in setting the times. For the values in the schematic, R8 is almost 10 times greater than R7. This means that the loop will respond more quickly to an overvoltage, when most of the current through R7 is going into C3 (but a small amount is bypassing C3 through R8), than to an undervoltage, when C3 discharges solely through R8. A characteristic of any R-C circuit is a phase shift or time delay. The larger C3 is, the more smoothly and slowly the circuit will respond to amplitude changes. With a small C3, a step change in the output load will cause the output signal to overshoot, then undershoot less, then overshoot even less, etc. until the output amplitude stabilizes. In effect, the control loop is its own damped sinewave oscillator.
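Since the schematic values aren't reproduced here, the attack/release time constants can be sketched with hypothetical values (R7 = 10 k, R8 = 100 k, C3 = 1 uF are my assumptions, chosen only to honor "R8 almost 10 times greater than R7"; the diode drop and source impedance are ignored).

```python
def agc_time_constants(r7, r8, c3):
    """Attack: D1 conducts and C3 charges through R7, with R8 bleeding
    in parallel, so the charging time constant is (R7 || R8) * C3.
    Release: D1 is off and C3 discharges only through R8."""
    attack = (r7 * r8 / (r7 + r8)) * c3
    release = r8 * c3
    return attack, release

# hypothetical values in the spirit of "R8 almost 10x R7"
atk, rel = agc_time_constants(r7=10e3, r8=100e3, c3=1e-6)
print(f"attack ~ {atk * 1e3:.1f} ms, release ~ {rel * 1e3:.0f} ms")
```

With R8 much larger than R7, the attack time constant is dominated by R7 and the release by R8, which is exactly the asymmetry described above.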
H: Power electronics on a PCI/PCIe expansion card I'm not sure if it's been done before, but how feasible is the idea of building a computer expansion card (PCI/PCIe) that carries mains power on some areas of the card, switched by relays, TRIACs or IGBTs mounted on the board, with the board talking to a controller and/or to the computer over the PCI bus for power, command, control and data acquisition? The closest thing I have found is the PCIE-1765 by Advantech. It has a few relays and tantalum capacitors and seems geared towards interfacing with power systems and electronics. However, what I want is for the metal slot at the back to serve as the connector area for the mains power. Mains power would be carried onto the board using power connectors such as a female IEC C7 connector, which is compact enough not to exceed the slot height, and I envision a power input and an output connector for source and destination. Apart from the obvious electrical safety issues (assuming certain precautions like mains power isolation and proper earthing will be undertaken; to make matters worse, IEC C7 does not have an earth connector), is there a reason why such power electronics switching devices cannot be implemented, or would have challenges being implemented, on a computer expansion board? AI: The linked card is for a DAQ rack where potentially HV measurements can also be present. It is not a PCI card for a PC (personal computer), but for specialty gear. Aside from the safety challenges: Risk of shock. The need for multiple mains sources. There will also be significant EMI challenges: Switching noise of the relays. Conducted noise from the attached signals. Transient immunity? What if it sparks to a SATA cable nearby?
H: Why do some circuits work in LTspice, but not in Proteus? Why do some circuits work in LTspice, but not in Proteus? I was playing with my amplifier circuits in LTspice by changing the values forth and back, I had calculated. I was thinking about a special amplifier circuit and decided to omit the input signal and let the circuit work with only one DC source. The result was so, as I had configured an AC signal from DC signal. Then I was curious, if this would work in Proteus, too. I gave a try with that and ended in ruin. It was not running in Proteus. Proteus gave me a permanent straight line of DC voltage. I was researching why this problem occurred. Some say, that LTspice had implemented a noise in DC source, therefore it should work in LTspice. I begin mistrusting LTspice and Proteus. Can you tell me which simulation of these software is more reliable for amplification of signal processing than the other one? Below is the circuit. The picture is from LTspice. Version 4 SHEET 1 916 1288 WIRE 368 -64 112 -64 WIRE 112 -48 112 -64 WIRE 112 -48 -208 -48 WIRE 112 0 112 -48 WIRE 112 0 64 0 WIRE 160 0 112 0 WIRE 64 80 64 0 WIRE 160 80 160 0 WIRE 368 160 368 -64 WIRE 64 192 64 160 WIRE 128 192 64 192 WIRE 160 192 160 144 WIRE 160 192 128 192 WIRE -208 208 -208 -48 WIRE 128 384 128 192 WIRE 528 384 128 384 WIRE 544 384 528 384 WIRE 128 448 128 384 WIRE 240 448 128 448 WIRE 128 480 128 448 WIRE 240 496 240 448 WIRE -208 528 -208 288 WIRE 64 528 -208 528 WIRE 128 608 128 576 WIRE 240 608 240 560 WIRE 240 608 128 608 WIRE -208 656 -208 528 WIRE 128 656 128 608 WIRE -208 864 -208 736 WIRE 128 864 128 736 WIRE 128 864 -208 864 WIRE 128 912 128 864 WIRE 208 912 128 912 WIRE 368 912 368 240 WIRE 368 912 208 912 WIRE 208 944 208 912 FLAG 208 944 0 FLAG 528 384 antenna IOPIN 528 384 Out DATAFLAG 128 288 "" SYMBOL cap 144 80 R0 SYMATTR InstName C1 SYMATTR Value 2.2p SYMBOL ind 48 64 R0 SYMATTR InstName L1 SYMATTR Value 1µ SYMBOL voltage 368 144 R0 WINDOW 123 24 124 Left 2 WINDOW 
39 0 0 Left 0 SYMATTR Value2 AC 1 SYMATTR InstName V1 SYMATTR Value 12V SYMBOL npn 64 480 R0 SYMATTR InstName Q1 SYMATTR Value BC547B SYMBOL res -224 192 R0 SYMATTR InstName R1 SYMATTR Value 1k SYMBOL res -224 640 R0 SYMATTR InstName R2 SYMATTR Value 470 SYMBOL res 112 640 R0 SYMATTR InstName R3 SYMATTR Value 1k SYMBOL cap 224 496 R0 SYMATTR InstName C3 SYMATTR Value 3.3p TEXT 56 968 Left 2 !.tran 20us TEXT -432 992 Left 2 !;ac lin 500 100Meg 130Meg AI: I begin mistrusting LTspice and Proteus. As you well should. With circuits that are fairly complex, you must understand how they work on your own. You should be able to say what the circuit is doing more-or-less before even trying a simulation. The simulation should be there to point to any errors in your reasoning, and to give quantitative results. Circuit engineering can be done with "simulator in the loop", but that requires some experience. Can you tell me which simulation of these software is more reliable for amplification of signal processing than the other one? They are both equally good, when used correctly. Your mistake is assuming that SPICE can always start an oscillator. Don't assume that, and instead give the oscillator a little "kick" to get it going in SPICE. You can validate reliable startup later on the bench. Or you can use SPICE to show that only the DC operating condition that supports the oscillation is stable. The other DC operating conditions where the circuit doesn't oscillate should be unstable, i.e. the circuit should "get out of them" by itself. In fancy terms, the DC phase diagram of the circuit should have only one attractor, and that DC operating point should provide enough gain to maintain the oscillation condition. Sometimes, the only "kick" needed to get an oscillator going is to start the simulation from zero initial condition, and not from the stable DC operating point. After all, that's how the circuit will be used in practice. 
It doesn't just "drop into existence" at the final DC operating point. Instead, it's powered up, and the supply rail goes from 0 to nominal value over some finite time. If this "zero initial condition (0 IC)" approach doesn't work, you can inject a low amplitude current pulse into the circuit. The pulse can go from say 10uA down to 0uA after some small fixed time on the order of the oscillation period. That way long-term, there's no current injection, and the transient from 10uA down to 0uA is what "kick-starts" the simulation. Real circuits usually are noisy enough that they get oscillating by themselves. Now, perhaps the oscillation is unintentional, so your question is the inverse: "how to prevent this thing from oscillating, I want an amplifier, and I want to trust the simulation that tells me there's no oscillation". For that, you need to do a loop gain analysis, whether in frequency or time domain, but there's no way around it. Look at Barkhausen stability criterion and go from there. Some people (myself included) find it easier to reason about loop response in time domain rather that frequency domain, so don't think that a Bode plot is the only way to visualize things. Impulse response provides same information in time domain.
H: "Dirty" response from paired TSOP4838 IR receivers I have connected the outputs of two TSOP4838 IR receiver modules using diodes to prevent reverse current. The idea is to mount them on a moving model vehicle at 180 degree orientation to one another so that the field of reception is closer to 360 degrees. If this is a poor idea and there is a better solution I would be interested to know. The schematic is shown. simulate this circuit – Schematic created using CircuitLab The output at both points A, shown on the scope capture is very clean as expected: However at point B the output signal is "dirtied" somewhat: Can anyone explain to me why this is, and if there is a method of "cleaning-up" the output signal so that it closer matches the output at points A? Note: I did not necessarily press the same button on the IR transmitter to generate the signals at points A and B - it's the "shape" of the output waveform that is of interest. Datasheet: Vishay TSOP4838 AI: There are two problems with the circuit you presented: The slow fall of the signal is as described by other answers. The output is floating when both sensor outputs are low. You need a pull-down resistor. Something like 10 kohms is suggested. (As @RickyBoy noted in the comments the internal pull-up of 33k does not allow that so a higher value such as 100k is needed) The diodes are the wrong way round. As your traces show the no-signal value is a logic high, it goes low when there is a signal. If one sensor receives a signal and the other does not the logic high from the no-signal sensor will win and you will get just a sustained logic high at the output. You need to reverse the diodes and put a pull-up resistor at the output for it to work as you require. Again 10 kohms is a suitable value. As the internal circuit of the sensor is a transistor pulling down with a passive pull-up the same effect can also be achieved without using diodes by just connecting together the outputs of the two sensors. 
In that arrangement an external pull-up resistor would also not be needed as the internal ones should suffice. Image from Vishay Semiconductor TSOP4838 datasheet
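The active-low logic above can be checked with a tiny truth-table model (plain Python; the function names are mine, and True/False stand for idle-high and receiving-a-burst respectively).

```python
def diode_or(a, b):
    """Diodes passing logic-high toward the output: high if either is high."""
    return a or b

def wired_and(a, b):
    """Open-collector outputs tied together with one pull-up:
    whichever sensor pulls low wins (a wired-AND of active-low outputs)."""
    return a and b

# TSOP outputs idle high (True) and pull low (False) on a received burst
for s1 in (True, False):
    for s2 in (True, False):
        print(s1, s2, "-> diode-OR:", diode_or(s1, s2),
              " wired:", wired_and(s1, s2))
```

With the diode-OR of highs, one sensor receiving (False) while the other idles (True) still gives True, i.e. the burst is lost; with the wired connection it gives False, which is the behavior the question requires.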
H: Should I throw these boards with soldering issues out? I've just baked some boards. The first two looked normal, but upon closer inspection I noticed that paste did not melt under one chip with an exposed-pad package. I used a slightly different profile with 30 s longer preheat and 10 s longer peak (both are leaded profiles) for the rest of the boards. Now there were no issues with paste, but two capacitors from different manufacturers received a faint tint on their pads, one pink, another blue. Below are photos from the first and last board. Should I just throw these boards out as over-cooked? Should I replace the capacitors and see if they work? Should I just ignore the capacitors and try powering the boards? UPDATE: for reference, the paste was TS391AX50, with about a month left of shelf life. AI: The bottom picture shows sputtering on the solder; this happens when the flux has issues - incorrect solder masking, or solder that is past its shelf life. It's not great to have solder like this, as sometimes the components don't solder; worse, with fine-pitch components like 0402s or below you can get solder balls creating solder bridges. It really depends on how much rework you want to do to get these boards back up to spec, and how much inspection you want to do to make sure that they work. I probably would not send these into a production environment, but if you're just testing they would probably be acceptable if you rework the boards. I've reworked boards like this before so I didn't have to make new ones, but that takes time.
H: Issue with influence between channels in a dual-channel high-side current sensing circuit I've designed a two-channel current sensing circuit integrated into a board that outputs two separate signals. However, I'm encountering an issue where one channel seems to influence the other, which, to my understanding, should not happen. Each channel in this circuit has its own current sensing component, but it appears that the operation of one channel is affecting the other. Additionally, I've observed that when there is no load connected to either channel, the circuit functions correctly. However, under load, the behavior changes. Specifically, when a 5 A load is applied to only one channel, the voltage readings are as expected, but when one channel has a 5 A load and the other has a 1 A load, the voltage readings across the channels are not as expected. This discrepancy in voltage readings under different load conditions is puzzling. I'm trying to figure out the root cause of this problem and understand why the voltage readings are affected in this way. Could this be a result of some form of interference, a design flaw in the circuit, or something else? Furthermore, if there is a consistent relationship between the interactions of these two channels, I would like to know what formulas or calculations can be applied to predict or compensate for this influence. Any insights or suggestions on how to diagnose and resolve this issue would be greatly appreciated. AI: If the layout follows the schematic, current going to the output on the bottom creates a voltage drop on the trace resistance, which will be measured by U2. It should be routed like this instead, preferably with a note on the schematic. If U2 is some distance away from SR1, then the two sense lines should be routed as a differential pair. To have the design rule check enforce it, you can put a net tie which makes the two sense lines into different nets from the power traces. 
This is especially useful if the power traces are copper pours, to prevent the sense lines from being connected to the wrong place. Layout around the sense resistor is important for accuracy too. EDIT: It would be useful to calculate the resistance of the shared trace: Here's a trace resistance calculator. I can't see the dimensions, but you can get width and length easily from your layout software, and copper thickness from your fabrication specs. It should be a good chunk of a milliohm... since your current sense resistors are only 2 mOhm, that would introduce a significant error. Likewise, since your sense traces do not go to the 2 mOhm resistors, but to the power traces some distance away from the resistors, the effective resistance value should be higher than 2 mOhm. So your current sense amps should have higher gain than intended. Maybe +50%...+100% gain error, but I have no idea what copper thickness you used, so it may be different. Knowing the trace resistance, you can calculate how much error should result (both on the gain and on crosstalk) and check the readings to confirm. On the prototype board you can cut the sense traces and replace them with hand-soldered wrapping wire, soldered directly between the inputs of the sense amps and both ends of the current sense resistors. If that solves the problem, you know what to do for the next layout revision. I don't see the filter caps C7/C9 on the layout... they should be close to the ADC's ground pin. If current flows in your ground plane, that will create voltage drops, so "ground" is not at the same potential everywhere. The current sense amps output a "ground"-referenced voltage, but the reference they use is their own ground pin, wherever it is on the board. So if that "ground" is not at the same potential as the "ground" the ADC uses as its 0 V reference, you have a bit of extra noise.
It's the same for the capacitors, if your filter cap is on a noisy part of the ground plane, like near power devices or DC-DC converters, it will inject the ground noise into the signal it's supposed to filter.
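A back-of-envelope check of the shared-trace resistance can be done in a few lines (plain Python; the 10 mm x 3 mm, 1 oz copper geometry is my assumption, plug in the real dimensions from the layout).

```python
RHO_CU = 1.68e-8  # resistivity of copper at 20 degC, ohm*m

def trace_resistance(length_mm, width_mm, thickness_um=35.0):
    """DC resistance of a rectangular copper trace (35 um ~ 1 oz copper)."""
    area_m2 = (width_mm * 1e-3) * (thickness_um * 1e-6)
    return RHO_CU * (length_mm * 1e-3) / area_m2

# hypothetical shared-trace geometry: 10 mm long, 3 mm wide, 1 oz copper
r_trace = trace_resistance(10.0, 3.0)
r_sense = 2e-3  # the 2 mOhm sense resistor from the question
gain_error = r_trace / r_sense
print(f"trace ~ {r_trace * 1e3:.2f} mOhm -> ~ {gain_error * 100:.0f} % gain error")
```

Even this modest geometry gives on the order of a milliohm, i.e. a large fraction of the 2 mOhm sense resistor, which is consistent with the +50%...+100% gain error estimate above.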
H: Does a single input differential amplifier remove noise? It is known that the purpose of using a differential amplifier is to remove noise from the input signal, apart from giving some small amplification. To my knowledge, there are two inputs and two outputs in a differential amplifier. Usually, two inputs are given to the circuit: one is the original signal, and the other is the "flipped" version of the same; this is the case in the dual-input differential amplifier. But we also have single-input differential amplifiers, where one input node is grounded, and the other input is the original signal. Does such a configuration remove the noise from the original signal? If yes, I have another confusion. There are two outputs in a diff-amp circuit. If we measure each output individually (with ground as the reference), does it have the noise removed or not? Or is it only when I take the difference of both outputs that I will see an output without noise? I am very confused. The context is that I have to construct an audio amplifier with a mic input. My stages of the circuit are Diff-amp --> CS Amplifier --> Filter --> Power Amplifier --> Speaker. The issue with taking the difference of the outputs of the differential amplifier is that the input to my CS Amplifier is just one Vin and not the difference of two signals, which makes it hard to design the circuit. Am I missing something, or how do I go about designing this? AI: Once you understand the way that the differential input "removes" noise from the input, the answer to this is clear. If we call the two input nodes (which are together considered to be a single "input", and I will stay with that convention here) A and B, the differential amp (and a mic amp is a very good example) has a high gain for the input difference (A - B). At the same time, it has low gain for (maybe even attenuates) the "common mode" input (A + B).
So if the output is C: C = G(A - B) + R(A + B) where G is the "differential gain" and R is the "common mode gain" (or rejection). What matters here is the difference between G and R, the "common mode rejection ratio". Let's take some typical numbers for a mic amp with a "standard" mic connected (if there is such a thing, but that is another topic): G might well be 40dB (a voltage gain of 100). R might be 0dB (a voltage gain of 1). So the rejection ratio is 40dB. Let's say that our mic signal is 5mV differential, but 1mV of common mode noise gets into our 10 metre microphone cable (which will be a balanced twisted pair with an overall screen connected to 0V at the mic amp end). This means that the noise is 20% of the signal at the input to the amp. At the output of the mic amp we still have 1mV of noise, but we amplified the differential audio signal by 100, so we have 500mV. At the output of the amp the noise is now only 0.2% of the signal. We have effected a massive improvement in signal to noise because of our rejection ratio. Now to answer your question: this improvement relies on keeping the induced noise equal on the two legs of the differential input. If we connect the B input to 0V, for example, so that it no longer receives the same common mode noise as the A input, the noise will be amplified by 100 (as it is on the A input but not the B) and we will not achieve the improvement in signal to noise ratio. So from this, we can deduce the important factors in amplifying a microphone signal with the best signal to noise: a mic amp with excellent common mode rejection ratio (high differential gain, low common mode). a mic amp with well matched impedance to ground on A and B inputs (if this is not the case, the induced voltages from the external noise will differ). a microphone signal which is differential (it does not really matter that much that the A and B legs are equal and opposite, but there must be a difference signal between A and B).
the induced external noise should be equal on A and B inputs. (That is why we use a twisted pair - to try to ensure that each leg has the same induced voltage. Screening helps reduce the overall magnitude of the noise, but in fact using a twisted pair may be just as important. Older analog professional audio systems often used multiple unscreened twisted pairs to carry multiple line-level signals between equipment for this reason.)
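The worked numbers above condense into a few lines (plain Python, using the answer's figures: differential gain 100, common-mode gain 1, 5 mV differential signal, 1 mV common-mode noise).

```python
def snr_through_diff_amp(g_diff, g_cm, v_sig_diff, v_noise_cm):
    """SNR before and after a differential amplifier, for a purely
    differential signal and purely common-mode noise."""
    snr_in = v_sig_diff / v_noise_cm
    snr_out = (g_diff * v_sig_diff) / (g_cm * v_noise_cm)
    return snr_in, snr_out

snr_in, snr_out = snr_through_diff_amp(g_diff=100, g_cm=1,
                                       v_sig_diff=5e-3, v_noise_cm=1e-3)
print(f"input SNR {snr_in:.0f}:1 -> output SNR {snr_out:.0f}:1")
```

The SNR improves by exactly the rejection ratio g_diff/g_cm; ground the B input and the noise rides the differential path instead, so the improvement vanishes.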
H: Doubt in applying KVL I want to apply KVL to loops abcda and befcb in the circuit given below. Is there any role of the 2 A current source in those two loops that I need to consider before applying KVL? AI: Is there any role of the 2 A current source in those two loops, that I need to consider before applying KVL? If you convert the 2 amp source and the 5 Ω parallel resistor to an equivalent 10 volt voltage source (Thevenin and Norton theorems) you get this: - And clearly there is a role played by the 10 volt source; hence, there is a definite role played by the 2 amp source. Can you see that you can simplify the equivalent circuit by combining the 5 Ω and 10 Ω resistors and find a solution with more ease? There are only two loops now, for instance. However, I'd continue using source transformations (Thevenin and Norton theorems) until you have the voltage at node b with respect to node c.
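The source transformation used in this answer is a one-liner worth writing down (plain Python sketch).

```python
def norton_to_thevenin(i_n, r_n):
    """Norton source (current i_n in parallel with r_n) -> Thevenin
    equivalent (v_th = i_n * r_n in series with the same resistance)."""
    return i_n * r_n, r_n

# the answer's example: 2 A in parallel with 5 ohm
v_th, r_th = norton_to_thevenin(i_n=2.0, r_n=5.0)
print(f"2 A || 5 ohm -> {v_th:.0f} V in series with {r_th:.0f} ohm")
```

The resistance is unchanged by the transformation; only the source type changes, which is why the 2 A source still plays a definite role in the KVL loops.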
H: PC power supply bench breakout board. What is -12V used for? I bought a useful PC power supply breakout board, fused. It has the following outputs: -12V, +12V, +5V, +3.3V. What is -12V and how would I use it? The -12V has its own negative. AI: -12V is a negative 12V output, used for something that needs a negative supply in addition to a positive supply, for example a dual-supply op-amp powered by +12V and -12V. You can use it for anything you want, or not at all if you don't need it. It will provide roughly 1 A max anyway. And no, the -12V does not have a separate ground. All supplies are referenced to the only ground, the black wires on a generic ATX supply. What you have on your adapter and how you interpret what it means can of course be different.
H: If RF signals require matched impedance to remove reflections, why do approaches to shunt noise need only a “good path to ground?” From my understanding, for signals to propagate without reflection the impedance of the path must be constant and can be shown via the telegraph equation. When it comes to noise on the line, for example the power to an IC, you're told for high frequency noise to have a solid, nearby path for the capacitors to shunt the noise. I'm assuming the high frequencies in this case see a low impedance to ground through these capacitors. How does this apply with the idea of matching impedance? If an IC or whatever is generating the noise, and it sees this low impedance path, why does it not just get reflected back? AI: Noise and reflections are different phenomena requiring different treatment. While reflections back and forth super-impose to become what could be considered "noise" by the receiver, it's generally not something you can "filter out" in the frequency domain (with a low pass filter, for instance). Reflected signals have the same frequency components as the original unadulterated signal of interest, so once they've mixed in with the original signal you can't then selectively remove "certain frequencies" in the hope of being left with the original, without killing the original too. The only solution is to prevent reflections as much as possible, across the entire frequency spectrum, which requires proper impedance-matched line termination. The frequency spectrum of random noise, on the other hand, is broad, and presumably stretches beyond the spectrum of the signal of interest, in both directions. Therefore this is an element of the signal that can be attenuated by removing frequency components that fall outside the spectrum of components of the original signal. So, to diminish the effects of random noise, a band-pass, or low-pass filter can be employed. 
As to your question about (I presume) power-supply bypass (or "decoupling") capacitors, usually an explanation is with reference to the lumped element model of things, but I'm going to attempt a crude wave-based explanation, in terms of travelling potential waves and reflections. An IC that produces sudden transients in its demand for supply current, and whose supply nodes are not bypassed with a capacitance, will cause large voltage fluctuations between those supply nodes. Those fluctuations will travel along the supply lines just as any signal would along a transmission line. There will be reflections back and forth, too, with superposition of potential waves occurring at all points. These reflections are a nightmare for anything else connected to those lines. So, we bypass the supply, near the IC, with a capacitor. At high frequency there is now a dead short-circuit across the IC's supply terminals, which effectively removes that IC from the rest of the circuit. (By short-circuit, I am referring to the extremely low impedance of the capacitor, not the physical length of the transmission line between capacitor and IC). That short-circuiting capacitor absolutely does reflect any incident voltage fluctuations, but being a short-circuit (at frequency), the reflected signal is 180° out of phase with the incident signal. In other words, any signal emerging from the IC at its power terminals, and which reaches the bypass capacitor, is immediately reflected by that short-circuiting capacitance, 180° out of phase with the incident wave. The reflected waves travel back towards and inside the IC. They interfere destructively with waves coming the other way, resulting in a net superposition that cancels all voltage fluctuations. 
For example, if the IC tries to instantly lower the voltage across its power supply from 5V to 4V (by suddenly drawing a lot of current), the new 1V lower potential difference travels towards the bypass capacitor, and is reflected 180° out of phase, becoming a 1V higher potential difference, which when superimposed onto the outgoing wave, restores the potential difference to 5V. Of course, this only works if the distance between bypass capacitor and IC is small enough that the reflected signal arrives back at the IC practically instantly. If that distance is too great, then the reflected wave will take time to come back, and there will be enough time for the IC's power supply potential difference to fall to 4V and stay there for a while. As I said, that was a crude explanation. In reality, the transition from 5V to 4V won't be instant, and the waves I talk of will have components at many frequencies and amplitudes. The superposition won't be a simple \$(+5V) + (-1V) + (+1V) = +5V\$, it will be the sum of goodness knows how many waves. The result is the same though. A capacitor across a transmission line will tend to keep the potential difference constant, by reflecting incoming waves 180° out of phase, which interfere destructively on their way back.
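To make the phase argument concrete: an ideal (lossless) shunt capacitor reflects with |Γ| = 1 at every frequency; what changes with frequency is the phase, which approaches 180° wherever the capacitor looks like a short compared with the line. A quick check (plain Python; Z0 = 50 Ω and 100 nF are arbitrary illustrative values):

```python
import cmath
import math

def reflection_coefficient(f_hz, c_farads, z0=50.0):
    """Gamma = (Zl - Z0)/(Zl + Z0) for a line of characteristic impedance
    Z0 terminated by an ideal shunt capacitor, Zl = 1/(j*2*pi*f*C)."""
    zl = 1.0 / (2j * math.pi * f_hz * c_farads)
    return (zl - z0) / (zl + z0)

for f in (1e3, 1e6, 1e9):
    g = reflection_coefficient(f, 100e-9)  # 100 nF bypass capacitor
    print(f"{f:10.0e} Hz: |Gamma| = {abs(g):.3f}, "
          f"phase = {math.degrees(cmath.phase(g)):7.1f} deg")
```

The magnitude stays at 1 throughout; the phase swings toward ±180° once the capacitor's reactance is small compared with Z0, which is exactly the out-of-phase, fluctuation-cancelling reflection described above.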
H: ESP32: unexpected voltage readings on EN and IO0 I am designing a PCB for the first time with the ESP32-PICO-V3 chip, but the chip doesn't boot. The chip is connected to the rest of the PCB as seen in the simplified diagram below, where I measured the voltages at EN and IO0 during my troubleshooting. When I give my PCB power (external 5 V input that is regulated to a stable 3.4 V with a voltage regulator), I measure 2.37 V at the EN-pin and 1.8 V at IO0. From my understanding, these pins should have values closer to 3.3 V, but at least above 2.45 V to be considered "high" and boot properly. So far I have taken out the capacitor in the RC circuit at the EN-pin to check if that corrects the voltage drop, but it doesn't. I also measured my MTDI and MTDO pins, which are 0 V (indicating that the internal regulator operates at 3.3 V) and 1.8 V (not sure why) respectively. Does anyone have advice on how I can further troubleshoot and fix this problem? For further clarification, I say that the chip doesn't boot because I can't communicate with it over UART. The IDE fails to connect to the board. AI: I fixed this problem after making two changes. The first, which I suspect might be the biggest fix, is that I decreased the soft-start delay of my voltage regulator to almost nothing. Secondly, like @Andyaka mentioned, I connected VDD3P3_RTC to 3.3V. Not only did I measure 3.3V on the EN and IO0 pins, but I can now successfully communicate with my ESP32 as well.
H: What is the exact reason for grounding (earthing) of a transformer? I have some misunderstandings about the grounding of a transformer. As far as I know, the utility transformer's (the last transformer before my mains plug) neutral point (the star-connected point of the 3 phases; this image is probably from the US) connects to the earth with some rods. From there, a green-yellow protective earth wire runs through to the mains plug. The reason for this is said to be primarily "safety": if the live wire accidentally touches the chassis of a household appliance (like a refrigerator), this low-impedance connection lets the ground-fault current flow and the RCD device trip. I understand that the PE cable running from the mains plug to the transformer is critical from a safety perspective. But why are we earthing the neutral point of the transformer? Maybe for lightning protection? I question this because if this earth connection did not exist, we would not be shocked when the live cable touches any chassis (someone argues we are shocked in any case because of capacitive coupling between the earth and the other phases; please tell me if that is right). The fundamental rule of circuit theory is "current must return to the source", hence discourses like "the earth is a global charge pool that can sink/supply any current and provide a global 0 V reference" must be wrong (the potential difference that causes lightning is already between the earth and the clouds). To sum up: why are we earthing the neutral point of the transformer? Please let me know if there's anything I missed or don't know. AI: The reason for this is said to be primarily "safety": if the live wire accidentally touches the chassis of a household appliance (like a refrigerator), this low-impedance connection lets the ground-fault current flow and the RCD device trip.
Correct. As for "if this earth connection did not exist, we would not be shocked when the live cable touches any chassis": the transformer is earthed for the same reason as above. Simply replace the appliance in your home with the utility transformer, and replace the breakers in your home with the breakers in the power station. If the utility transformer is damaged and gets a short between primary and secondary, or via the chassis, then high voltage from the primary (2-35kV) will end up on the secondary side. If the secondary side is Earthed, then the same thing happens as you described: an abnormal current flows between primary and ground, and breakers trip on the primary side. Neutral on the power station transformer is also Earthed for the same reason: if the high voltage power lines touch the ground, or a crane, or if a tree falls on them, then abnormal current will flow and breakers in the power station will trip. Without this, a faulty transformer could potentially send kilovolts into your home without it being detected. At that voltage, it's pretty much instant death.
H: How on Earth can a solid state relay be constructed? Relays are very handy components. Relays allow current in both directions in the controlled load circuit, so they can be used for both AC and DC. However, relays have a finite lifetime if switched often, produce a loud audible click, and consume power in the controlling circuit because of the electromagnet. Now, when replacing a relay with a solid-state component, you might want to: Use a BJT. However, BJTs, despite being called "bipolar", conduct current in only one direction. Use a MOSFET. However, MOSFETs have a body diode, which means current in the wrong direction always passes regardless of the switch state. Use a gate-turn-off thyristor. However, it conducts current in one direction only. Use a TRIAC. It conducts current in both directions. However, it requires AC, because the only way to turn it off is to wait for the commutation of the power line signal. Is there such a thing as a solid-state relay that could replace a relay in practical applications? Requirements would be that it conducts current in both directions and can be turned on and off electronically, and ideally the control would be as simple as it is for a relay. I know that in most cases you know whether you have AC or DC, so you can choose between (1)-(3) and (4): if it's DC, use (1)-(3); if it's AC, use (4). A gate-turn-off TRIAC might be close to meeting the requirements but might require trickier control than a MOSFET/BJT. Is such a component as a gate-turn-off TRIAC even theoretically possible? Or can a practical solid-state replacement for a relay be constructed as a circuit from simpler components? The Wikipedia page on solid-state relays does in fact mention TRIAC types that work only for AC (so they don't satisfy the generality requirement), failing entirely to mention the existence of a gate-turn-off TRIAC.
It also mentions two back-to-back MOSFETs with source pins connected together, but I have some trouble understanding whether such a component can actually replace a relay, as the control mechanism might indeed be very tricky and might not work in some applications where relays satisfy an isolation requirement. None of the top Google search results mention how a useful solid-state relay might be constructed, apart from the Wikipedia page, which I have trouble understanding (how on Earth are two back-to-back MOSFETs controlled in practical applications?), and a quick search of this Electronics StackExchange forum doesn't give any useful indication of how a useful, practical solid-state relay working for both AC and DC might work. AI: Is such a component even theoretically possible? Of course. Use two MOSFETs back-to-back with sources connected, driven by a photovoltaic opto-coupler like this one that can switch a speaker off: - Or can such a component be constructed as a circuit from simpler components? This is a typical solution for switching AC loads (pin B is not normally needed except if you build it yourself and want to access it): - Image from here. Choose the MOSFET rating carefully and you should be in business, but take care because high-ish voltages can be very dangerous.
H: Sensing a plugged-in 3.5mm jack with the TRRS contacts I am using the pictured TRRS connector and want to detect whether a plug is plugged in via the built-in switches (pins 2/3/4). However, the switches seem to be connected to the audio lines as well. Surely connecting a resistor divider/voltages/other sensing would affect the audio signal? Preferably I don't want to use "overkill" ICs - I don't need any more amplification. I've seen the TI TPA6166A2 mentioned that can do this, but it only seems to be available in BGA (and is far more complex than I think I need). (However, I am willing to find a hand-solderable IC if it is the only way.) Other answers have mentioned the switches, but not specifically how to use them to detect a plug without affecting the audio signal. What sort of circuit would let me detect these switches opening in this way? (I have bridged together the Left and Right channels because my audio source is mono. I have designed this for "Apple"/AHJ pinout jacks.) AI: The thin rectangle between pins 4 and 6 is probably an insulator that provides a mechanical push to open pins 3 and 4. When the plug is inserted, pins 3 and 4 open. Connect one end of a resistor to pin 4 and the other end to 5V. Connect pin 3 to ground. When the plug is inserted, the voltage on pin 4 will switch from 0V to 5V. This will not affect the audio signal. There are many other ways to use this switch.
H: Replace BJT with a MOSFET for a mini 24 V motor I have a mini 24 VDC motor (RH-370CC) with the following specs: It is controlled by an ESP32 (3.3 V output). My initial design was the following: I know, I made a poor choice with the SS8050 transistor. It worked well in the early tests, but it only lasted a few seconds when the motor didn't have any load. Is there any MOSFET in a TO-92 package that I can use to replace the SS8050 transistor? Unfortunately I have already made the PCB. AI: Not a direct answer to your question, but a solution to your problem: You're not driving the transistor hard enough. With 470 Ohms, you'll get about 5mA of base current with 3.3V from the ESP32's GPIO. Ideally, the base current should be at least 1/10th of the collector current. Your motor might need 200mA or so to actually spin up, so you'd aim for 20mA base current. This means that it'll likely work fine if you decrease the base resistor to 100 Ohms. The ESP32 can handle this, too. You might also want to consider using a higher voltage transistor, e.g. a BC337. The SS8050's collector-emitter breakdown voltage is uncomfortably close to your 24V supply rail. If you want to use this circuit for PWM, you should also remove C2 and drastically lower the value of C3 (to e.g. 10nF). They will otherwise cause excessive power dissipation in the transistor. Regarding a suitable MOSFET: I checked Digi-Key, and they do not currently have any MOSFET in stock that would work for your application. The big problem is that you only have 3.3V available to drive the FET, which means that you're looking for a high-current logic-level FET in TO-92. Those are rare and very hard to obtain because it's all SMD now. A BJT is the better choice here.
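The base-current arithmetic above can be sketched quickly (the 0.7 V base-emitter drop and the 200 mA motor current are the usual rough assumptions):

```python
V_GPIO = 3.3   # ESP32 output-high voltage
V_BE = 0.7     # approximate base-emitter drop of the BJT

for r_base in (470, 100):
    i_base = (V_GPIO - V_BE) / r_base
    print(f"R_B = {r_base} ohm -> I_B = {i_base * 1000:.1f} mA")

# With 470 ohm: ~5.5 mA. With 100 ohm: ~26 mA, comfortably above the
# ~20 mA target (1/10 of an assumed 200 mA collector current).
```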
H: What's wrong with my FET transfer function? The following is from the book Design of Analog CMOS Integrated Circuits, page 207. Find the transfer function of the circuit in Fig. 6.47(a): Suppose the voltage at node A is \$V_x\$. Using KCL at node B, $$V_x g_{m1} + \frac{V_{out}} {r_o} = (V_x - V_{out}) s C_F - \frac{V_{out}} {R_D}$$ For \$V_x\$, using the voltage divider formed by \$R_S\$ and \$C_F\$: $$V_x = \frac{V_{in} - V_{out}} {R_S + 1/(s C_F)} \frac{1} {s C_F} + V_{out} $$ $$\Rightarrow \frac{V_{out}} {V_{in}} = \frac{(s C_F - g_m)} {s C_F R_S (g_m + 1/r_o + 1/R_D) + (s C_F + 1/r_o + 1/R_D)}$$ However, the author gives this: $$G(s) = -g_m (R_D \parallel r_o) \frac{1- \frac{1} {g_m} C_F s} {1 + \biggl[(1 + g_m R_D) R_S + R_D \biggr] C_F s}$$ The zero is the same but the pole is different. AI: If you check the book's derivation, you can see that the author ignores \$r_o\$ in their derivation of the driving point impedance. The exact \$Z_{in,0}\$ should be \$(1 + g_m \cdot R_s) \cdot (R_D \,||\, r_o) + R_s\$. Also, whether you can ignore \$r_o\$ or not depends on the relative values of \$R_D\$ and \$r_o\$. As you can see, there is the term \$R_D \,||\, r_o\$. If \$R_D\$ is much smaller than \$r_o\$, you can ignore \$r_o\$. In some cases, for example, if the load \$R_D\$ is a current source which is comparable in magnitude to \$r_o\$, then you can't ignore \$r_o\$. Hopefully, from this example, you can see the usefulness of the Extra Element Theorem (EET), which can provide more insight than the brute force method you used.
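As a numeric spot-check of the algebra above: solving the question's two node equations in closed form agrees, at every frequency, with the author's formula once \$R_D\$ is replaced by \$R_D \,||\, r_o\$ in the pole as well. The element values below are arbitrary illustrative picks, not from the book:

```python
import math

# Arbitrary illustrative element values (not from the book)
gm, RS, RD, ro, CF = 5e-3, 1_000.0, 10_000.0, 20_000.0, 1e-12

def H_from_node_equations(s):
    # Vout/Vin obtained by eliminating Vx from the question's two equations
    GL = 1/ro + 1/RD
    return (s*CF - gm) / ((GL + s*CF) * (1 + s*CF*RS) + s*CF*RS * (gm - s*CF))

def H_exact(s):
    # Author's formula, but with R_D -> R_D || r_o in the pole too
    RL = RD*ro / (RD + ro)
    return -gm*RL * (1 - s*CF/gm) / (1 + ((1 + gm*RL)*RS + RL) * CF * s)

for f in (1e3, 1e6, 1e9):
    s = 2j * math.pi * f
    assert abs(H_from_node_equations(s) - H_exact(s)) < 1e-9 * abs(H_exact(s))
print("OP's derivation matches the author's formula with R_D || r_o in the pole")
```

This supports the answer's point: the discrepancy is only the missing \$r_o\$ in the pole term, which is negligible when \$R_D \ll r_o\$.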
H: Equivalence in circuits I'm studying circuit theory, with only linear elements and no capacitors, inductors or mutual inductances, so no differential equations. All the sources (current and voltage) are constant. To be even more precise, that means that resistors, ideal current/voltage sources, dependent current/voltage sources, op-amps, and ideal transformers are the only elements allowed. QUESTION My question is: is it always true that, if we have to find a current/voltage in B, which is a circuit with whatever inside it (but that must satisfy the premises stated before), thanks to the short circuit we can solve the circuit as if there were no ideal current source (the one on the left)? In other words, are the two circuits below equivalent, if my goal is to find a current/voltage in B? FURTHER CLARIFICATIONS/MORE CONTEXT The question originated from an exercise done by my professor: given the circuit on the left, we need to find the current \$i_x\$, and he did that by simply ignoring the current source, which means that a current source in parallel with a short circuit is equivalent to just the short circuit. AI: Almost. Circuit B can be analyzed as drawn. There is still a current source. I would draw it this way. The diagram demonstrates more completely. simulate this circuit – Schematic created using CircuitLab Update Yes, you are correct. The current \$i_x\$ depends only on the elements inside the box and the short. The short makes \$R_2||R_3\$. All of \$I_S\$ flows through the short also, but it has no effect inside the box.
H: How to space stitching vias to avoid affecting the power plane? On a four-layer board with the standard sig/gnd/pwr/sig stackup, when flooding the top and bottom layers with copper and stitching them to the ground plane, I know the standard rule is lambda/20 spacing for the vias. However, isn't there a risk of there being too many holes in the power plane, increasing its inductance and the size of current loops? Is this typically negligible, or is there a minimum spacing to follow? AI: power plane and increasing its inductance/increasing the size of current loops? Yeah, cutting the plane does that, but the effect would be minimal. Let's say you cut a 1 sq" section back 50%. The DCR would be 0.41 mΩ with no holes (0.5 oz copper) and 0.82 mΩ with holes. Most digital or RF applications won't care about that. For inductance, knocking back 50% of the plane (let's say the plane has a 40 mil spacing around the core) takes you from 1.15 nH/in to 2.10 nH/in, so unless you need the power plane to respond in the GHz range, it probably isn't going to make a difference with numbers like these. On top of that, the inductance and DCR numbers won't be that bad in practice, because most of the holes that would be put in the board would only give a 10-30% reduction in copper cross-sectional area. Current loops won't really happen with a via fence; the current wouldn't have far to go for it to be a problem, as the holes would only be in the tens-of-mils range, and that wouldn't add a lot of inductance by having to go around the via versus straight through. Probably in the 10-100 pH range, and that only matters for very high frequency signals if you were using the plane as a reference, but you wouldn't, because you'd have to create a hole in it for the via fence if you were trying to run a signal through it.
H: Which is the best method to measure AC voltage? I am trying to make a UPS circuit. I need a high-speed measurement of the input AC voltage (220 Vrms), in order to know whether to turn the inverter on or off. I had tried using this circuit: It works, but it is very slow: it takes more than 500 ms just to detect any change in the voltage, whereas detection needs to keep pace with the input AC sine wave. Is there a better design than this (such as RMS-to-DC converters or voltage-to-frequency converters)? NOTES: The voltage needs to be measured for other processes too, so just using optocouplers with some resistors won't be good (I don't need just a voltage detector). The supply for this circuit is an isolated 24 VDC from a DC-DC converter. "Sense GND" is the ground for this circuit. The low-voltage trigger is approx 8.5 VDC (180 Vrms) and the high-voltage trigger is approx 12.4 VDC (260 Vrms). AI: The problem The task is to detect both over-voltage and under-voltage, and to signal this condition within, let's say, 2 mains cycles, about 40ms. A strategy To detect under-voltage, the easiest "discrete" solution I can think of is to detect the presence of voltage above the minimum 180V, using a comparator, and use that to trigger a resettable monostable (one-shot) multivibrator, with a pulse duration somewhere between 1 and 2 mains cycles, say 30ms. As long as the minimum threshold of 180V is reached, every 20ms, this monostable output will remain active, but a single "missing" cycle, or low-voltage cycle, will permit the monostable to time out. That means you're looking for a timed-out monostable state to indicate an under-voltage condition. Over-voltage detection can employ a similar technique. Here, though, you would trigger a one-shot pulse every time a comparator detects more than 260V. The monostable output pulse should be about 30ms long, so that it remains active for as long as incoming over-voltage pulses (every 20ms) keep re-triggering it.
This time, a timed-out state will indicate NO over-voltage condition. Since we are now detecting instantaneous voltages, the switching thresholds change to \$260V_{RMS} \times \sqrt{2} \approx 360V\$ and \$180V_{RMS} \times \sqrt{2} \approx 250V\$. An implementation Something like this could work: simulate this circuit – Schematic created using CircuitLab You do not need diodes D3, D4, D5 or D6. They are there purely to make the simulation properly emulate the open-collector outputs of the '393 comparators. If you use push-pull output comparators, then you must include these diodes. You do need D1. This prevents comparator inputs being exposed to negative potentials, which could damage the comparators, and would definitely make them misbehave. CMP1 output goes LOW when over-volt is detected. This discharges C2. If no over-voltage condition is detected, C2 is left to charge, and potential \$V_A\$ reaches \$\frac{2}{3}\$ of the 24V supply after \$R_3 \times C_2 = 33ms\$. That brings CMP3 output high, signalling no over-voltage. CMP2 output goes HIGH when under-volt is detected, permitting C1 to charge, and \$V_B\$ to rise. As long as an input exceeding 250V potential arrives within \$R_{10} \times C_1 = 33ms\$, C1 is discharged. Otherwise, \$V_B\$ will reach \$\frac{2}{3}\$ of the supply, causing CMP4 output to go low, signalling under-voltage. CMP3 and CMP4 outputs are connected wire-AND. If either goes low (there's an under- or over-voltage condition), \$V_{OUT}=0V\$. Only when both CMP3 and CMP4 outputs are high does \$V_{OUT}=24V\$, signalling power is within acceptable bounds. Here's a simulation showing \$V_{OUT}\$ responding to a line input which goes in and out of bounds: Some improvements There are a few improvements that can be made. As it is, when the inputs of CMP3 and CMP4 are pulled high, they are outside of their permissible range, so while this won't harm them, they might do strange things. It would be better to take some limiting measures there.
I'll leave it like this for the moment, though, because my brain is tired. I haven't shown them, but you shouldn't omit the usual power supply decoupling capacitors across the supply rails near the ICs. I've made no effort to improve noise immunity. There are a few things one can do to help with this, such as adding small capacitances to remove higher-frequency components of the various signals. It would be prudent to introduce positive feedback around CMP1 and CMP2, for some hysteresis, which would suppress spurious transitions near the switching thresholds. Don't use a single resistor for R9. You should distribute the line voltage between several resistors in series, totalling close to 425kΩ, such as 6 × 68kΩ + 1 × 15kΩ.
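The numbers used in the design above can be sketched: the peak thresholds corresponding to the RMS limits (the 360 V / 250 V figures in the text are rounded design choices), and the time for an RC node to charge to 2/3 of the supply (the R and C values below are illustrative picks that give the 33 ms time constant):

```python
import math

# Peak thresholds corresponding to the RMS limits
v_over_peak = 260 * math.sqrt(2)    # ~368 V (rounded to ~360 V in the text)
v_under_peak = 180 * math.sqrt(2)   # ~255 V (rounded to ~250 V in the text)
print(round(v_over_peak), round(v_under_peak))

# Time for an RC node to charge to 2/3 of the supply: t = RC * ln(3)
R, C = 33e3, 1e-6                   # illustrative values giving RC = 33 ms
t_two_thirds = R * C * math.log(3)
print(f"{t_two_thirds * 1000:.0f} ms")   # ~36 ms, a little over one RC
```

The exact 2/3-supply charge time is about 1.1 time constants, so "after R×C" in the text is a close round-number approximation.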
H: Can't add text after automatically numbering nets on a bus (Eagle 7) In Eagle 7, when I draw a bus line I can quickly assign a number of nets to the bus by using the [N..M] format; for instance, say I want to add data lines D0 through D8, I can name the bus D[0..8]. However, this fails if I add text after the last bracket: if I want to create a bus of differential pairs and I name it LVDS_[0..3]_P, the command will fail, reporting "Unexpected '_'", meaning subsequent characters are not allowed. This is a very minor shortcoming, plus there are infinite ways of manipulating text outside of Eagle to automate text creation (Notepad++ is a favorite of mine), but I was wondering if anyone else has come across this and perceived it as a limitation. Is there a way around it within Eagle? AI: You are correct; there is no in-software workaround beyond ULPs, text editors or scripts. In the end I stopped using buses for things, and find it neater to just use labelled net stubs (a short wire with a label on it). For naming things in bulk, I made an Excel file which generates scripts of NAME commands. This works well if things are on a regular grid (e.g. data pins of a symbol all equally spaced), as you can quickly generate coordinates with a step and offset. A similar approach could be taken by making a ULP that takes x/y/step and some name format.
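The bulk-renaming approach mentioned in the answer can be sketched with a small script that emits one NAME command per net on a regular grid. The net-name pattern, coordinates, and step here are hypothetical, and you should check the exact NAME command syntax against your Eagle version:

```python
def eagle_name_script(pattern, count, x, y_start, y_step):
    """Emit one Eagle NAME command per net on a regular vertical grid.

    Coordinates are plain numbers in whatever grid unit the schematic uses.
    The pattern, coordinates, and step are caller-supplied assumptions.
    """
    return "\n".join(
        f"NAME '{pattern.format(i)}' ({x} {y_start + i * y_step});"
        for i in range(count)
    )

# Four differential-pair nets, stepping downwards on the grid
print(eagle_name_script("LVDS_{}_P", 4, 100, 2000, -100))
```

Paste the generated text into Eagle's command line (or save it as a .scr file and run SCRIPT), the same way the Excel-generated scripts above are used.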
H: Not able to correlate the graph and table for the I2t rating in a fuse datasheet In the datasheet, the I2t rating of a 5A fuse is 0.055 at 10 times In. Using the same value to estimate the trip time for the 10 A condition, I get 0.0022 seconds. In the graph below, it comes to around 0.02 seconds. Where am I going wrong? AI: The I2t rating actually represents the melting "energy", so it should really have been I2Rt, which comes from \$E = \int P \ dt\$. R here is the internal resistance of the fuse, which is assumed to be "constant", so it makes sense to specify I2t. But R doesn't have to be constant; it also depends on a few factors, such as environmental cooling. A fuse is basically a conductor and has a resistance which increases with temperature. When a current passes through it, it dissipates some power, and this dissipation causes the temperature, and therefore the resistance, to increase. For 10 times the nominal current, i.e. 50 Amps, the melting rating is given as 0.055. $$ E_{50}=(I^2 t)_{50} \ R_{50} = 0.055 \ R_{50} $$ where \$R_{50}\$ is the fuse resistance for 50 Amps of current. According to the average time-current curve given in the question, t is about 0.02 seconds for 10 Amps. $$ E_{10}=(I^2 t)_{10} \ R_{10} = 2 \ R_{10} $$ where \$R_{10}\$ is the fuse resistance for 10 Amps of current. If we assume the melting energy ratings are equal, i.e. \$E_{50}=E_{10}\$, then we'll obtain $$ R_{50}\approx 36 \ R_{10} $$ which makes sense, because the temperature rise (and therefore the resistance change) can be much higher at 50 Amps, compared to the one shown in Andy's answer, which apparently has a relatively lower (maybe negligible) resistance change.
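The arithmetic in the answer, written out as a quick check (the 0.02 s point is read off the time-current curve, so treat it as approximate):

```python
i2t_melt_50A = 0.055            # datasheet melting I²t at 10 × In = 50 A
t_trip_10A = 0.02               # seconds, read from the time-current curve
i2t_10A = 10**2 * t_trip_10A    # = 2.0 A²·s

# Equating melting energies E = (I²t)·R gives the resistance ratio
ratio_R50_to_R10 = i2t_10A / i2t_melt_50A
print(round(ratio_R50_to_R10))  # ≈ 36, matching the answer's R_50 ≈ 36 R_10
```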
H: How is the power of a component fixed? How can we say that a bulb is a 100 watt bulb or a heater is a 1000 watt heater, when according to the relations \$P=VI\$, \$V^2/R\$ and \$I^2R\$ the power depends upon the current and voltage applied to the component, while its resistance is fixed? What if I apply a much lower voltage to a 100 watt bulb? The resistance of the bulb is the same and much less current flows through it. Won't that make the power dissipation much less, because of less current through and voltage across it? AI: The power of electrical loads is always quoted at a specified voltage. Examples are 230 V (Europe), 120 V (North America), 12 V (automobile). What if I apply a much lower voltage to a 100 watt bulb? The power demand will reduce and so will the light output. The resistance of the bulb is the same and much less current flows through it. An incandescent light bulb is a poor example, as the resistance varies with filament temperature. The cold (switch-on) current can be up to ten times the steady-state current. That's why they often "pop" at switch-on. Won't that make the power dissipation much less, because of less current through and voltage across it? Yes.
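As a concrete sketch of the \$P = V^2/R\$ point, treating the filament resistance as constant (which, as the answer notes, it is not; the resistance falls when the filament runs cooler, so the real power at reduced voltage would be somewhat higher than this):

```python
P_rated, V_rated = 100, 230       # a 100 W bulb on 230 V mains (assumed)
R_hot = V_rated**2 / P_rated      # ≈ 529 ohms at operating temperature

V_low = 115                       # half the rated voltage
P_low = V_low**2 / R_hot          # P = V²/R with R held constant
print(P_low)                      # 25.0 W: a quarter of the rated power
```

Halving the voltage quarters the power under the constant-R assumption, which is why a "100 W" rating is only meaningful at the specified voltage.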
H: Step-up power supply for motor I know very little about electricity. I am a private detective. I am converting an older surveillance periscope to modern IP cameras. The periscope has several motors that raise/lower it, turn it 360°, and tilt the periscope mirror up and down. These were all wired to a 16-pin cord that also included communication. I am removing everything back to the motors. One motor is a Pittman 9234S006-R1 servo motor (24 VDC, 6151 rpm no load, 5.17 oz-in torque constant, .16/8.11A). The van is 12 volt DC with a 12 volt panel and a 120 volt AC inverter. I want to add a 24 volt step-up power supply for this motor. I obviously want 24 volt DC power, but I don't understand the amps or watts. Can anyone explain this to me? Is more watts and amps better? AI: Based on your description I assume there are 4 motors, 23W each. Although it would seem logical to get a power supply that covers the maximum power needed, it's often unnecessary. The real power draw of the motor depends on the load it experiences; in a periscope, it probably won't be much. If I had to guess, probably around 12W or less. If you plan on using all of the motors simultaneously, get around a 50-60W power supply; if only one motor at a time will be used, then 20W will suffice. It's good practice to choose a power supply about 20-30% stronger than the typical power draw. As to how amps and watts work, the simplified version is: Volts - how fast the motor will spin. Amps - how much force it will apply to the shaft. Watts - the power of the motor = the product of force and speed (Volts × Amps). More amps and watts is generally better; just bear in mind that the bigger the disproportion between maximum delivered and used power, the lower the efficiency of the converter. A cheap option is to go with something like this, but there is a risk that the output may vary from what is in the datasheet.
https://www.amazon.com/Converter-Regulator-Adapter-Vehicle-DC9-20V/dp/B01EFUHFW6/ref=sr_1_19?keywords=12v%2Bto%2B24v%2Bconverter&qid=1705505615&sr=8-19&th=1
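The sizing logic in the answer can be written out (the 12 W realistic draw is the answer's guess, and the margin is the 20-30% rule of thumb it mentions):

```python
n_motors = 4
p_realistic = 12     # W per motor, estimated light-load draw (a guess above)
margin = 1.25        # ~20-30% headroom over the typical draw

all_at_once = n_motors * p_realistic * margin   # -> 60 W, in the 50-60 W range
one_at_a_time = p_realistic * margin            # -> 15 W, so a 20 W supply is comfortable
print(all_at_once, one_at_a_time)
```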
H: What causes these spikes in an LTspice circuit analysis of a full adder? For the following full adder circuit: I get the following output for s (I have the same problem with COUT): The graph itself is correct, but those spikes should not be there. I know that I have to change the sine wave somehow but I can't figure it out. How do I avoid getting those spikes? AI: These spikes are called "glitches", which occur when signals propagate through combinatorial logic because the gates aren't infinitely fast. As a result, the circuit goes through intermediate states which might temporarily cause incorrect outputs. This is completely expected and not an error in your circuit or the simulation. The circuit just needs a tiny bit of time until it has calculated the correct result. Your circuit works perfectly fine. If you really want the spikes to be gone, you need to make the circuit synchronous by adding input and output registers, as well as a clock signal to drive them. This is likely overkill for such a simple experiment.
H: Ground rod in an area surrounded by trees I have some questions about ground rod installation. If a small part of the rod is kept outside the soil, wouldn't that be a low-resistance point that lightning could choose to strike? Does it need to be covered with something to increase its resistance to the outside, or must there always be a lightning rod nearby with a lower resistance? I'm asking because I'll put the rod in a remote place surrounded by trees, so a lightning strike there would be a sure fire. Also, I'm not sure where to put a lightning rod in a place like that. AI: If a small part of the rod is kept outside the soil, wouldn't that be a low-resistance point that lightning could choose to strike? If the ground rod were out in the middle of a field where it's the tallest thing around, then yeah, probably. If the ground rod were surrounded by trees, I would expect lightning to always hit the trees, not the rod. After all, if the rod is next to a living 20 foot tree, then the rod has 20 feet of air above it that the tree does not have, and it's much easier for the lightning to go through 20 feet of live tree than 20 feet of air. Does it need to be covered with something to increase its resistance to the outside, or must there always be a lightning rod nearby with a lower resistance? I can't think of any reason why you would need either of those things. What's the hazard that you're worried about? I'm asking because I'll put the rod in a remote place surrounded by trees, so a lightning strike there would be a sure fire. It's true that if a tree gets struck by lightning, that may cause a fire. What does that have to do with a ground rod? Are you concerned that the ground rod may somehow increase the chance that a tree would get struck by lightning? As far as I know, a ground rod would have no effect at all on the lightning risk.
H: Why are the lookup tables in FPGAs small? An FPGA can be seen (visually at least) as a matrix of cells. Each cell has a LUT (look-up table) inside, implemented with SRAM and MUX. Why does the size of such a LUT (and hence of the SRAM) need to be kept small in FPGAs (usually less than 10 input bits)? Please correct me if I am saying something wrong. AI: The physical size of a binary LUT is exponential in its number of inputs. In particular, every time you add another input, the size doubles. To go from 10 inputs to 20, the size of each LUT would go up by a factor of 1024, and you'd barely be able to fit any of them onto the FPGA anymore. You don't typically see LUTs with more than 6 inputs on FPGAs, as this already requires 64 bits of storage per LUT. It's much more efficient to compose circuits from multiple LUTs than to put everything into a single huge LUT. As a simple example, let's say that you want to build a 16-input AND gate on an FPGA. If the FPGA has 16-input LUTs, you can do it in a single LUT, which would contain 2^16 = 65536 SRAM cells. If, however, the FPGA only has 4-input LUTs, you'll need to build a LUT cascade. Each LUT, when configured as an AND gate, compresses 4 inputs down to 1 output. To get from our total of 16 inputs down to 1 output, we need two layers of compression. This requires 5 LUTs in total. Each 4-input LUT consists of 2^4 = 16 SRAM cells, so in total, our 16-input AND gate requires 5*16 = 80 SRAM cells in this case. By shrinking our LUT size from 16 inputs to 4, we were able to shrink the silicon area used by our circuit from 65536 SRAM cells down to just 80, even though we used more of those smaller LUTs. Additionally, FPGAs have mechanisms to combine multiple smaller LUTs into larger ones if a circuit really requires big LUTs (for example to store constant data). 
Modern Xilinx FPGAs have 5-input LUTs, but can internally connect multiple of them together to form LUTs with up to 8 inputs within a single logic slice (at least if I remember this correctly). Bigger ones can be created across multiple slices, too, by using the FPGA's routing fabric.
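The storage-cost argument above can be made concrete with the 16-input AND example:

```python
def lut_sram_cells(n_inputs):
    # A LUT stores one configuration bit per input combination
    return 2 ** n_inputs

# One giant 16-input LUT
print(lut_sram_cells(16))           # 65536 SRAM cells

# Cascade of 4-input LUTs implementing a 16-input AND:
# layer 1: four LUTs reduce 16 inputs to 4, layer 2: one LUT reduces 4 to 1
n_luts = 4 + 1
print(n_luts * lut_sram_cells(4))   # 80 SRAM cells
```

The doubling-per-input growth of `2 ** n_inputs` is why FPGA vendors stop at small LUTs and rely on routing to compose larger functions.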
H: 433 MHz crystal 3-pin pinout? What are the pins A, B, C of this 433 MHz crystal oscillator? https://nexelectronics.in/wp-content/uploads/2023/01/433-MHZ-CRYSTAL-OSCILLATOR-1.jpg I suppose these are input, output, and ground. Could you explain, or give me a link to a datasheet on the internet? I have just obtained this crystal and want to test it in a project, but I could not find a datasheet or figure out which pin serves what purpose. AI: The package is called "TO-39" or "TO39". I'm sure you'll find a lot of matching products if you google "R433 TO-39" or "R433 TO39". As for the pinout, A must be the ground connection (think of it like chassis ground), and B and C are the crystal element pins.
H: USB on ATmega32U4 powered at 3.3V I have an already-designed board that uses an ATmega32U4 (a board of our own design). It is USB-powered at 5V and it has an external 16MHz crystal. Below is a screen capture of the relevant portion of the schematic: The firmware is built as an Arduino project (a single .ino file), based on the Pro Micro. I have several reasons to move the design to 3.3V (still USB-powered, so I can simply add an LDO to generate the 3.3V); thus, the external crystal will have to be 8MHz. However, my question is about the internally-generated 48MHz USB clock: I don't see anything in the .ino file related to setting up the MCU's PLL; would that be part of the bootloader? Would I need to adjust the bootloader, or obtain and flash a bootloader image built specifically for an 8MHz clock? AI: I think the bootloader uses this module to initialize the USB hardware: USBCore.cpp This is the relevant code fragment:

// ATmega32U4
#if defined(PINDIV)
#if F_CPU == 16000000UL
	PLLCSR |= (1<<PINDIV);   // Need 16 MHz xtal
#elif F_CPU == 8000000UL
	PLLCSR &= ~(1<<PINDIV);  // Need 8 MHz xtal
#else
#error "Clock rate of F_CPU not supported"
#endif
#endif

I think you must recompile the bootloader with the option F_CPU = 8000000 in the caterina Makefile, or try to use the caterina-LilyPadUSB bootloader. That seems to be compatible with your design and expects an 8 MHz crystal.
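The compile-time selection in that fragment can be modeled off-target to see what ends up in PLLCSR. This is a plain-Python sketch, not firmware; the bit positions (PINDIV=4, PLLE=1, PLOCK=0) are from my reading of the ATmega32U4 datasheet and should be double-checked there before relying on them:

```python
# Assumed ATmega32U4 PLLCSR bit positions (verify against the datasheet)
PINDIV = 4   # PLL input prescaler: 1 = divide 16 MHz crystal by 2, 0 = 8 MHz direct
PLLE   = 1   # PLL enable
PLOCK  = 0   # PLL lock flag (read-only on the real part)

def pll_csr_for(f_cpu: int) -> int:
    """Mirror the #if F_CPU logic from USBCore.cpp and return the PLLCSR value."""
    csr = 0
    if f_cpu == 16_000_000:
        csr |= (1 << PINDIV)   # 16 MHz xtal: prescale by 2 before the PLL
    elif f_cpu == 8_000_000:
        csr &= ~(1 << PINDIV)  # 8 MHz xtal: feed the PLL directly
    else:
        raise ValueError("Clock rate of F_CPU not supported")
    csr |= (1 << PLLE)         # enable the PLL (firmware then polls PLOCK)
    return csr

print(hex(pll_csr_for(8_000_000)))    # PINDIV clear, PLLE set
print(hex(pll_csr_for(16_000_000)))   # PINDIV and PLLE set
```

Either way, the PLL must still produce 48 MHz for USB, which is why the crystal choice is limited to 8 or 16 MHz.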
H: Differential pairs / Single ended and the need for baluns Excuse any blatant mistakes, I'm clearly new to this. Anyway, from a purely transmission line perspective, it seems you can have a single ended line and a differential pair. I see the single ended case (like a coax line), where the return path is the shield/ground; at first, looking at it as if it were DC, it would mean I would read 0 potential difference between the shield and the ground of, say, the radio transmitter. But in the RF world I would assume that for the field between the shield and center piece to propagate in coax or on a PCB, the line would sort of induce current on the return path/shield, meaning at each point the shield wouldn't actually be at 0 V (if you probed into where the skin effect is) relative to the radio transmitter, but sort of opposite to the center feed / trace's voltage. Now when I think of differential pairs, I don't actually see any difference in my mind. When I see images of waveforms online, it's pretty clear the point is that these opposing signals swing around a reference point, but I don't see how that is any different. I noticed that an RF transceiver (can't remember the name, sorry) had its transmit outputs as N and P, requiring a balun for coax, so I'm clearly missing something. My only thought is, say, the N would instantaneously output +4V (wrt ground) whilst the P outputs -4V (wrt ground), but again I don't see how this changes things along the transmission line. Essentially I'd like to know the exact difference between the single end and differential pair, and if this is dependent on thinking in HF terms or applies in simple DC cases too. AI: A load is balanced if both terminals of the load have the same impedance to ground. A signal generator / transmitter / line driver is balanced likewise, that is, both terminals of the signal generator / transmitter / line driver have the same impedance to ground.
A transmission line is balanced if it is the case that if the two conductors were to be driven in parallel with a single voltage signal, the current in each conductor would be the same. Examples would include twisted pair and twin-lead. A transmission line is unbalanced if it is the case that if the two conductors were to be driven in parallel with a single voltage signal, the currents in the two conductors would be different. An example would be a coaxial cable. simulate this circuit – Schematic created using CircuitLab I am using the definitions for balanced and unbalanced transmission lines used in imbalance difference modeling of mode conversion. Essentially I'd like to know the exact difference between the single end and differential pair I believe that what you are referring to as "single end" and "differential pair" are unbalanced and balanced transmission lines. The physical distinction between them is related to how current would be divided between the conductors if the conductors were driven in parallel. A center-fed dipole antenna is a balanced load. A coaxial cable is an unbalanced transmission line. When connecting them together, a balun is used. In normal operation at high frequencies, a coaxial cable has current on the outer surface of the inner conductor, and on the inner surface of the outer conductor. If a balun is not used to connect the coaxial cable to the balanced load, current will flow on the outer surface of the outer conductor. This will make the coaxial cable an unintended antenna. It will radiate power, but not necessarily in the desired direction or polarization of the dipole antenna. Similarly, the currents in the arms of the dipole antenna will not be equal, and so the antenna will radiate less power. ...and if this is dependent on thinking in HF terms or simple DC cases. The effects of transmission line imbalance become much more pronounced at higher frequencies.
However, twisted pair and coaxial cables have different noise properties at audio frequencies. Twisted pair is good for rejecting magnetically induced noise. Twisted pair will accept electric field induced noise but this will primarily be common mode noise. If differential signaling is used, this common mode noise can be attenuated. Coaxial cable is susceptible to noise from electric fields, but if the outer conductor is grounded, the noise will be shunted away. Thus coaxial cable works better with single-ended signaling. However, if 2 coaxial cables are used, differential signaling is obviously possible.
H: TO-220 vs TO-3 advantages/disadvantages Simple question: What are the advantages and disadvantages of the TO-220 package vs the TO-3 package in different applications/power requirements? Is there a best route? AI: The TO-3 package is obsolete. It was introduced in 1956 along with the first germanium power transistor. To its credit, it survived to this day, which is quite the achievement. Pin spacing was chosen so it would fit in existing vacuum tube sockets. Its distinguishing features compared to other packages of the day were good heat transfer to a heat sink, while being hermetic and preventing water ingress. It was a huge upgrade compared to this: But compared to modern packages it's terrible. Even in high thermal conductivity materials like aluminium, in a sheet, heat diffusion works much better through the sheet (from one side to the other) than across it (along the length of the sheet), because the former involves a much larger cross-section to conduct heat. If you mount it on an L-bracket... the first drawback is you need an L-bracket and a bunch of extra screws, hardware, tapped holes and thermal grease, which adds cost. And heat has to travel along the wrong direction of the bracket to reach the heat sink. (pic source) More modern packages like TO-247, TO-220, TO-3P are mounted directly on the heatsink, without the extra thermal resistance and cost of the L-bracket. Now if you put the TO-3 on the heat sink directly, heat still has to travel a long way in the wrong direction in the flat aluminium piece behind the transistor. Besides that, it has many other drawbacks: There is no third pin, it's the transistor package itself. If it is exposed at the rear of the heat sink that's a short circuit or electrocution risk, so you need a plastic cap. Even if you put a plastic cap on, the screws are still live.
If you see a heat sink at the back of an enclosure with a bunch of TO3s without plastic caps, chances are good every one of them is at a different voltage, and any contact with a conductive object between the TO3s and the heat sink will result in something blowing up. If it's a high voltage device, keep your fingers away. It's just... bad. You also need to connect this "pin" to your PCB via another cumbersome and fiddly part, like an eyelet washer or copper standoff with nuts and washers. Assembly is time consuming. In the old days wires were used, but that only works at low frequency due to inductance. This is a Marantz 250 amplifier from 1973. Notice the custom made metal enclosure to protect exposed TO3s from short circuits... There are further drawbacks. Depending on how the "third pin" (the package) is connected to the PCB, differential thermal expansion versus the other two pins can create stress that will pop the pads off the board. The thickness of the heat sink plate or L-bracket is limited by the length of the pins. The package takes a ton of space on the PCB which can't be used for components. It has high inductance due to long traces and wide pin spacing, so it can't be used for high-speed switching. It forces the board to be parallel to the TO3 mounting surface, which is inconvenient (most heat sinks are vertical for better convection, most boards are horizontal). You can't separate the board from the heatsink without desoldering, which makes maintenance problematic. It is more expensive than plastic packages, with more expensive and labor-intensive mounting. And so on. Overall, for high power, TO247 and TO3P are just better by pretty much every criterion. For low power, TO220 is much cheaper and easier to use. These packages have none of the drawbacks mentioned above. Some advantages of TO3: being all metal, it can withstand higher temperatures and be hermetically sealed. It's also bigger, so it is still used to house some hybrid circuits.
H: Limit of applicability of "quasistatic"/capacitive picture of devices When analyzing the transients associated with switching in semiconductor devices, in textbook treatments one often sees the time dependence considered by including (potentially nonlinear) capacitors in the device. My question here is about trying to understand what is tacit in this, and why we can do this rather than having to solve the full equation set associated with carrier transport in the semiconductor in order to, eventually, determine \$i(t)\$ (defined as the relevant instantaneous current in the given device). My suspicion is that this approximation works to the extent that the device "responds much more quickly" than the signal of interest driving the device changes. That is, if we are in a quasistatic limit such that at each point during the transition the semiconductor is in a new electrostatic situation, then we can use this picture of capacitors rather than going all the way back to the equations. Is this suspicion correct? The reason I am unsure is that, in the treatments of devices I have seen, there seems to be the suggestion that we can use this capacitor picture up to very high frequencies (in terms of the frequency content of the signal driving the device). For example, let's consider a modern digital system with a gate which is driven by a signal with an edge rate of 10 ps (which I don't even think is that aggressive). We thus have the voltages throughout this structure changing on the order of picoseconds, which I suspect is on the order of the relevant recombination times of carriers (by recombination time I mean whatever is the fastest mechanism for getting carriers to where they need to be -- this might be e.g. a transit time to a source of carriers). Why then is this quasistatic picture still appropriate?
As a specific example of my general discussion above: For pn junction, we consider the charge which must flow from the surrounding battery in order to satisfy the electrostatics at each instant. Thus, \$i_e(t) = dq(t)/dt\$ where \$q(t)\$ depends on \$v(t)\$ and this gives the contribution to the total current associated with the switching (the nonlinear capacitors in general) (it of course adds with what I’ll call \$i_o(t)\$ which is the pn junction current from the steady state case for that given value of \$v(t)\$: \$i(t) = i_e + i_o\$). This \$q(t)\$ is the aforementioned charge associated with the nonlinear capacitor used to model the device. But it seems to me that we are tacitly saying that the rate of change of the driving \$v(t)\$ must be slow enough so that at each instant the electrostatics are indeed fixed. In a sense, the limit of our analysis is that the process is quasistatic with respect to our pn junction, but I am struggling to understand with respect to what semiconductor system time constant \$\tau_i\$ the characteristic time of the driving voltage \$T\$ must be large. As I wrote above, I suspect it's whatever the fastest recombination time is. That is, I imagine that for small enough \$T\$ (so that this \$T\$ gets to be on the order of the shortest recombination time), we eventually have to go back to the fully time-dependent equations governing carrier transport etc. AI: The minority carrier lifetime in modern digital CMOS devices is actually much longer than the cycle time of the digital circuits. From The Minority Carrier Lifetime in Silicon Wafer, on Science Shot: The lifetime is quite unpredictable and difficult to control. It can vary by several orders of magnitude, from approximately 1 ns to 1 ms in common silicon solar cell materials. The highest value ever measured is 32ms, for undoped silicon, and the lowest is 1 ns, for heavily doped silicon. 
Given that CMOS devices can achieve gate propagation delay times on the order of 10 ps and lower, we can infer that carriers do not need to recombine for a MOSFET to stop conducting. Instead, it is sufficient for those carriers to be swept out of the device's active area by electrostatic forces. This process is only limited by the size of the channel and the velocity of the carriers. From Introduction to Power MOSFETs and Their Applications by National Semiconductor: A major advantage of the power MOSFET is its very fast switching speeds. The drain current is strictly proportional to gate voltage so that the theoretically perfect device could switch in 50 ps–200 ps, the time it takes the carriers to flow from source to drain. Note that this quote talks about power MOSFETs, which typically have much longer channels and lower carrier velocity than digital CMOS devices, hence the slow switching time of 50ps. The same applies to PN junctions. A switching diode, for example, will conduct for some time after the forward voltage across it has been removed, which is called reverse recovery. During this time, a reverse current flows through the diode. The duration of reverse recovery isn't fixed, though, but rather depends on how fast the external circuitry attached to the diode can remove the reverse recovery charge from the junction. The higher the reverse current, the faster the carriers making up that charge are swept out of the junction, speeding up the turn-off process. Similarly, if you want to turn a BJT off faster, you can short its base directly to ground (or otherwise inject a negative current into the base) to actively evacuate carriers from the base. As a rough estimate, let's consider a 20nm long N-MOSFET channel in velocity saturation with electrons going through it at approximately 10^5 m/s (number from Wikipedia). That means an electron travels through the channel in about 0.2 ps. 
Charging and discharging the MOSFET gate at a rate where this becomes significant will be quite a challenge. Additionally, even if you could change the potential on the gate fast enough for this carrier evacuation time to matter, you could still describe the process as a capacitor being charged and discharged by a constant current source due to velocity saturation. In short: You don't need to wait for carriers to recombine and disappear to turn a semiconductor device off - instead, you can shove the carriers somewhere else where they don't cause any trouble. The electrostatics within the semiconductor can be considered "fixed" because the signals are much faster than the natural recombination of carriers.
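Putting numbers on the rough estimate above (the channel length and saturation velocity are the assumed values from the answer, not measurements of any specific process):

```python
channel_length = 20e-9   # m, assumed short-channel N-MOSFET from the answer
v_sat = 1e5              # m/s, electron saturation velocity (figure quoted from Wikipedia)
lifetime_min = 1e-9      # s, shortest minority-carrier lifetime quoted above

transit_time = channel_length / v_sat
print(f"transit time = {transit_time * 1e12:.1f} ps")              # 0.2 ps
print(f"lifetime / transit = {lifetime_min / transit_time:.0f}x")  # 5000x
```

Even against the shortest quoted lifetime (1 ns, heavily doped silicon), sweeping carriers out is thousands of times faster than waiting for them to recombine, which is why the quasistatic picture survives 10 ps edge rates.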
H: Ferrite Loopstick Antenna Grounding I've built a simple tuned frequency radio based on a description in Ronald Quan's book. I have two questions about the tuning part of the circuit. The radio has a ferrite core antenna with two separate coils. The primary (most turns) is connected to a variable capacitor and the secondary coil is connected to the amplification part of the circuit. I understand that the variable capacitor and primary coil form a resonant "tank" circuit and that the ferrite bar acts as a transformer as well as an antenna, inducing a signal at the resonant frequency in the secondary coil. My first question is why is the LC tank circuit grounded? I've seen this consistently in other AM circuit diagrams as well but I don't know the reason for it. Wouldn't this resonant circuit work just as well if it was simply a variable cap connected to the primary coil with no other connections? My second question is what is C1, the fixed 1 microfarad capacitor, doing in the resonant circuit? Connected as it is in series with the much smaller variable capacitor, it doesn't seem to have much effect on the resonant frequency. Quan seems to include these fixed capacitors in his other radio designs as well, see this question for example, but I haven't seen them otherwise. The pictured circuit comes from Build Your Own Transistor Radios: A Hobbyist's Guide to High-Performance and Low-Powered Radio Circuits (2012) by Ronald Quan, p. 57. AI: My first question is why is the LC tank circuit grounded? You're mostly right in that grounding the LC tank isn't absolutely necessary, but you may find that grounding it helps for a couple of reasons... The shell of variable capacitor VC1 may be affected by hand capacitance (if not grounded). As you approach VC1, hand capacitance de-tunes the LC tuned circuit. The variable capacitor's rotating plates are most often the ones to ground. The two coils are separate but adjacent.
You want the "cold" end of the LC tuned circuit adjacent to the small coil to have least RF voltage for high-Q. The far end of the larger coil goes to the "hot" (ungrounded) end of VC1....the fixed plates. My second question is what is C1, the fixed 1 microfarad capacitor, doing in the resonant circuit? This large capacitor is effectively AC GROUND as far as radio signals are concerned. It ensures that one end of the small coil has no signal, while the base end of the small coil sends all the signal power to the transistor base. Why not just ground it?...DC bias current for the transistor base flows through this small coil. Transistor base bias can't be zero, as it would be if grounded. In some more sophisticated AM radio, automatic gain control would be fed back to bias this transistor, changing its base bias current. In this simple radio, base bias is fixed constant by those two series diodes. You might substitute a single RED LED for these diodes, and use it as a pilot light. Understand that this is a schematic fragment, missing bias components for the missing transistor.
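A quick sanity check of the "AC ground" claim: the reactance of the 1 µF capacitor across the AM broadcast band (band edges assumed as roughly 530-1700 kHz) is only a tiny fraction of an ohm:

```python
import math

C1 = 1e-6  # F, the fixed capacitor in series with the tuning capacitor

def reactance(f_hz: float, c_farads: float) -> float:
    """Capacitive reactance Xc = 1 / (2*pi*f*C), in ohms."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

for f in (530e3, 1000e3, 1700e3):   # roughly the AM broadcast band
    print(f"{f/1e3:6.0f} kHz: Xc = {reactance(f, C1):.3f} ohm")
```

Fractions of an ohm at signal frequencies, so C1 pins that end of the small coil to AC ground while still blocking the DC base-bias current.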
H: Assign statement in testbench doesn't seem to work as it should I have a small Verilog code example asked as an interview question. I am not sure why it prints "p=01" but not "00", since assign should update p as soon as the value of q changes. Or does it work differently in a testbench?

module tb;
  reg [1:0] p, q;
  assign p = q;
  initial begin
    q = 1;
    #100 q = 0;
    $display("p = %b", p);
  end
endmodule

I am using Mentor Questa on EDA Playground. Even if I make p a wire, the result is the same. AI: This interview question looks like it was taken directly from IEEE Std 1800-2017, section 4.8 Race conditions: Because the execution of expression evaluation and net update events may be intermingled, race conditions are possible The simulator is correct in displaying either a 1 or a 0. The assignment of 0 to q schedules an update event for p. The simulator may either continue and execute the $display task, or execute the update for p followed by the $display task. In your case, a simulator may display 00 or 01. One way to avoid the race is to delay the $display:

initial begin
  q = 1;
  #100 q = 0;
  #1;
  $display("p = %b", p);
end

In writing Verilog code, always strive to avoid race conditions. It does not matter if p is declared as a reg or a wire. Note that declaring p as a reg while driving it with a continuous assignment is only allowed in your code example if SystemVerilog features are enabled in your simulator.
H: Why does light doping imply a large temperature coefficient? In a discussion about how one can make integrated resistors in a given IC technology, Gray, Hurst, Lewis, and Meyer (Analysis and Design of Analog Integrated Circuits) remark that if we want to use a lightly-doped layer (e.g. a layer targeting the base region in a bipolar technology) then "because the material making up the resistor itself is relatively lightly doped, the resistance displays a relatively large variation with temperature." I am racking my brain for why low doping implies a large variation with temperature. My first instinct was that there was some allusion to ionization of the dopants as a function of temperature, but as far as I know this ionization fraction is independent of dopant density (indeed, one multiplies this fraction by the nominal number of dopants to get the number of ionized dopants), but perhaps that is wrong. At any rate, what are they alluding to? AI: With high doping levels, the sheer number of available carriers keeps the resistance low. With light doping, however, the relatively small number of carriers gives more resistance to the path. However, changes in temperature will affect the number of carriers by exciting some electrons into the conduction band (for N-type doping, P-type follows similar logic). This might not have much effect on the already generously doped areas which will hardly notice the addition of more carriers, but in a lightly doped material the number of carriers will differ significantly over temperature. Note that the number of additional carriers added by a temperature rise is fairly independent of doping, it's just more significant when it occurs in a material which has fewer carriers to begin with.
H: OrCAD - Splitting signals How can I split signals like in the picture below in OrCAD? I tried to find this, but without success. AI: Those are voltage ports and it's essentially like drawing a wire between two points. If you need another point, just add one with the same name, or attach another set with a different name to the same node.
H: Where is the body terminal of a FinFET connected usually in digital circuits and in analog circuits? What is the usual practice in industry regarding the body terminal of a FinFET? Is it connected to the source to eliminate body effect? Or is it connected either to ground (NFET) or the supply (PFET)? Is this matter handled differently in digital circuits and analog circuits? AI: A FinFET is just a specific structure of MOSFET. It's connected the same way as a normal planar MOSFET. The only difference is that the channel is surrounded by the gate on 3 sides (not just one side as in a planar FET), hence it can be smaller (this "multi-gate" layout mitigates short channel effects). You'd usually connect the body to the most negative or positive voltage in the circuit (depending on whether it's an N-channel or P-channel device), but this is not necessarily always the case (there are some very specific cases where you'd want to do otherwise). EDIT: in standard processes (unless you use a special isolated NMOS), for N-channel devices you have no choice but to connect the body to ground (Vss), because in an IC all N-channel MOSFETs share the common P-doped substrate, meaning their bodies are (nearly) shorted together.
H: I need the name of an electrical spring switch with a thin rod inside that completes a circuit upon impact I had a transparent ball that lit up when you bounced it. Inside it had a tall thin stiff spring, and inside the spring was a metal rod. When the ball bounced, the spring moved to contact the rod and completed the circuit. I could probably create something like it but would rather buy a commercial version. Googling has not helped me even find a matching picture. AI: Try "vibration sensor spring".
H: Confusion regarding solar cells I am very confused by these two videos. This video mentions, from 1 minute onwards, that a solar cell has a PN junction. This video mentions N type solar panels and P type solar panels. From what I am able to understand, N type solar panels do not have a PN junction; they only have N type material. Likewise, P type solar panels do not have a PN junction and only have P type material. From the first video, I understood that each and every solar panel has a PN junction. AI: All photovoltaic solar panels are PN junctions. One side of the cell is a P type semiconductor while the other side is an N type semiconductor. The P type and N type solar panels referred to in the second video also use PN junction solar cells - both types have PN junctions. The difference between the two lies in which side of the panel faces the light source. In P type solar cells, the P semiconductor faces the light source. In N type solar cells, the N semiconductor faces the light source. Which side needs to face the light source depends on the semiconductor materials used in making the solar cells. From what I have read, P type panels are more common and are prone to degradation over time, but they are better for use in space. Much early development of solar cells was done for space applications. N type panels are turning out to be more efficient and less prone to degrading in applications here on Earth.
H: Output voltage not changing with a constant current driver If I attach a voltmeter to the output of this constant current driver and lower or raise the input voltage (100-230 V), the output voltage remains constant at 56 V. Should it not fluctuate between 40-58 V? AI: This label means the driver will output 350mA constant current, within an output voltage range of 40-58V. So you can use it for a series string of LEDs that has a total forward voltage of 40-58V at 350mA. For example, 16 white LEDs in series. When there is no load (or just a multimeter) it will sit at its maximum output voltage. If I ... lower or raise the input voltage (100-230V), the output voltage remains constant at 56V. The driver is supposed to keep its output constant in spite of input voltage variations. That avoids light flickering.
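The 16-LED example can be sanity-checked with a short sketch; the per-LED forward voltage range below is an assumption for generic white LEDs at 350 mA, so check the actual LED datasheet:

```python
V_MIN, V_MAX = 40.0, 58.0   # driver output compliance window, volts (from the label)
VF_LO, VF_HI = 3.0, 3.4     # assumed white-LED forward voltage spread at 350 mA

# A string of n LEDs is usable if its total forward voltage stays inside the
# compliance window across the whole assumed per-LED Vf spread.
fits = [n for n in range(1, 30)
        if n * VF_LO >= V_MIN and n * VF_HI <= V_MAX]
print(fits)   # [14, 15, 16, 17] with these assumptions
```

So 16 LEDs in series lands comfortably inside the 40-58 V window under these assumptions.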
H: Why do manufacturers label differently seemingly identical components? I am not sure if this is the relevant place to ask this (I think it is). I am still relatively new to the electronic design world, but I notice that a series of parts (example - Torex XCLxxx) can feature ICs whose specifications can be 90% identical. This is understandable, but I recently had to replace an IC which was discontinued. Apparently, a year or so later, the same manufacturer (not sure if I'm allowed to mention specifics) released a 'new' component, which conveniently can be a pin-to-pin perfect replacement for the discontinued one. After looking into the datasheet of this 'new' component and meticulously comparing it with the other one, the only differences I can find are a 0.04 mm increase in height and a slight difference in some internal resistances. Now it seems to me that it would be much easier to just redefine the old datasheet and continue selling the 'new' component under the old name without announcing its discontinuation (and save clients the hassle of searching for replacements). Is this a common occurrence? Does it have some sort of strategic benefit? AI: So there are differences. Thankfully, they decided not to use the same name for different things; that would be much more confusing than using different names for similar things. The strategic benefit is not annoying customers by shipping slightly different products under the same label, which they then can't tell apart. If you needed the part with the new resistances, would you want to buy the old part? If not, what would you do if you saw 100000 pieces of the right part number in stock somewhere? Assume it's new stock? Old stock?
H: Capacitor polarity marker significance What I am not asking is what capacitor polarization means, or how to apply it in circuits. Is there any physical significance to whether a capacitor uses its positive or negative lead to mark its polarity? Prompted by Andy aka's comment on this answer. AI: The short answer is "no". Generally, electrolytic capacitors with non-solid (liquid) electrolytes (e.g. standard "wet" aluminium electrolytics) have their negative lead marked, whilst ones with solid electrolytes (e.g. tantalum, aluminium polymer) have their positive lead marked. (That's a difference, but it may just be long-time tradition, I don't know.) But note that the marking has nothing to do with having a solid or liquid electrolyte, i.e. it's not a strict requirement or a physical limitation to put the mark on the negative lead just because the electrolyte is liquid. Electrolytic capacitors with solid electrolytes (e.g. tantalum) are more sensitive to surges/spikes due to their electron conductivity (aluminium electrolytic capacitors generally withstand twice the rated voltage for a few seconds, but tantalums don't, for example). It might make some sense to differentiate these with a marked positive terminal instead of a negative one.
H: Transfer function of a heating system I wanted to design a small temperature-regulated DC heater for educational purposes in the field of control theory. The idea is to heat up a resistive element through a buck converter of some sort and to regulate it with a control loop involving a PID controller and a linearized thermistor for the feedback, as pictured in the following block diagram: I naturally wanted to first find the transfer function of that heating system analytically, which incorporates both electrical and thermal equations; the schematic of the system I came up with looks like the following: Please note that for the sake of simplicity: I am oversimplifying the electrical side; "ControlledVoltage" should eventually be driven by a buck converter, but as this isn't related to my current issue I'll assume that it is directly proportional to the output voltage of the PID controller ("Vpid"). For the thermal side I'm using the thermal-electrical analogy to model the thermal behavior of the heating system through an electrical circuit. So, it should be easy to find the transfer function of the thermal side, as it's nothing more than an equivalent RC circuit with the thermal flux ϕ as an input and the temperature of the heating system as an output. However, when attempting to find the transfer function of the electrical side with the controlled voltage as an input and the dissipated power in the resistor as an output, I obviously find: $$P = \frac{V^2}{R}$$ This is a non-linear equation, which is a huge problem since transfer functions by definition describe linear time-invariant systems, meaning that I cannot use this equation to model my transfer function and need to find some way around it. So my question might appear obvious now: how can I find the transfer function of my system?
I am obviously not the first person in the world to attempt to design a regulated DC heater, and as such probably also not the first one to stumble across this issue. Are there some tricks to linearize the system, some equations I missed, something I simply didn't understand about transfer functions, or maybe an entirely different design philosophy I missed? Please note that I am aware that I could build the system, find the impulse response experimentally, curve-fit it and eventually find the transfer function. But again, I'm doing this little project for educational purposes and I would greatly appreciate finding the transfer function analytically. Any help would be greatly appreciated. AI: There are several ways to handle the non-linearity implied by the power being proportional to \$V^2\$. Linearise around an operating point. For small disturbances, power will be roughly proportional to voltage. However, this solves the problem only in the region of that particular point. This is a well-used approach, often used to get a gm figure for bipolar transistors at a particular bias point. However, you still need to make sure the system is stable at all heating levels; you will be powering on with the system cold, see the next point. Make the system stable at a range of gains. If you want the system to operate with the different gains you get from operating at high or low voltage, then you need to be very generous with your gain and phase margins, and tolerant of slower-than-ideal settling times over a range of settings. This can be very easy to do if you use Bode plots for design: you simply move the low-pass elements up, and the high-pass ones down in frequency, to give you an extended 6dB/octave slope region in the middle. Aim for over-damping. With a heating system, depending on the tolerance of your load to an over-heat (for instance biological samples or soft plastics), you must have no possibility of an overshoot.
As you have a heater, lose the buck converter and use simpler PWM; now you have a linear transfer function between PWM setting and heater output power (power = full_power * PWM_fraction). Take advantage of the fact that a heater time constant will be seconds to tens of seconds, so the PWM can be very slow. My oven controller uses a 20-second PWM period. I had a precision lab hotplate that managed quite happily with a two-second period.
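The two workable approaches above can be sketched numerically. This is only an illustration: the 10 Ω heater resistance and 12 V operating point are assumed values, not from the question.

```python
R = 10.0    # heater resistance in ohms (assumed for illustration)
V0 = 12.0   # chosen operating point, volts

# 1) Linearisation around V0: P(V) = V^2/R, so dP/dV at V0 is 2*V0/R.
#    For a small deviation dv, P(V0 + dv) ~= P(V0) + (2*V0/R)*dv.
gain = 2 * V0 / R                       # small-signal gain, W per volt
dv = 0.1
p_exact = (V0 + dv) ** 2 / R
p_linear = V0 ** 2 / R + gain * dv
print(gain, p_exact - p_linear)         # error is tiny near the operating point

# 2) Slow PWM: average power is exactly linear in duty cycle,
#    so no linearisation is needed at all.
full_power = V0 ** 2 / R
duty = 0.25
p_avg = full_power * duty
print(p_avg)
```

The second approach is why dropping the buck converter in favour of slow PWM removes the non-linearity entirely rather than merely approximating around one point.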
H: What actually causes the need for a balun on a dipole fed to coax My current understanding: The line in the center of the coax and the inner side of the shield are the current paths, as in the signal/field propagates between these two (I can't therefore imagine it matters if the outer shield connection is grounded or whatever, since I would assume it's effectively a Faraday cage, assuming the skin depth is not significant). Anyway, for this I would strongly guess the currents must be equal and opposing in the shield and center conductor so no fields escape; the image below shows my understanding... Now when attaching a dipole, if theoretically it was perfect, I would assume the voltage on one pole is just 180° out of phase with the other; connecting one pole to the shield, in my eyes, doesn't cause any problems since it's just equally opposed by the center-fed pole in a way it can pass in the coax. I've looked online and despite many different answers, one said it's the difference in the currents that ends up flowing on the outer side of the shield. I guess this is only caused by imperfect dipole design, or where there is a difference in a multitude of things that mean the two are not equally opposite; now if this is true, I see a choke balun can be used to create a high impedance on the outer shield... My question is, if the above is true, where does that reflected current on the shield now go? I can't see it being shoved onto the inner surface of the shield, because again there would be different currents in the center and shield. AI: My question is if the above is true where does that reflected current on the shield now go? The outer surface of the outer conductor acts like an antenna. You will notice that in an antenna, for example a dipole, the current is not the same everywhere. This seemingly violates Kirchhoff's Current Law, which might be reworded to state that the current into a volume is the same as the current out of that volume.
This conundrum was solved by Maxwell, who introduced the concept of displacement current. The current we are first made aware of when we learn electronics is conduction current -- i.e. the movement of charges. Maxwell realized that this current was not the whole story, and there was something else, that behaved like conduction current (for example it causes magnetic fields) but was not conduction current. He called this displacement current (although he used the same name for a slightly different concept earlier). The displacement current density at a point is equal to the rate of change in the electric field at that point. (May seem odd, but it solves what are otherwise problems). In a (for example, dipole) antenna, charges flow down a conductor, but because the conductor does not form a complete circuit, the charges accumulate as they approach the end of the conductor. This accumulation of charges causes a change in electric field. The conduction current decreases from the center of a dipole antenna to the end, but at the same time, the displacement current increases. When the two types of current are added together, Kirchhoff's Current Law (KCL) is preserved. The algebraic sum of all the current (both kinds) flowing into a fixed volume is 0. The same thing happens in the antenna formed by the outer surface of the outer conductor of a coaxial cable that connects a transmitter to a dipole without a balun. At the transmitter end of the coax, the conduction current on the inner conductor is equal but opposite to the conduction current on the outer conductor. However, at the connection to the antenna, the conduction current in the outer conductor has a different magnitude from conduction current in the inner conductor. Where did that current go? The answer is that electrons accumulate and then rarify on the surface of the outer conductor. This constitutes displacement current. 
The conduction current on the outer surface of outer conductor "stalls" and becomes displacement current, exactly as the conduction current in a dipole antenna "stalls" and becomes displacement current. Knowing of the existence of displacement current is essential to an understanding of antennas in a way that does not conflict with other known laws of electricity. I guess this [difference in currents] is only caused by imperfect dipole design or where there is a difference in a multitude of things that mean the two are not equally opposite No. The problem of currents on the outer surface of the outer conductor of a coaxial cable is not (necessarily) the result of imperfections in the balance of the dipole antenna. It is the result of connecting an unbalanced transmission line (the coaxial cable) to the (otherwise) balanced load. The outer surface of the outer conductor of the coax acts like an independent conductor. It is as if one hung a wire physically parallel to the coax, but electrically parallel to one of the arms of the antenna. simulate this circuit – Schematic created using CircuitLab In the diagram, since it is free to do so, the current splits between the right-hand arm of the antenna, and the outer surface of the outer conductor of the coax. Instead of a simple dipole, you essentially have an antenna with three arms, connected in an asymmetric fashion.
H: How does this SystemVerilog compiler directive work? The text I am reading (Stuart Sutherland's text on SystemVerilog for Simulation and Synthesis) gives the following snippet which apparently should be used in order to avoid including the same package multiple times in the same compilation: I am a bit confused about the preprocessor statement: `define DEFINITIONS_PKG_ALREADY_COMPILED as, from other languages, I was expecting to see something more like: `define DEFINITIONS_PKG_ALREADY_COMPILED 1 in order to actually "set" the flag. Is this an error in the text or just a shortcut based on the preprocessing inferring a true flag here? AI: It is not an error in the text. SystemVerilog allows you to simply use a macro name without setting it to a value. The macro will not have a specific value; you should just consider it as being "true" when checking it with conditional macros like `ifdef. Consider the following code: module tb; `define FOO `ifdef FOO initial $display("FOO is defined"); `else initial $display("FOO is not defined"); `endif endmodule If you run it as-is, you get the output: FOO is defined If you delete the `define line, you get the output: FOO is not defined Refer to IEEE Std 1800-2017, section 22.6 `ifdef, `else, `elsif, `endif , `ifndef for other code examples. Unfortunately, the Std does not explicitly state that the macro_text can be omitted (aside from the code examples).
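For reference, the include-guard idiom the question alludes to (wrapping the package in `ifndef/`endif so that a second compilation of the same file is skipped) looks roughly like this. The package and macro bodies here are illustrative sketches, not Sutherland's actual snippet:

```systemverilog
`ifndef DEFINITIONS_PKG_ALREADY_COMPILED
`define DEFINITIONS_PKG_ALREADY_COMPILED  // no value needed: being defined is enough

package definitions_pkg;
  // shared typedefs, parameters, functions...
endpackage

`endif
```

On the second and later compilations of the file, the macro is already defined, so `ifndef skips everything down to `endif and the package is not redefined.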
H: Dual MOSFET short circuit I have a PCB as shown in the schematic. Looking at only one of the dual MOSFET ICs, I apply 3.3V to MOSFET2 and MOSFET1, which should turn off MOSFET1 (because P-channel) and turn on MOSFET2 (because N-channel.) What happens is that both MOSFETs are on, even though the P-channel one should be off, so there is a short circuit. My power supply starts limiting the current and the IC gets hot. This shouldn't happen in my opinion. The gate signal is from STM32 GPIO pins. I don't really understand the Vgs voltage in the P-channel part of the datasheet. If I had -3.0V Vgs and the source is at 38V, I would need 35V at the gate, which defeats the purpose of driving high currents & voltages with low voltages at the gate. Please help me understand why the P-channel MOSFET is not going off. AI: The SI4559ADY is a dual MOSFET, right? If you have 38V at the source of the P-MOSFET and 3.3V at the gate of the P-MOSFET, then it conducts, and you are far beyond the allowed Vgs of +/- 20V, right? The gate should be at about 28V if you want it to conduct, and at 38V (or up to 58V) if you want it to block. The N-MOSFET is conducting a little bit, but 10V at its gate would be better. I think you need a proper MOSFET driver, like the MIC4604. Or something like this: (You need to take care that LOWsideOFF goes high clearly before HighSideON goes high, and vice versa, or you will get something called "shoot-through". Once I wondered which idiot had bought candles... then I realized that my FETs were burning the flame "retardant" PCB.)
H: Voltage detected on output SSR while in "OPEN" state I have an SSR relay ( FOTEK SSR-25 DA ), it has load terminals marked (1) and (2). In normal "open" position there's no connectivity between 1 and 2 - no current whatsoever. The moment I apply 120VAC to the terminal 1 - I can immediately see that there's 120VAC on terminal 2, in the "open" position ( still no connectivity - I test with a multimeter ). I reviewed a few manuals like https://www.ia.omron.com/data_pdf/guide/18/ssr_tg_e_9_2.pdf - yet I don't see how "2" can possibly get any voltage there from "1". Is that relay defective or am I missing something? AI: Try measuring it with a load connected and see what you get. SSRs don't isolate (disconnect) the output; rather, they are in a "high-impedance" state when they are off. Your DMM has such a high input impedance that it will be able to measure the potential from the other load terminal. This is all in contrast to an electromechanical "clicky" relay where off truly means disconnected or "open".
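A rough calculation shows why the meter reads nearly the full line voltage with no load, yet almost nothing with a real load. The 47 nF snubber value and 10 MΩ meter impedance below are typical assumed figures, not taken from the Fotek datasheet:

```python
import math

f = 60.0            # mains frequency, Hz
C_snubber = 47e-9   # assumed RC snubber / leakage path across the SSR output
R_dmm = 10e6        # typical DMM input impedance, ohms
V_line = 120.0

# Off-state source impedance of the SSR output (capacitive part only)
Zc = 1 / (2 * math.pi * f * C_snubber)          # tens of kilohms

# Divider with only the meter connected: it reads almost full line voltage
v_meter_only = V_line * R_dmm / (R_dmm + Zc)

# Divider with a real load connected: the "phantom" voltage collapses
R_load = 100.0
v_with_load = V_line * R_load / (R_load + Zc)
print(round(Zc), round(v_meter_only, 1), round(v_with_load, 2))
```

The exact numbers depend on the SSR's internal snubber and leakage, but the pattern (near-full voltage into 10 MΩ, a fraction of a volt into a real load) is what the answer describes.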
H: Interrupted SSI clock sequence I measured the following clock and data sequence (time axis is in microseconds) between a Beckhoff KL5001 encoder module and a 19-bit SSI encoder. Apparently the clock is interrupted 3 times before finally stopping. I was not able to find a more detailed description of such clock interruptions than e.g. this one: "A running transmission can be interrupted at any time by just stopping the clock." It is not clear to me whether the data transmission should continue or start over. Looking at the data, the signal does not seem to start over. Any thoughts? AI: The host could be a microcontroller with an SPI interface that works with 8-bit bytes, and therefore the transmission may not be continuous. Some 8-bit AVR MCUs had an SPI interface which basically needed one bit time between bytes, so it was impossible to transfer with a continuous clock on that MCU. The gap between clocks is not long enough to trigger a timeout, so the transmission continues after the pause. The gap needs to be about 20 microseconds for the timeout, and these gaps are far below it.
H: How to extract voltage and current from this real-life circuit (with resistors) I recently measured some voltages across two resistors as you can see in the images below. I used a triangle wave as voltage source and measured the voltages across the resistors with an oscilloscope. The top one is the theoretical circuit and below the one I built as an experiment. How do I get the proper voltage across and current flowing through my measuring resistor RM that is about 1 MΩ in the experiment? In theory CH1 shows the voltage across RM and by measuring the voltage across my testing resistor RT on CH2 I can get the current via I = U/RT. Is this correct? But when calculating the resistance of RM via RM = U/I, I don't get the indicated value. AI: First, take note of the oscilloscope's input impedance. It will be important later. Figure 1. The 'scope's input impedance is 1 MΩ. Now let's look at your circuit. simulate this circuit – Schematic created using CircuitLab Figure 2. The equivalent circuit showing loading by the oscilloscope probes. The CH2 input won't cause any loading because the output impedance of the signal generator is very low in comparison and, as a result, the voltage won't be pulled down. The CH1 input is in parallel with RM and since they are both 1 MΩ the effective resistance at that point in the circuit is 500 kΩ. Therefore you can expect the 'scope CH1 to read a little less than half of the CH2 reading. We can make analysis a little easier by using a 1 V DC source. simulate this circuit Figure 3. Voltage measurement with a 1 GΩ voltmeter. Note the voltmeters have negligible effect on the circuit so the voltages measured are those that would exist if the meters were removed. Note that the voltage at the junction of the resistors is about 2/3 of the supply, as expected. simulate this circuit Figure 4. Voltage measurement with oscilloscope connected. Here we can see that CH2 doesn't load the voltage source. It's reading 1 V. 
Meanwhile the resistance at the lower part of the resistor divider has been halved by the addition of the 1 MΩ scope input in parallel with RM. Now we've got 510 kΩ on top and 500 kΩ on the bottom so the voltmeter reads a little less than half of the supply voltage.
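The divider arithmetic in Figures 3 and 4 can be checked numerically, using the 510 kΩ / 1 MΩ values from the answer and a 1 MΩ scope input:

```python
R_T = 510e3      # series test resistor
R_M = 1e6        # measuring resistor
R_scope = 1e6    # oscilloscope input impedance (1x probe)
V = 1.0          # DC source for analysis

def par(a, b):
    """Resistance of two resistors in parallel."""
    return a * b / (a + b)

# Unloaded divider: the junction sits at about 2/3 of the supply
v_unloaded = V * R_M / (R_T + R_M)

# With the scope on CH1, R_M is halved to 500 k by the parallel 1 M input,
# so the junction drops to a little under half the supply
R_bottom = par(R_M, R_scope)
v_loaded = V * R_bottom / (R_T + R_bottom)
print(round(v_unloaded, 3), round(v_loaded, 3))
```

This matches the voltmeter readings in the figures: about 0.66 V unloaded, and just under 0.5 V once the scope is connected.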
H: Scale and shift small positive analog voltage range for ADC input I want to measure a small positive analog voltage, 3v +/- 200mv with the ADC of a microcontroller. The ADC measures voltages between 0 and 3.3V, so ideally I would like the voltage I am measuring to be mapped to that range. I was able to find this post, which basically contains the same question as mine, but the math in the accepted answer is badly formatted and I was not able to deduce what to calculate to work out the resistor values I need: How do I scale and offset an input to an ADC? So my question is basically: How do I calculate the R1, R2 and R3 values in the linked post? AI: In this answer, bold text is used to indicate that a given section is general. The sections of bold text can be read independently of any particular concrete problem. Text which is not bold is specific to the problem given in the original question. There are a number of circuits that are used to level shift and scale a signal. The specific circuit topologies that may provide solutions depend upon the specifics of the problem. This answer assumes that the magnitude of amplification required is greater than 1, and the amplification is positive. Thus, in all cases, the input signal is connected directly or indirectly to the non-inverting input of an op-amp. It is also assumed that the level shifting is negative. To find a topology that might provide an appropriate solution: First find $$\mathbf{A = \frac{\Delta V_{out}}{\Delta V_{in}}}$$ in your case $$A = \frac{3.3-0}{3.2-2.8} = 8.25$$ Next find the value \$\mathbf{V_0}\$ of \$\mathbf{V_{in}}\$ that gives an output \$\mathbf{V_{out}=0}\$. $$\mathbf{V_0 = V_{in}|_{V_{out}=0}}$$ In your case, we are told that an input of 2.8V gives an output of 0. 
$$V_0 = 2.8$$ We can now write the equation relating \$\mathbf{V_{in}}\$ to \$\mathbf{V_{out}}\$ as $$\mathbf{V_{out} = A(V_{in}-V_0)}$$ in your case, that is $$V_{out} = 8.25(V_{in}-2.8)$$ The circuit topology we use will depend upon what reference voltages we are able to use. To choose which topology, we will define another voltage \$\mathbf{V_x}\$. \$\mathbf{V_x}\$ is the input voltage at which the output voltage is equal to the input voltage. That is, \$\mathbf{V_x}\$ satisfies this equation $$\mathbf{V_x = A(V_x - V_0)}$$ or $$\mathbf{V_x = \frac{A}{A-1}V_0}$$ in your case, $$V_x = \frac{8.25}{7.25}2.8 = 3.1862 \text{V}$$ \$\mathbf{V_x}\$ is available as a reference voltage The simplest topology occurs when \$\mathbf{V_x}\$ is available as a reference voltage. The ratio between R1 and R2 must be \$\mathbf{A-1}\$. $$\mathbf{R1:R2 = (A-1):1}$$ In your case, the ratio of R1 and R2 must be $$R1:R2 = (A-1):1 = 7.25:1$$ Now if you happened to have a 3.1862 V voltage reference, and two resistors with the ratio 7.25 between them, you would be done. Here is the circuit. simulate this circuit – Schematic created using CircuitLab Unfortunately, there are no resistors in the E48 series that have a ratio of 7.25. You could, however, use resistors in parallel or series to create such a ratio. But more importantly, you probably don't have a 3.1862 V voltage reference. So, we will look at some other solutions. A voltage greater than \$\mathbf{V_x}\$ is available as a reference voltage If a voltage greater than \$\mathbf{V_x}\$ is available as a reference voltage, we can use the following topology. With this new topology, we have the following relationships.
To get the proper gain, the resistors must have the ratio $$\mathbf{\frac{R1}{R2 || R3} = A-1 }$$ and to get the proper voltage shift, $$\mathbf{\frac{R2}{R1 || R3} = \frac{V_{ref}}{V_0} - 1}$$ Define $$\mathbf{B = \frac{V_{ref}}{V_0} - 1}$$ Using a little algebra, we discover that we must have $$\mathbf{\frac{R1}{R2} = \frac{A}{B+1}}$$ and $$\mathbf{\frac{R1}{R3} = \frac{(A-1)B-1}{B+1}}$$ which gives the following ratio equation: $$\mathbf{\text{R1:R2:R3} = \frac{1}{B+1}:\frac{1}{A}:\frac{1}{(A-1)B-1}}$$ Let's assume we have an accurate 3.3V reference voltage that we can use. (Remember that \$V_{ref}\$ must be greater than \$V_x\$ or 3.1862 V in your case to use this topology. Then $$B = \frac{3.3}{2.8}-1 = \frac{5}{28} = 0.1786$$ $$(A-1)B = \frac{2.9}{.4}\frac{5}{28} = \frac{145}{112}$$ $$(A-1)B-1 = \frac{33}{112}$$ $$\text{R1:R2:R3} = \frac{28}{33}:\frac{4}{33}:\frac{112}{33} = 7:1:28$$ It is more or less serendipity that there are nice standard values that match these ratios. If we plug values in these ratios into our circuit, we get the completed circuit. simulate this circuit A voltage less than \$\mathbf{V_x}\$ is available as a reference voltage If a voltage less than \$\mathbf{V_x}\$ is available as a reference voltage, we can use the following topology. In this case, the resistor ratios are given by $$\mathbf{R3:R4 = V_{ref}:AV_0}$$ $$\mathbf{R1:R2 = (k-A):A}$$ where $$\mathbf{k = 1 + \frac{R4}{R3}}$$ Some ADC's provide a reference voltage at their midrange voltage. For a 3.3 V ADC, 1.65 volts may be available as a reference. We will choose that value to work an example. The resistor ratios are calculated as follows: $$R3:R4 = 1.65:(8.25\times 2.8) = 1.65:23.1 = 1:14$$ $$k = 14+1 = 15$$ $$R1:R2 = (15-8.25):8.25 = 6.75:8.25 = 27:33$$ Note that in the previous topologies, \$\mathbf{V_{in}}\$ was connected directly to the non-inverting input of the op-amp. This made the input impedance of the circuit very high. 
However, in this topology, the \$\mathbf{V_{in}}\$ is connected to the op-amp through a voltage divider. This brings the input impedance to approximately R1 || R2. Loading effects on the source may become an issue. There are at least two ways of dealing with the lowered input impedance of this topology. One solution is to use an extra op-amp as a voltage follower / unity gain buffer. Alternatively, one could simply use very high resistances for R1 and R2. This may lead to some inaccuracies, especially if the op-amp's input bias current is not very small. In this example, we will use high valued resistors for R1 and R2 to mitigate loading issues, knowing that the best solution depends upon circumstances. Here is a completed circuit. simulate this circuit
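The chain of calculations above (gain A, zero-output input V0, crossover voltage Vx, and the resistor ratios for the Vref = 3.3 V topology) can be reproduced exactly with rational arithmetic:

```python
from fractions import Fraction as F

# Input range 2.8..3.2 V must map to the ADC range 0..3.3 V
A = F(33, 10) / F(4, 10)         # gain: 3.3 / 0.4 = 8.25
V0 = F(28, 10)                   # input voltage giving 0 V out
Vx = A / (A - 1) * V0            # input where Vout == Vin, ~3.1862 V

# Topology for a reference greater than Vx, here Vref = 3.3 V
Vref = F(33, 10)
B = Vref / V0 - 1                # 5/28

r1 = 1 / (B + 1)                 # proportional to R1
r2 = 1 / A                       # proportional to R2
r3 = 1 / ((A - 1) * B - 1)       # proportional to R3
print(r1 / r2, r3 / r2)          # normalised ratio R1:R2:R3
```

Running this confirms the answer's serendipitous 7 : 1 : 28 ratio and the 3.1862 V crossover voltage.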
H: What happens when I drive a signal into the center tap of a center tapped transformer? I am trying to drive both a high-side and a low-side MOSFET as part of an H-Bridge circuit and had the idea to gang both MOSFETs with the same input. My idea was that if I drive one half of a center-tapped transformer, the other half, being magnetically coupled to the first half of the primary, would act like a step-up transformer, so the full winding would be in phase and have a magnitude of 2x the input signal. The signal is a pulsed DC square wave generated by a digital pin of a microcontroller. Are center-tapped transformers supposed to work like this? Is this a valid method to drive a MOSFET H-Bridge? Am I risking sending too much voltage into the gate pins? Is this even a good idea? AI: Center tapped transformers (or any transformers (or any components)) don't care about single pins. What matters is the voltage across each coil (which has two pins) and the current through each coil. A center-tapped coil is the same as two coils connected together in the middle. The way you're using the center-tapped coil makes an autotransformer configuration. It saves wire compared to a two-coil transformer, but doesn't provide isolation. Image from Wikipedia: It works just how you'd expect a transformer to work - the voltage ratio is the turns ratio. At least, it does for sine-wave AC. Passing a square wave through a transformer distorts the wave, but you probably already know that and don't care. You're using the transformer for two transformations at once - the left side as an autotransformer and the right side as a normal transformer. If this works for you, then great. You found a trick to do something with fewer parts. Congratulations.
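The 2x step-up the question guesses at is just the autotransformer turns ratio. A minimal sketch, assuming an ideal, symmetric center-tapped winding (the turn counts below are illustrative):

```python
# Drive only one half of an ideal center-tapped winding; the voltage across
# the full winding scales by total_turns / driven_turns (the turns ratio).
driven_turns = 100
total_turns = 2 * driven_turns   # symmetric center tap
v_in = 5.0

v_full_winding = v_in * total_turns / driven_turns
print(v_full_winding)            # twice the input, and in phase
```

With a real transformer, leakage inductance and the square-wave drive will distort this, as the answer notes, but the ideal ratio is simply 2:1.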
H: Does DC-DC 24V-12V waste less power than AC-DC 220V-12V? I am modifying an existing UPS to allow for a longer backup time. I plan on adding a cooling fan along with an external current controlled power supply to charge the larger batteries. There are some appliances in my setup that run on 12V (monitor, desk light, ONU etc.). I planned on using these buck converters to have five or six 12V 2A ports. As I would be connecting these directly to the 24V battery, I was also planning on adding an over discharge protection for this, as well as an auto-switch circuit so that it switches to another SMPS when mains is available. Now I am wondering if this will be worth it. Using the wall adapters the 12V appliances came with on the AC output of the UPS would keep things much simpler, but I am worried that there would be a lot of power loss in switching. My goal is to maximize battery life as much as possible. I am using two 12V 45Ah batteries in series. The maximum consumption of my current setup is 620W (going by ratings on the labels of devices.) The maximum load capacity of my UPS is 720W. It had two smaller 12V batteries in series. Should I add the DC-DC ports or just use the AC output of the UPS? AI: From a 24V to 12V DC-DC you can expect 85-90% efficiency if it's not synchronous, 90-95% if it is. Going through the inverter, you can expect 80-85% efficiency if it uses a cheap push-pull converter for the 12V to high voltage conversion, and more like 90% if it's resonant, but for a cheap 12V inverter it probably isn't. That's at high load; at low load efficiency will be lower due to idle losses. Then another 80-85% for the cheap flyback AC-DC converter in the wall wart. So efficiency will be much higher with DC-DC converters, but since a significant part of the losses will be idle losses from the inverter, and these don't depend much on load, adding a few DC-DC converters is unlikely to help much.
LM2596 has terrible efficiency because it uses a NPN switch instead of a MOSFET. It's an obsolete chip. The fake LM2596 modules you linked in the question are even worse, they do not honor the specs, the inductor saturates, and the high-ESR caps will die quickly. If you want DC-DC converters, then you should get ones from good manufacturers. They are not expensive. There is another issue: isolation. Using the original AC-DC wall warts (or isolated DC-DC converters) will isolate all your devices from each other. Using non-isolated DC-DC converters will not. So if there is any chance of ground loop issues, for example if this involves audio or analog of any kind, if you use non-isolated converters it's pretty much guaranteed you'll get noise problems.
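Multiplying out the two supply paths makes the comparison concrete. The figures below are the rough mid-range numbers quoted above, at high load and ignoring inverter idle losses:

```python
# Path 1: battery -> DC-DC buck -> 12 V load
eta_dcdc = 0.90                    # non-synchronous buck, mid-range estimate

# Path 2: battery -> inverter -> AC -> wall wart -> 12 V load
eta_inverter = 0.82                # cheap push-pull 12 V inverter
eta_wallwart = 0.82                # cheap flyback AC-DC adapter
eta_ac_path = eta_inverter * eta_wallwart

print(round(eta_dcdc, 2), round(eta_ac_path, 2))
```

So the DC path delivers roughly 90% of the battery energy versus roughly 67% for the AC round trip, but (as noted) the inverter's idle losses continue regardless, which erodes the real-world advantage of adding the DC-DC ports.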
H: Reduce component count for BCD input error detection I made this circuit that reads the input of the DIP switches and detects if the input is an error or not a BCD value. The problem is I have 4 inputs for my circuit. The LED indicate if the input is correct or wrong; if correct then allow input to pass, if not, output zero. Im having trouble implementing this to the breadboard as it requires 2 breadboard just to make this and the expenses might not cater for it. I am required to only use 74ls ic's How can I reduce the component count of the overall circuit? AI: simulate this circuit – Schematic created using CircuitLab
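One way to shrink the logic: a 4-bit input DCBA is invalid BCD exactly when its value is 10 or more, which minimizes to error = D AND (C OR B), i.e. one AND gate and one OR gate. This expression is not taken from the posted schematic; it is the standard minimization, which a truth-table sweep confirms:

```python
invalid_codes = []
for value in range(16):
    d = (value >> 3) & 1   # MSB
    c = (value >> 2) & 1
    b = (value >> 1) & 1
    # error asserted for 10..15: D AND (C OR B)
    error = d and (c or b)
    assert bool(error) == (value >= 10)
    if error:
        invalid_codes.append(value)
print(invalid_codes)
```

With the error signal reduced to two gates, the "pass or zero" output stage can be four AND gates qualifying each data bit with NOT(error), which fits easily in a couple of 74LS packages.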
H: Can you convert a radio receiver frequency from 315 to 433 MHz by changing the antenna? I have an Audi Tyre Pressure Monitoring System (TPMS) module from the USA, which therefore means it operates on 315 MHz. It was not available in Europe and hence there isn't a 433 MHz version available. However, interestingly there is a very similar Hyundai module that operates at 433 MHz. I don't have one at hand and I also doubt it would be compatible with my car due to CANBUS protocols etc. What I am wondering, and might be completely wrong, is if it is possible to change the frequency by swapping the antenna? See attached the Audi 315 MHz module and Hyundai 433 MHz module. The angles are different but you can see that the Hyundai antenna is slightly shorter by 30 mm or so. AI: No, the antenna is just the means to get from wired to wireless. Different frequencies call for different antenna designs, but the antenna does not set the frequency in this case. The frequency is mainly controlled by one of the ICs on the board (ignoring tuning networks, amplifiers and similar).
H: Error Indicator Light on Heater I have a heater with a neon bulb wired in parallel with a safety cutoff switch. The full current of the heater goes through the safety switch. If the switch trips, wouldn’t the full current just go through the bulb, very briefly, and burn the bulb out? AI: No, because the bulb has a much higher resistance than the heater, limiting the current to a few mA.
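An illustrative calculation of why the bulb survives. The 1500 W heater rating and the 220 kΩ series resistor (neon indicators always include a large built-in series resistor) are assumed values, not from the question:

```python
V = 120.0
P_heater = 1500.0
R_heater = V ** 2 / P_heater       # heater element: about 10 ohms

R_neon_series = 220e3              # assumed built-in resistor for the neon lamp

# With the safety switch open, the neon (plus its resistor) is in series
# with the heater across the line, so the current is tiny.
I = V / (R_neon_series + R_heater)
print(round(I * 1000, 2))          # well under a milliamp
```

The heater's few ohms are negligible next to the neon's series resistance, so the bulb sees essentially its normal operating current whether or not the heater is in the loop.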
H: STM32 and Elegoo supply voltage I wrote code using an STM32F407 to read the voltage of a potentiometer on the input pin of the ADC. This is the circuit: I tried taking the power directly from the STM32 and then, as in the photo above, I tried taking the power from the Elegoo board. In the case of the STM32 I can read the correct values, but in the case of the Elegoo I get this: Why doesn't it work with the Elegoo's Vcc? Does the Elegoo have some problems with the power pins? It is definitely a hardware problem, because the code is the same, but in the case of the STM32 it works. AI: There is no common ground potential between the boards. You can define any point as 0V or ground level to which everything else is referenced. However, as an analogy, if you live in a valley and your friend lives on top of a hill, you both may be the same height, but your heads will be at different levels compared to sea level, which is absolute, so you and your friend have different ground level references in absolute terms. So a board cannot measure or work with anything that is not referenced to the ground of that board. Same thing if you have two 9V batteries. They both have 9V over their terminals and you can name the negative terminals as 0V or ground or whatever, but they are completely unrelated to each other, until you connect a wire between them. If you try to measure anything with a multimeter between the two otherwise unconnected batteries, you will get nothing, no voltage and no current, no resistance, nothing. So in order to measure the voltage output of a potentiometer, the potentiometer and the measuring board must have the same ground, i.e. 0V reference, and the supply going into the potentiometer must be referenced to board and potentiometer ground.
H: Verilog code for three-storey building Ground, first and second floors and the inputs are r0 for ground, r1 for first floor, and r2 for second floor. The output will be d1, d2, up1, up2, and n (no action). Frequency is 215 Hz. State diagram (Moore design) for a lift (elevator): This is the code I tried: module lift(input clk_in,rst,input [1:0]r,output reg [2:0]out, output reg clk_out); parameter g0 = 4'b0000; //ground floor with no action parameter g1 = 4'b0001; //ground floor with d1 output parameter g2 = 4'b0010; //ground floor with d2 output parameter f0 = 4'b0011; //first floor parameter f1 = 4'b0100; //first floor with up1 parameter f2 = 4'b0101; //first floor with d1 parameter s0 = 4'b0110; //second floor parameter s1 = 4'b0111; //second floor with up1 parameter s2 = 4'b1000; //second floor with up2 //outputs parameter n = 3'b000; parameter d1 = 3'b001; parameter d2 = 3'b010; parameter up1 = 3'b011; parameter up2 = 3'b100; //inputs parameter r0 = 2'b00; parameter r1 = 2'b01; parameter r2 = 2'b10; //current and next state registers reg [3:0] state, nxt_state; //clockDivider reg[15:0]counter=16'd0; parameter DIVISOR = 16'd32768; always @(posedge clk_in) begin counter<=counter+16'd1; if(counter>=(DIVISOR-1)) begin counter<=16'd0; end clk_out<=(counter<DIVISOR/2)?1'b1:1'b0; end always@(posedge clk_out,posedge rst) begin if(rst == 1) state <= g0; else state <= nxt_state; end always@(state ,r) begin case(state) g0 : begin if(r == r0) nxt_state <= g0; else if(r==r1) nxt_state <= f1; else nxt_state <= s2; end g1 : begin if(r == r0) nxt_state <= g0; else if(r==r1) nxt_state <= f1; else nxt_state <= s2; end g2 : begin if(r == r0) nxt_state <= g0; else if(r==r1) nxt_state <= f1; else nxt_state <= s2; end f0 : begin if(r == r1) nxt_state <= f0; else if(r == r0) nxt_state <= g1; else nxt_state <= s1; end f1 : begin if(r == r1) nxt_state <= f0; else if(r == r0) nxt_state <= g1; else nxt_state <= s1; end f2 : begin if(r == r1) nxt_state <= f0; else if(r == r0) nxt_state 
<= g1; else nxt_state <= s1; end s0 : begin if(r == r2) nxt_state <= s0; else if(r == r0) nxt_state <= g2; else nxt_state <= f2; end s1 : begin if(r == r2) nxt_state <= s0; else if(r == r0) nxt_state <= g2; else nxt_state <= f2; end s2 : begin if(r == r2) nxt_state <= s0; else if(r == r0) nxt_state <= g2; else nxt_state <= f2; end default : nxt_state <= g0; endcase end always@(state) begin case(state) g0 : out <= n; g1 : out <= d1; g2 : out <= d2; f0 : out <= n; f1 : out <= up1; f2 : out <= d1; s0 : out <= n; s1 : out <= up1; s2 : out <= up2; endcase end endmodule Testbench: module tb(); reg clk_in; reg rst; reg [1:0] r; wire [2:0] out; wire clk_out; lift uut (clk_in, rst, r, out, clk_out); initial begin clk_in = 0; forever #15259 clk_in = ~clk_in; end initial begin // Initialize Inputs r = 2'b00; rst = 1; // Wait 100 us for global reset to finish #91554; rst = 0; #61036; r = 2'b00; #30518; r = 2'b01; #30518; r = 2'b10; #30518; r = 2'b00; #30518; r = 2'b10; #30518; r = 2'b01; $stop; end endmodule The input is not changing according to the divided clk_out: AI: You currently change r on every clk_in positive edge. If you want to change r every time clk_out changes, you can wait for an event on clk_out (like the negative edge). I modified the testbench initial block to replace most of the # delays to demonstrate: initial begin // Initialize Inputs r = 2'b00; rst = 1; // Wait 100 us for global reset to finish #91554; rst = 0; @(negedge clk_out); r = 2'b00; @(negedge clk_out); r = 2'b01; @(negedge clk_out); r = 2'b10; @(negedge clk_out); r = 2'b00; @(negedge clk_out); r = 2'b10; @(negedge clk_out); r = 2'b01; $finish; end
H: Struggle with gated sound - Fuzz stompbox I built my first fuzz pedal based on the Tone Reaper but I can't get rid of the gated sound. However the fuzz pot is set, the sound is gated. At the low position the signal is very weak and chopped, even with no compressor in the signal chain. I verified all components and replaced the 2N2369 but nothing improved. I guess there is a bias problem. On Q9 the voltage on the base is 0.2V, the emitter 0V and the collector 8.36V, so the base is not high enough for the transistor to work properly. In another similar build an AC176 is used and the rest of the board is the same. I don't know why. AI: An AC176 is a germanium transistor, while the 2N2369 is silicon. The base-emitter voltage of germanium transistors is around 0.3 V, silicon 0.6 V. Germaniums also have higher leakage currents. Substituting one for the other is going to require different biasing. Even if you increase the bias on Q9 it's not going to sound the same. This is why pedal builders seek out and pay top dollar for germanium transistors. Update: According to the parts list in the documentation found here, the second transistor should be an AC176. The schematic shows a 2N2369; I don't know if that's a mistake or intentional as a weak attempt to thwart copying. The schematic in the link also shows the transistors as Q1 and Q2, unlike the one in the question where they are Q8 and Q9, so it may be an early version that has been updated.
H: How can this RF amplifier increase the voltage above power supply level? It must be something simple I am not getting in electronics, but this UHF RF amplifier (of which China has helpfully erased the part number on what I take to be the transistor) produces 13 watts of output power while running from only a 12-15 volt supply. Now 13 watts at 50 ohms corresponds to 25.5 Vrms or 72.1 V peak-to-peak. All this is apparently possible without an inductor; at least I don't see one. How can this be done? AI: These are basically an RF amplifier, often multistage, with the input and output impedance matched, often to 50\$\Omega\$, though 75\$\Omega\$ is also common. They're typically on a ceramic substrate and use stripline as well as conventional solenoid inductors. I've seen them without the cover on; they definitely have inductors.
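The arithmetic in the question can be checked directly; a quick sketch using only the power-voltage relations for a resistive load (no assumptions about the module itself):

```python
import math

# 13 W delivered into a 50-ohm load: V_rms = sqrt(P * R)
P, R = 13.0, 50.0
v_rms = math.sqrt(P * R)         # ~25.5 V rms
v_peak = v_rms * math.sqrt(2)    # ~36.1 V peak
v_pp = 2 * v_peak                # ~72.1 V peak-to-peak

print(round(v_rms, 1), round(v_pp, 1))  # 25.5 72.1
```

The matching network inside the module (stripline plus the inductors the answer mentions) is what transforms the 50-ohm load down to a low impedance at the transistor, so the device itself never has to swing anywhere near 72 V.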
H: Keep a relay latched for 2-3 seconds using capacitor Building a simple prototype to keep the relay latched for a couple of seconds after power is removed - so essentially the power to the relay acts as the input signal. This means I have to store energy in a capacitor (to keep the relay latched) and possibly use a resistor as well to control current in the circuit. Can you guys please help me figure out C and R values and how the components would link (C would be in parallel with the relay, but I'm not sure about R)? Important things for me are: Relay should latch as soon as 12V power is applied. Capacitor should charge within 500-700ms and be able to keep the relay latched for 2-3 seconds. Since I have to charge the capacitor quickly, I need to factor in inrush current so as not to damage the power supply. Would really appreciate if you can share the calculations as well, so I can try with different relays and capacitors etc. The relay I have is a Songle 12V (SRD-12VDC-SL-C), Datasheet: http://www.songlerelay.com/Public/Uploads/20161104/581c81ac16e36.pdf AI: Solve the standard RC discharge equation for the time constant RC. $$ \tau=RC=\frac{-t_{\text{dropout}}}{\ln\left(\frac{V_{\text{dropout}}}{V_\text{initial}}\right)} $$ Use the initial coil voltage and the dropout voltage. The relay coil resistance is used for R. The relay inductance will not be significant enough to affect the result. For the circuit in the diagram, and for the relay in the OP, a 2200μF capacitor should work. The capacitor will charge to 12V less one diode drop. When the capacitor discharges, the initial coil voltage will be 12V less two diode drops. The relay datasheet will provide values for \$V_{\text{dropout}}\$ and coil resistance. Choose R1 to limit inrush current. If the relay drops out at a higher voltage you will need a higher capacitance. For example, if the dropout is actually 50%, then a 6800μF capacitor is required.
To include Simon Fitch's comment: "...to have \$C_1\$ charge within 500ms: \$5R_1C_1≈0.5\$, or \$R_1≈0.1/C_1\$. With \$C_1=2mF\$, that makes \$R_1≈50Ω\$, and inrush current will be \$I≈12/50=0.24A\$" simulate this circuit – Schematic created using CircuitLab
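The answer's sizing can be sketched numerically. The coil resistance (400 Ω), dropout voltage (1.2 V, i.e. 10% of rated) and two 0.7 V diode drops below are my assumed values - check them against your relay's datasheet before relying on the result:

```python
import math

# RC-discharge sizing per the answer's formula: tau = -t / ln(V_drop / V_init)
R_coil = 400.0          # ohm, assumed coil resistance of SRD-12VDC-SL-C
v_init = 12.0 - 2 * 0.7 # V, supply less two diode drops (assumed 0.7 V each)
v_drop = 1.2            # V, assumed dropout (10% of rated coil voltage)
t_hold = 2.0            # s, desired hold-in time

tau = -t_hold / math.log(v_drop / v_init)  # required time constant, ~0.92 s
C = tau / R_coil                           # required capacitance, farads

print(round(C * 1e6))  # ~2300 uF, so a standard 2200 uF part is close
```

If the dropout voltage were really 50% of rated (6 V), the same formula gives a several-times-larger capacitance, matching the answer's 6800 μF remark.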
H: PV Microinverter Topology Explanation Can anyone explain how this topology works and/or point me to any literature where it is discussed? I've found Microchip's application notes on single-stage interleaved flybacks with SCR or MOSFET unfolding bridges and have a pretty good understanding of those. My best guess is this is also a single-stage inverter (boost and DC-to-AC inversion happening with the full bridge and transformer). I assume a full bridge is used for good transformer utilization while keeping device voltage ratings low. I'm having trouble, though, understanding how what I assume would be chopped-up AC gets coupled to the grid through the dual MOSFET and capacitor bridge. This paper shows something similar (full dual-switch bridge instead of caps on the bottom) in section III C, but I haven't tracked down the references where it is discussed (yet) to see if they could fill in the gaps for me. Image created by me based on a teardown of a broken unit. Note there are some X caps and common-mode chokes between the output dual switch/cap circuit and the AC connection not detailed here, which I assume isn't meaningful for the topology and only for EMC. Lots of assumptions. Edit: Including annotated photo of PCB. No components on bottom side. AI: This is an interesting approach. I tried to implement a simplified square wave version in the simulator. For the demonstration, a slow 1 kHz generator V2 controls the rectification direction of the secondary side of the main transformer XFMR3. To feed the grid, one would use 50 or 60 Hz here, synchronized to the grid. In a real-world circuit the primary voltage would be modulated with a sine wave amplitude of the same frequency and phase as V2 to avoid the large currents during polarity change that we see here. The modulation would produce a proper output power factor. C1 and C2 filter out the "high" switching frequency of V1; they are not DC bulk capacitors. R1 is just an output load.
Please excuse the use of so many transformers and weird switches here, I just had no better idea at the moment. simulate this circuit – Schematic created using CircuitLab
H: How is a resonant bandpass filter similar/different from a damped mass-spring oscillator? They seem to behave differently in testing

Background
I am using resonant bandpass filters as musical oscillators. One can excite an array of them at harmonic frequencies and given Q values for a note by, for example, running a burst of noise through them. I thought intuitively that an array of damped mass-spring oscillators tuned to the same Q and frequencies should perform the same as the resonant bandpass array. The result is they behave similarly but also differently in some ways.

Damped Mass-Spring Oscillator
I set up some with the following code, where instead of running the audio input through the bandpasses directly as input samples, I converted the input exciter audio into force and then used that to drive the mass-spring oscillators. I thought this would be the Newtonian way to handle this in theory. (Correct?)

double processNextSample(double sampleInput)
{
    // SOLVE DRIVING FORCE BEING PUT INTO IT FROM EXCITER SIGNAL
    in_1 = in_0;
    in_0 = sampleInput;
    inVel_1 = inVel_0;
    inVel_0 = (in_0 - in_1) * sampleRate;
    inAcc_0 = (inVel_0 - inVel_1) * sampleRate;
    F_input = inAcc_0 * oscMass; // use imaginary mass as 1 kg to keep amplitude the same

    // PROTECT AGAINST HIGH FREQUENCIES DUE TO INSTABILITY
    if (springFreq > 21000 || springFreq > 0.08 * sampleRate) {
        return 0;
    }

    // SOLVE MOTION OF DAMPED MASS-SPRING OSCILLATOR
    double F_dampedSpring = (springK * currentPos) + (dampCoeff * currentVel);
    currentForce = F_input - F_dampedSpring;
    currentVel += currentForce * deltaTime;
    currentPos += currentVel * deltaTime;
    return currentPos;
}

Similarities/Differences
This creates a similar effect in that I can get the expected resonances of frequencies and the musical note comes through the same at the same amplitude. There are two main differences:

1) Stability
It is far less stable. I have to limit the frequencies relative to the sample rate, as at higher frequencies it is failing.
I believe it is going into NaN and inf territory easily. I am not sure why. Perhaps the input force or stepwise position/velocity solution is too crude and discontinuities are resulting in massive forces randomly? Whereas the filter (using this one) handles this with better math somehow? Or perhaps it is because, as in point (2) below, it is letting high freqs through; being forced into very rapid motion as a result, the damping term gets too big and becomes problematic at the sample rate with these high freqs, as it is not parameterized for this purpose, pushing it into error.

2) Frequency Response
It sounds like it lets all the high frequencies from my exciter noise bursts through completely, whereas the resonant bandpass filters these out. i.e. If I have a single mode (bandpass or oscillator) at 80 Hz, with the bandpass I only ever hear sound around 80 Hz (it filters above and below). With the oscillator, I hear the full high-frequency spectrum of the burst of sound as it goes through. Not sure about the lows, but the highs are obviously passing through.

Questions
Based on this experiment, it seems the damped harmonic oscillator is not equivalent to the resonant bandpass. What is the harmonic oscillator equivalent to then? Is it a resonant high pass filter? What would be the mechanical/Newtonian equivalent to the resonant bandpass if one exists? Why also (in layman's terms) is the harmonic oscillator so unstable compared to the filter? Thanks for any thoughts or ideas.

EDIT
Based on replies and comments so far that the harmonic oscillator should either be identical to the bandpass or work as a resonant low pass filter (not sure which one for sure still), I must presume the burst of noise I am getting out of the harmonic oscillator on excitation is not the exciter noise passing through (not a high pass filter), but rather a sample-rate-related quantization error in my force conversion code which is creating a new noise burst.
I didn't think of that possibility. Thanks for the feedback. AI: You are working in the domain of DSP (digital signal processing), where time is quantized, and usually value as well. The value quantization of a double is pretty modest, of course. Typically, DSP equations are developed without having to apply value quantization, adding it in later as normally-distributed noise due to rounding error. Rounding errors have the distinction of being consistent based on input plus state values, which can result in instability, whether dithering between adjacent values, or divergence outright, so this does still need to be accounted for. It's probably fine here. Anyway, the domain of quantized-time systems is the Z transform, analogous to the Laplace transform of continuous-time systems. In the Z domain, stability is expressed as poles lying within the unit circle; which indeed maps to the equivalent stability criterion in the Laplace domain, poles in the left half-plane. Put most simply, your problem is most likely that, by pushing the resonant frequency too close to the sample rate (or rather the Nyquist rate Fs/2), poles are pushed outside of the unit circle and divergence ensues. I say "simply", but analytical control theory is a rather high-level topic. I don't intend to go into an explanation or derivation of this here -- more to say, an explanation exists, and these are the keywords and topics you will find it in. There is also, somewhat separately, the matter of numerical stability; we can model a physical system, having some (continuous-time) differential equations, as some (quantized-time) difference equations, basically substituting \$dt \mapsto \Delta t\$; but the exact way in which we do that affects the stability of the system, particularly as we vary the dynamics of the system with respect to its sample rate (or when the sample rate itself is variable).
The trivial substitution is more-or-less forward Euler integration, but other rules can be applied: the trapezoidal rule; Adams-Bashforth; Runge-Kutta methods; etc. When \$\Delta t\$ is fixed, we can apply these back to DSP systems, and get a somewhat different mapping of differential to difference equations, and different stability based on the initial parameters; though the stability criterion of the final, actual, time-stepping equations is still necessarily present, of course. Numerical stability applies when doing general-purpose simulations of these systems; when we approximate an RLC circuit in SPICE, we're applying discrete-time approximations (specifically, SPICE uses a variable timestep, and trapezoidal or R-K methods), and we get consequences such as anomalous energy loss -- or gain -- in a high-Q LC resonator, for example, depending on integration method and tolerances.
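To make the integration-rule point concrete, here is a minimal sketch of my own (not the OP's code): integrating an undamped spring x'' = -w0^2 x with plain forward Euler diverges at any timestep, while the semi-implicit (symplectic) Euler variant, which is essentially what the question's velocity-then-position update does, stays bounded whenever w0*dt < 2. Damping is omitted for clarity.

```python
import math

def simulate(method, f0=5000.0, fs=48000.0, steps=200):
    """Integrate x'' = -w0^2 * x from x = 1, v = 0; return peak |x| seen."""
    dt = 1.0 / fs
    w2 = (2.0 * math.pi * f0) ** 2
    x, v = 1.0, 0.0
    peak = abs(x)
    for _ in range(steps):
        if method == "forward":
            # forward Euler: both updates use the OLD state -> always diverges
            x, v = x + v * dt, v - w2 * x * dt
        else:
            # semi-implicit Euler: update v first, then x with the NEW v
            # (the ordering used in the question's code) -> bounded for w0*dt < 2
            v = v - w2 * x * dt
            x = x + v * dt
        peak = max(peak, abs(x))
    return peak

print(simulate("forward") > 1e6, simulate("semi") < 10.0)  # True True
```

At f0 = 5 kHz and fs = 48 kHz, w0*dt is about 0.65, so the semi-implicit run oscillates at roughly unit amplitude while the forward-Euler run blows up within a few hundred samples; this is the pole-outside-the-unit-circle behaviour described above, reached purely through the choice of integration rule.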
H: Diode in reverse lights up LED I have a small issue with my circuit that I can't wrap my head around. I have made a custom PCB powered both by USB and battery. I put a diode between them to block battery power from flowing into USB when both are connected. This diode doesn't work as intended, because when I plug in the battery without USB power, the USB_LED that I put with a 10k resistor in series still lights up (albeit dimmer). There's 1.8 V across the diode in reverse and 2.2 V across the LED. (The sum equals the 3.9 V of my battery voltage. Can you confirm this?) I was expecting to see 3.9 V across the diode in reverse, since I expected the diode to work as an open circuit, and certainly not for the LED to light up. Can you tell me like I am 10 years old how this is possible, with Ohm's law and all that? I don't get it. I specifically chose a B160 diode (datasheet) because it was supposed to have under 200 nA reverse current at my battery voltage according to the datasheet. I also tried a 1N5817 diode and got similar results. I noticed that when I solder the diode the LED light gets stronger, which is in line with the reverse current temperature behavior that's in the graph. How can I calculate the leakage current with Ohm's law from the diode and LED voltage values? How does the LED light up with that small of a current? What diode should I use to have nanoamp-level reverse current? Here's the simplified power supply schematic with the 3.3V converter on the right: AI: Schottky diodes have a lot of reverse leakage, and modern LEDs can emit visible light without much current. The 1N5817 has more reverse leakage than the 1N5819 (but also less forward voltage when conducting). If you replaced it with a 1N4004 it would have less leakage but a higher forward drop, as per the Shockley equation. A simulation shows a typical current of 30 µA+ at room temperature, which is plenty to get visible light. The Vishay 1N5817 datasheet only guarantees that leakage is less than 1 mA at 25°C and 10 mA at 100°C!
The Diodes Inc. B120 datasheet you linked guarantees a slightly lower 0.5 mA at 25°C. So it's certainly well within spec. In conclusion, I don't think there's anything wrong with your circuit; it's behaving as expected.
H: Can TSOP1130 work also at 850nm? In the sheet, they test it with a 940nm IR LED. There are no clear specifications for the wavelength (range) it works with. But I have 850nm LEDs (powerful ones). Will it work the same, or will it perceive a weaker signal? AI: From the question: But I have 850nm LEDs (powerful ones). Will it work the same, or will it perceive a weaker signal? The Vishay TSOP1130 is reported as obsolete, and I can't find a datasheet for it on the Vishay site. In the datasheet found on a different site there is Figure 12, Relative Spectral Sensitivity vs. Wavelength: So, with an 850 nm LED, expect the TSOP1130 to perceive a weaker signal, approximately 30% of that with the 940 nm wavelength used for the characterisation in the datasheet.
H: Can TCC-100/100I Series Industrial RS-232 to RS-422/485 converters with optional 2 kV isolation convert RS485 to RS232? I want to convert RS485 to RS232 to connect it to a Moxa Nport 5610 - 16 port RS-232 Ethernet serial device server. Can I use a TCC-100/100I Series Industrial RS-232 to RS-422/485 converter to convert RS-485 to RS-232? AI: Since both interfaces are bidirectional, yes.
H: Calculation of the cross section of the busbar in the switchgear To calculate the cross-sectional area of the copper busbar, it is necessary to calculate the permissible current of the busbar. One of the correction coefficients that must be multiplied into this current is "the copper busbar surface temperature correction coefficient". How should I find this coefficient? Is there a table for this purpose? Icont = Itable × k1 × k2 × k3 × k4 × k5 Calculation of the cross-sectional area of the busbar The calculation of the permissible current that can pass through the busbar is done using the following formula and applying correction coefficients: (1-6) Icont = Itable × k1 × k2 × k3 × k4 × k5 Icont: allowed continuous busbar current (final) Itable: allowed continuous busbar current according to table (6-9) or (6-10) k1: the correction factor related to the conductivity of the busbar, if the conductivity of the copper busbar used is not 56 m/Ωmm2 or the conductivity of the aluminum busbar is not 35.1 m/Ωmm2. Figure (6-11) k2: correction factor related to an ambient temperature other than 35 degrees Celsius and a busbar surface temperature other than 65 degrees Celsius (Figure 12-6) k3: correction factor related to the installation of busbars in a flat or vertical orientation with a length of more than 2 meters (Table 7-6) k4: correction factor related to the skin effect in alternating current up to 60 Hz according to the layout of the busbars, if no branch is taken at a distance of more than 2 meters from the busbar (Figures 13-6 and 14-6) k5: the correction factor related to the reduction of the current capacity of the busbar at a height of more than 1000 meters above sea level (Table 8-6) AI: Assuming you are asking about the k2 factor, first-order power dissipation (ignoring radiation, conduction, etc.) is driven by the temperature difference between the busbar and ambient. The standard tables often (but not always) assume a 35C ambient and a maximum busbar surface temperature of 65C.
If you can allow higher surface temperatures (ignoring burn risk) or a lower ambient (reliable cooling or simply a cold environment), the busbar will be able to carry more current. If the opposite is the case, capacity will be less. Again to a first order, the power dissipation is proportional to the busbar-ambient delta, so if the tables assume a 30C delta, then limiting the surface temperature to 55C would equate to a correction factor of 0.67, whilst allowing 75C would give a factor of 1.33. However, most tables will include those correction factors under a range of conditions.
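The answer's first-order rule can be written down directly. This is only the linear-in-delta approximation stated above; real derating tables also fold in radiation, convection regime and conductor geometry:

```python
def k2(surface_c, ambient_c, table_delta_c=30.0):
    """First-order k2: dissipation ~ (surface - ambient) temperature delta,
    referenced to the table's assumed delta (65C surface, 35C ambient)."""
    return (surface_c - ambient_c) / table_delta_c

print(round(k2(55, 35), 2))  # 0.67 - cooler surface limit derates the busbar
print(round(k2(75, 35), 2))  # 1.33 - hotter allowed surface uprates it
```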
H: Why aren't baluns needed for a monopole to unbalanced coax connection? I've asked before regarding dipoles connected to unbalanced lines. From what I gathered, the current on the inner surface of the shield, when reaching the dipole (in the transmitting case), can either go into the pole connected to the shield or down the shield's outer skin (I'm assuming this split is related to the two impedances); the current that doesn't match the inner current gets radiated. Now the question I specifically ask is: if this is correct, then why, when using a monopole, does the current on the inner surface of the shield not travel entirely down the outer shield, since none can travel to the pole, there being none in the monopole case? I did a very crude test on a monopole, and when I touched the feedline coax it didn't change the SWR at all across the frequencies. I'd guess this is proof the monopole doesn't cause outer shield currents - a very different story to my V dipole. AI: The antenna that you call a monopole may actually be a coaxial dipole simulate this circuit – Schematic created using CircuitLab I apologize for the very crude drawing. A coaxial dipole has two sections. One section is a single conductor arm. The other section has a central conductor with two coaxial conductors around it. Where the two sections meet, the two outer coaxial conductors are electrically bonded. One might think of this as a regular dipole where the feedline is snaked through one of the dipole arms. Current from the transmitter flows through both the inner conductor of the feedline and the outer conductor. When the junction of the two antenna sections is reached, current from the inner conductor continues into the simple conductor arm. Current from what was the coax outer conductor, and is now the "middle conductor" of the coaxial dipole antenna, turns around and flows on the outer coaxial conductor.
The coaxial section of the antenna is typically shorter than the simple conductor section because the phase velocity of the wave is affected by transmission line effects in the coaxial section. why, when using a monopole, does the current in the inner shield not travel entirely down the outer shield, If this is indeed your antenna, the answer is simple. The current can only flow so far back down the outer "shield", because it only extends so far, and is not connected to anything at one end. If you instead had a system like below, I believe current would flow further down the outside of the outer conductor. simulate this circuit
H: Base resistor on BC817-40 There are base resistor calculators, and multiple questions have been answered on the subject of base resistor calculation. However, the BC817-40 I am using from MCC shows two different gains (HFE1 and HFE2). That is confusing my calculation of the base resistor. I have 3 IR LEDs (Vishay TSAL4400) with 100 mA forward current that I want to drive using the BC817-40. I plan to give 80 mA to each LED, so I need a total of 240 mA collector current. A microcontroller's GPIO (3.3 V) will be used to drive the transistor's base. Can you please help me find the right base resistor value for the BC817-40 to drive around 250 mA? The voltage at the collector is 5 V; it will be used to drive the IR LEDs. AI: Given that CMOS logic outputs are current sources by design, I'd use no base resistor at all. This ensures that you'll get the highest possible base current to saturate that fairly small transistor. Many MCUs have variable GPIO output strength, so you could adjust between low/[medium]/high drive strength to fine-tune it. The worst-case condition is when things are cold, so you'd want to stick it in a regular household freezer at least, measure the IR LED current, and select the GPIO drive level that ensures that the LEDs get the current they need. Alternatively, use a Darlington connection. This will put little thermal load on the MCU pin drivers, and is meant to work across the full temperature range that the transistors are rated for. simulate this circuit – Schematic created using CircuitLab
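For comparison, here is the textbook base-resistor calculation the question asked about, as a hedged sketch. The forced beta of 10 (for hard saturation, well below the datasheet hFE) and Vbe of 0.9 V at this current are my assumptions; the result shows why the answer avoids a resistor: the ~24 mA base current is more than most GPIO pins can comfortably source.

```python
i_c = 0.240       # total collector current, A (3 LEDs x 80 mA)
forced_beta = 10  # assumed for hard saturation; datasheet hFE is far higher
v_gpio = 3.3      # GPIO high-level output, V
v_be = 0.9        # assumed base-emitter voltage at this drive level, V

i_b = i_c / forced_beta       # 24 mA of base current needed
r_b = (v_gpio - v_be) / i_b   # ~100 ohm base resistor

print(round(i_b * 1000), round(r_b))  # 24 100
```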
H: Varistor-based SPD device - how does it protect the load? These common SPD devices are put in the main panel of residences: https://aptghana.com/wp-content/uploads/2020/11/SPD-2.jpg. This basic circuit showcases how they are installed: simulate this circuit – Schematic created using CircuitLab They are put in parallel with the source and the installation, but somehow they are able to stabilize the voltage seen by the installation in the case of a transient source voltage peak. I researched a lot and, in all sources that I have read, it is mentioned that the working principle is: when the varistor experiences a big voltage peak at its terminals, its "resistance" is lowered, causing an (almost) "short circuit". This "short circuit" ensures that the varistor will receive a large current, which results in the installation being protected (?) What I don't understand is: how exactly does the fact that the resistance of the varistor drops have any effect on the voltage seen by the installation? The varistor is in parallel with the installation, so, theoretically, even if we replaced the varistor with a 0.000001 ohm resistor, the installation would still see the same voltage. The only difference would be that the resistor would receive a giant current, without any impact on the installation. In my mind, the only explanation that I can think of is: the parasitic resistance of the source/line is taken into consideration. When the resistance of the varistor is lowered, it is expected that it will become so low that it will be comparable with the parasitic resistances of the source/line, thus causing a voltage divider and, therefore, causing part of the voltage to be "absorbed" by the parasitic resistance of the source. In this scenario, the voltage experienced by the installation would indeed be reduced. However, I was not able to find this explanation anywhere, and relying on the parasitic resistances of the source/line seems weird to me. Could anyone clarify it?
Thanks! AI: Right, mains is not an ideal source. There are two dominant components: resistance and inductance. A typical residential circuit might have an impedance of 50mΩ or thereabouts, which can be mostly resistance, or inductance, but probably a mixture of both applies. Consider a long wired circuit: a standard US 120V 60Hz 15A circuit uses 14 AWG wire, and if it's 50 ft. from the panel, the run adds 250mΩ, and maybe 5-10 µH of stray inductance. Inductance depends modestly on wire length alone, but compounds particularly when wound up into a coil. Most likely, significant inductance arises when a circuit passes through one or more transformers. 50mΩ reactance at 60Hz is 130µH, a typical leakage inductance for a 1kVA control transformer (commonly used in industrial equipment, where 120V is desirable for control circuitry, while the main supply is 240-480V). If we take 100µH as a typical source inductance, then if we consider a 50µs surge of say 2kV peak, the flux under the curve is of the order (2kV)(50µs) = 100mWb, and the change in current, (100mWb)/(100µH) = 1kA. Indeed, a typical combined-wave generator, used for testing surge immunity under regulations such as IEC 61000-4-5, emits a 8/20µs (that's a 10-90% risetime of 8µs, and a FWHM of 20µs) pulse into a near-short-circuit load, and a 1.2/50µs pulse into a near-open-circuit load (the time varies with load impedance, thus, "combined" wave). The effective impedance of this generator, the ratio between open-circuit peak voltage and short-circuit peak current, is apparently 2 ohms -- conveniently, we also found 2kV and 1kA peak based on rough assumptions, how about that? As it happens, the CWG equivalent circuit uses typical values on the order of 8 µF, 3 µH and 2Ω, so we expect significant attenuation in a real circuit when values exceed these amounts. 
Such capacitors might be typical of passive power factor correction (capacitors balancing the inductive load of motors and transformers on site), stray wiring (again, adding up some 10s µH is easy enough across a building), and internal resistance of the equipment (small power supplies, with common-mode chokes of several ohms resistance, can use smaller MOVs, or potentially none at all). As such, actual effects may vary. Perhaps the EUT (equipment under test) is near another SPD already. In that case, the Thevenin source impedance can be much smaller: less wiring distance between means less resistance and stray inductance, and the clamping impedance of the SPD itself is quite low during the pulse. But the pulse is a lower amplitude, maybe 800V, instead of the unlimited 1.5kV entering the system (say from distant induced lightning strike). Induced or direct lightning strike itself, is handled by the mains distribution network, with SPDs installed periodically along distribution lines. Above-ground SPDs are easy to spot: the insulators are colored and shaped differently, and there is a noticeable ground wire/rod connected to the insulators' mounting bracket. These reduce the surge on e.g. 4.8kV AC lines to a peak of maybe 20kV, as measured some 100s of m away from the strike. (Closer to the strike, direct EMP is far stronger, and if you're in that zone... expect destruction regardless; EMC standards consider direct hits rare enough to not include them in commercial testing.) Eventually, the circuit routes to a distribution transformer ("pole pig" or pad type), some filtering occurs, rounding off the peak (transformers are also filter networks of a sort), and if the polarities work out, saturation may clip the pulse shorter as well. Most surge is therefore reduced under a kV by the time it reaches a customer, with worst-case figures in the low kV. 2.5kV line-to-ground and 1.5kV line-to-line are typical test levels.
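The back-of-envelope numbers in this answer, worked explicitly (the 100 µH source inductance is the answer's rough assumption, not a measured value):

```python
# Surge volt-seconds and the resulting current step through assumed
# source inductance, per the answer's rough figures.
v_peak = 2e3       # 2 kV surge
t_pulse = 50e-6    # ~50 us wide
L_source = 100e-6  # assumed stray/leakage inductance of the supply path

flux = v_peak * t_pulse    # 0.1 V*s (100 mWb)
delta_i = flux / L_source  # ~1000 A current change
z_eff = v_peak / delta_i   # ~2 ohm, matching the CWG source impedance

print(round(flux, 3), round(delta_i), round(z_eff, 1))
```

The fact that the ratio lands on the same 2 Ω as the IEC 61000-4-5 combined-wave generator is the point the answer is making: the test generator is built to mimic exactly this kind of source impedance, which is also why a low-impedance varistor clamp forms an effective divider against it.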
H: Does speeding up a diesel generator result in a better load response? If I speed up my prime mover (500 kW CAT diesel generator) from 60 to 61 or 62 Hz (1800/1830/1860 rpm) and hit it with 25/50/75/100 percent load steps, how much kW/HP, if any, is gained? I am simply trying to get more HP for a temporary transient about to hit the bus. How much will speeding up the genset gain me? Anything? AI: Higher RPM will proportionally raise the peak power a diesel engine can deliver - up to a point, of course. But 62 Hz is just 3% more than 60 Hz, so it's not very relevant. You can get similar variation with intake air temperature, humidity and pressure changes. What you really need is energy storage. On the mechanical side, that would be a direct-coupled or geared flywheel. On the mechano-electrical side, that would be an external motor-generator attached as a load, driving the flywheel. The motor/gen controller would dump the flywheel energy on line sags, and absorb it to re-spin the flywheel during light load.
H: What does (DW) mean in the TI equivalent of SOIC (16) package? TI datasheet says: SOIC-16 Wide Body (DW) and Extra-Wide Body (DWW) Package Options It doesn't mention what DW stands for. I am annoyed when manufacturers come up with their own naming conventions without explaining clearly the accronymns or the logic behind the naming. How does one remember/navigate these part numbers while selecting the right part for your application if the naming convention is not standard or clear? AI: Chapter 1 of the data sheet gives us a clue: SOIC-16 Wide Body (DW) and Extra-Wide Body (DWW) Package Options So the "W" stands for "wide", and "WW" for "extra wide". It is the same pattern as clothing, where "X" stands for "extra" and "XX" for "extra extra". X-D The "D" might stand for "dual-in-line".
H: Altium Layer Stackup for Aluminium (IMS) PCB I need to do a single-layer PCB in Altium. I have no routing on the bottom layer, but in the layer stack manager I am not able to remove the bottom layer copper. Also, I can't choose or find any aluminium core. This is my current stackup: For aluminium-based, is that bottom layer copper there with no connection naturally? AI: For aluminium based, is that bottom layer copper there with no connection naturally? You don't have to make anything on the bottom layer, even if it's there and non-removable. All you need to do is to generate the necessary gerbers (top layer i.e. copper, mask, paste, and silkscreen, and also the drills and board extents), define the PCB properties (thickness, type & material, copper weight, colours, tolerances etc) in a document, pack everything together and send to the manufacturer.
H: Tri-state input to three logic states I'm looking for an IC or small circuit that implements the following truth table: Input Output 1 Output 2 H H L L L H Z H H Is this possible? The purpose is to keep an SPI SS signal pulled high for two devices when the input is high-Z, and to otherwise select one of two devices. It would be possible to simply use an inverter but then one of the SS signals would always be active. AI: If there's nothing else connected to the line, a circuit like this will do: simulate this circuit – Schematic created using CircuitLab Run simulation to see how it behaves. Note that comparators are required, because the logic levels are non-standard. A simpler transistor-based version could be used, but probably takes as many or more components, even if resistor packs are considered. If IN is not allowed to float (something else is pulling on it), or must meet valid logic threshold voltage otherwise (because other CMOS/TTL inputs are connected), another solution is required, or no solution may be possible. I am assuming the tristate pin is available in isolation by itself.
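A behavioural sketch of the window-comparator idea in this answer. The 1/3 and 2/3 supply thresholds below are my assumptions for illustration, not values read off the schematic; the key mechanism is that the bias divider parks a floating input at mid-rail, where both comparators report high.

```python
def decode(v_in, vcc=3.3):
    """Map a driven input voltage (or None for high-Z) to (OUT1, OUT2).

    A resistor divider is assumed to bias a floating line to vcc/2;
    the two comparator thresholds are assumed at vcc/3 and 2*vcc/3.
    """
    v = vcc / 2 if v_in is None else v_in
    out1 = v > vcc / 3      # high unless the line is pulled solidly low
    out2 = v < 2 * vcc / 3  # high unless the line is pulled solidly high
    return out1, out2

# Reproduces the question's truth table:
print(decode(3.3))   # (True, False)  input H
print(decode(0.0))   # (False, True)  input L
print(decode(None))  # (True, True)   input Z
```

With OUT1 and OUT2 used as active-low SS lines inverted appropriately, this gives "both deselected" for the high-Z case and exactly one device selected otherwise, which is the behaviour the question asked for.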
H: Capacitor test mode in ICT My ICT machine has two capacitor measurement modes on the PCB: Mode 1: Use DC source test method (Figure 1) Mode 2: Use AC source test method (Figure 2) Could anyone please help me with the questions: What is the importance of each mode? When do we use mode 1? When do we use mode 2? It is best if we could compare these two modes. AI: As stated in the material you quote, DC source is used for measuring 'large' capacitors. AC impedance is therefore used for measuring 'small' capacitors. Check the ICT machine specifications to see whether the ranges overlap in 'medium sized' capacitors, where you could use either method.
H: Using ARM Cortex on breadboard I want to know if I can use an ARM-Cortex-M4 processor directly on a breadboard and then create a microcontroller on a breadboard. I already tried searching Google but didn't find anything. Please note that I am talking about "creating" a microcontroller on a breadboard and NOT "using" a microcontroller on a breadboard. AI: You're confusing a couple of things here. Arm themselves sell microprocessor core architectures and core layouts for putting in an IC. You need to have access to a chip factory and extensive tooling to take one of these cores, and make a microcontroller out of it. Breadboard doesn't come into play anywhere there. You need to do this on a chip wafer. You can't even buy such a core implemented on an IC – it would be useless, because you need to add things like power distribution, RAM, and internal buses to it before it can function as a component. It's physically impossible to even route enough signals for the buses internal to such a microcontroller on a breadboard. Also, breadboard has very bad electrical properties, so this wouldn't work for a host of other reasons, starting with the fact that you would need to add strong drivers to your chip just to be able to drive the capacitive load that a breadboard line is.
H: Puzzling 'conductor' in a bathroom scale Hoping to scavenge the LCD from an old bathroom scale for another project, I took it apart. There were four sensors -- one in each 'foot' of the scale -- all wired to a small circuit board whose back half is shown below, with the neat line of traces near the top. Sitting atop these was the small, rubbery rectangle that you see here, with its long thin edge touching the traces, and its other edge touching the LCD glass. If you look closely above the jaws of the crescent wrench, you can see some milky-grayish rectangles on the LCD; those are where the other long thin edge of the rubber piece was sitting, and I assume they are the 'connections' for the LCD. I'm therefore guessing that this rubber thing is somehow connecting the board-traces to the gray "traces" on the LCD. Can this be true? Is this a standard electronic part? I can't see any wires embedded in the rubber (but my eyes aren't great), and the alignment of the parts was maintained by what seemed like a somewhat sloppy and loose plastic assembly, which seems to belie the idea of tiny wires making the connection. Can anyone explain? (I've abandoned the idea of using this LCD, obviously, but I wish I could find something comparable, one with pins I can attach to...even AliExpress hasn't helped; It's the large size of the display that I need!) AI: The rubber strip is known as a zebra or elastomeric connector. It has alternating strips of insulating plastic and carbon-impregnated conducting plastic arranged in vertical stripes along the width of the piece in the photo. The pitch of the stripes is finer than the PCB pad spacing so that alignment is not too critical. Figure 1. LCD stack-up. Image source: Fujipoly. The LCD uses transparent conductors on one of the glass layers. These extend out to the edge of that layer which itself extends out past the edge of any components below it so that the conductors form an edge connector. 
Gentle pressure is applied to the stack - usually by means of a set of plastic latches - to make the connection.
H: Power Supply for STM32G071CxU microcontroller (VDD/VDDA vs VREF+ for analog peripherals) I am currently working on an STM32G071C8U7 microcontroller schematic and have some questions regarding the decoupling and general connectivity of the power supply pins. First I am presented with this diagram that clearly indicates how VDD/VDDA should be decoupled. However, I am not sure about VREF+/VREF decoupling capacitors. Is 1 uF recommended regardless of the intended use of the VREF+ pin, with 100 nF only recommended if an external VREF is used? If I understand this correctly, the analog power supply (VDDA) for ADC/DAC does not provide any reference and thus will not work if VREF+ is left floating with a 1 uF tied to ground. Based on that, if VREF+ is not connected to any other external voltage source it should be tied to VDD/VDDA, but decoupled with 1 uF (+ 100nF potentially). Is my thought process correct? Lastly, I am wondering what is the best known practice for this type of power supply configuration to improve the power supply to analog peripherals, or is it only going to get as good as my VDD/VDDA decoupling capacitors and their placement relative to the pin/s. NOTE: on this particular MCU part, VREF+ is not bonded with VDD/VDDA internally and has its own dedicated external pin. Please find some additional documentation that I was able to extract from a few documents: In this diagram it appears that VREF+ is not even connected to VDD/VDDA? AI: Data sheets and reference designs are just suggestions. Everywhere (AN5690, Discovery schematics) there are two caps shown on the VREF+ pin, but sometimes they are 1uF and 100nF, sometimes 1uF and 10nF. The actual bypassing also depends on whether you plan to use the VREF+ pin as an output or an input, and if you are going to use it as an input, then it depends on whether you connect it directly to VDD on the PCB, or to some other supply for the reference voltage. So if you don't know beforehand, just draw two caps on the PCB.
Then validate later how many caps your design really needs. Or just leave in both caps, as it likely does not matter if it is not required for operation. The point is, you can apply as much or as little bypassing as you need for your design to work with good margin. Apparently not, because you can either connect it to VDD/VDDA yourself, or turn on an internal reference so the pin is an output and the pin must be connected to caps only. Eval boards may be universal and the VREF+ may be connected to a jumper or solder bridge, which allows you to decide how to connect the reference pin by moving jumper blocks around or soldering or unsoldering some blobs on the PCB. From the schematic as presented, you can't tell where VREF+ is really going or whether it is left unconnected.
H: Highest speed of operation [BJT vs MOSFET] This question was asked in a competitive exam in India. Which of the following offers highest speed of operation? (a) BJT (b) FET (c) Enhancement Mode MOSFET (d) Depletion Mode MOSFET As far as I know, the speed of operation depends on the application. In some applications, a MOSFET is faster than a BJT, whereas in some other applications the BJT beats the MOSFET. So this question appears a little incomplete to me. The paper setter has given (c) as the correct option. I wish to challenge this question. Please let me know your views regarding this question. References to any standard textbook will be highly appreciated. Thanks a lot! AI: I would answer (c) because I would think that's what they're looking for, but it's a stupid question. You can make SiGe bipolar transistors (on a chip) with ft's approaching 100GHz, and 4000 series CMOS (enhancement mode MOSFETs) that struggle to switch at 5MHz with a low supply voltage. E.g. the 4013B.
H: KVL on circuit with transistor I have a question regarding applying KVL in a circuit with an NPN transistor. Unfortunately this is badly documented in the textbook, and I couldn't find the answer so far. For the circuit below, it was given that V_BE = 0.7, and from this V_CE = 0.8V is derived. This was derived from V_DD = V_CE + V_BE. This is where my confusion stems from: wouldn't using KVL give the equation V_DD = V_CE? I don't see how to use KVL correctly in this circuit. AI: We have two transistors connected in series between the positive terminal of a supply voltage and the GND. The upper one Q2 is "working" as a diode. Thus, for Q2 we have Vce2 = Vbe2 = 0.7V, because the Q2 base terminal is directly connected to the collector. And from KVL we have this: $$ V_{DD} = V_{BE2} + V_{CE1} $$ So we can easily find the Vce1 value $$V_{CE1} = V_{DD} - V_{BE2} = 1.5V - 0.7V = 0.8V$$ And we are done. simulate this circuit – Schematic created using CircuitLab
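The KVL arithmetic above is easy to check numerically; a minimal sketch (plain Python, values taken from the question):

```python
# KVL around the supply loop of the two stacked transistors:
# V_DD = V_BE2 + V_CE1, so V_CE1 = V_DD - V_BE2.
# Values (1.5 V supply, 0.7 V diode-connected drop) are from the question.

def vce1(v_dd, v_be2):
    return v_dd - v_be2

print(vce1(1.5, 0.7))  # ~0.8 V across Q1
```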
H: Choosing Contactors for an EV Battery Pack I’m designing a battery pack and am uncertain about which contactor to select. Many electric vehicles (EVs) use three contactors: one each in the positive terminal, negative terminal, and precharge circuits. I believe we can use one type of contactor for all three positions. The current rating of the contactor should match the rated current of the pack. Let's assume we have 3 modules, each with a 6s20p configuration and our pack voltage is 400V. The "Max. Discharge Current" for each cell is 10,000 mA for continuous discharge and 15,000 mA for non-continuous discharge. My question is: which value should we use to calculate the rated current for the contactor? If we use "Max. Discharge Current for continuous discharge" of 10,000 mA for our calculation, then we would have 10 A * 20 = 200 A. Does this imply that our contactor should have a rated current of 200 A? I've noticed a few Tesla models using the Gigavac 200, which is rated for 500+ Amps at 12-800 Vdc. Why is it specified as 500+ and not an exact number? AI: which value should we use to calculate the rated current for the contactor? Does this imply that our contactor should have a rated current of 200 A? You need to ensure that your contactor will survive whatever current might cross it (including worst case scenario). 
If you have a fuse or another overcurrent protection device between the batteries and the contactor, then you need to choose a contactor that supports whatever current can pass through your overcurrent protection, including: high currents for a short duration (a thermal fuse, for example, won't blow on a short high-current pulse); somewhat lower currents for somewhat longer durations (most fuses allow several times the nominal current for a few seconds); and a bit more than the blowing current of the fuse for unlimited duration (NB: most fuses don't blow at their nominal current, but need some higher current before really blowing). I've noticed a few Tesla models using the Gigavac 200, which is rated for 500+ Amps at 12-800 Vdc. Why is it specified as 500+ and not an exact number? If you look at the datasheet of the Gigavac 200, you have this very interesting figure: You can sustain 500A for about 600 seconds (i.e. 10 minutes). The current you can sustain for infinite time seems more like 300A. On the other hand, an inrush current of 2000A is OK for 20 seconds. So there is no hard limit for the maximum peak current (or at least I didn't find one by quickly reading through the datasheet). The maximum current depends mainly on how long you want to sustain it (and how well you can cool down the contactor). So the 500+ A is just a marketing value for a typical use case (for a car, if you can run at full power for 10 minutes, it's more than sufficient; usually you only accelerate strongly for a few tens of seconds). So to answer your main question about choosing your contactor: what matters is not the current rating of your battery itself, but that of the current protection that is (hopefully) either included in the battery or put at its output. Your switch must be rated to support all (current, duration) pairs your current limiting circuit can let through. So find the current-vs-duration curve of your battery protection, and compare it with the one of your contactor.
Take the worst-case scenario for both (i.e. the highest (current, time) for the protection, and the lowest current capability for the contactor). If the curve for the contactor is above the one for the protection with decent margin, you are fine. If not, choose another contactor. NB: a single figure for maximum current will not suffice to know whether you are fine or not.
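The curve-comparison step described above can be sketched in a few lines; this is an illustrative check only (plain Python), and every (duration, current) point below is a made-up placeholder, not real fuse or contactor data:

```python
# Compare a protection let-through curve against a contactor capability
# curve at matching durations. All numbers are PLACEHOLDERS for illustration.

fuse_let_through = {0.1: 2000, 10: 800, 600: 400, 3600: 250}   # seconds -> amps
contactor_rating = {0.1: 2500, 10: 1200, 600: 500, 3600: 300}  # seconds -> amps

def contactor_ok(margin=1.1):
    """True if the contactor curve sits above the protection curve
    (with the given margin) at every shared duration."""
    return all(contactor_rating[t] >= margin * fuse_let_through[t]
               for t in fuse_let_through)

print(contactor_ok())  # True for these placeholder numbers
```

A real check would interpolate both curves over a common time axis rather than comparing a handful of points, but the pass/fail logic is the same.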
H: How to simulate a 30 inch FR-4 lossy channel (20dB loss) using ltline symbol in LTspice? For designing a high-speed SerDes system, I need to model a lossy channel on an FR-4 substrate with a length of 30 inches and 20 dB attenuation. How can I implement it in LTspice? AI: Use the lossy transmission line element: You'll need more parameters than the ones you mention, such as resistance, capacitance, and length. Use the Saturn PCB tool for PCB calcs. .model O1 LTRA(len=10 R=10m L=700n C=3p) More info here: https://ltwiki.org/files/LTspiceHelp.chm/html/O-device.htm
H: Matched junction S parameters This question was asked in a competitive exam in India. A two-port S matrix is assumed here. For matched junction S parameters, which of the following is correct? (a) S11 not equal to 0 (b) S11 equal to 0 (c) S21 equal to 1 (d) |S21| equal to 0 As far as I know, matching means "zero reflection". I am confused between options (b) and (c). S21 equal to 1 means forward transmission is 100%, thereby implying that reflection at port 1 is zero (S11 = 0). However, option (b) gives S11 = 0 directly and so looks more appropriate. The paper setter has given option (c) as correct. I am thinking about challenging it. References to any standard textbook will be highly appreciated. Thanks a lot! AI: As far as I know, matching means "zero reflection". As you noted: for a lossless junction it also means 100% power transfer from input to output. So both (b) and (c) would be correct. If only one answer is allowed, then there is a 50/50 chance of getting the mark for the question.
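The link between (b) and (c) rests on losslessness: a lossless two-port has a unitary S matrix, so |S11|² + |S21|² = 1 and S11 = 0 forces |S21| = 1. A small numerical check (Python with NumPy, arbitrary electrical length):

```python
import numpy as np

# A matched, lossless, reciprocal line of electrical length theta.
theta = 0.7  # radians, arbitrary illustrative value
S = np.array([[0.0, np.exp(-1j * theta)],
              [np.exp(-1j * theta), 0.0]])

assert np.allclose(S.conj().T @ S, np.eye(2))  # unitary, i.e. lossless
print(abs(S[0, 0]), abs(S[1, 0]))  # S11 magnitude is 0, |S21| is 1
```

Note that S21 itself is e^{-jθ}, not 1, so option (c) is really only safe when read as |S21| = 1; and if the junction is lossy, S11 = 0 no longer implies |S21| = 1.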
H: How to choose the proper value of resistors for current dividers Edit: The original version of this question is malformed. There is an edit further down below with an updated explanation of the issue I'm trying to solve. In a simple current divider, I know the ratio of the parallel resistors determines the current flowing through each parallel section of the circuit, however, I don't know which resistance fits best on a given circuit. Given a 5V/5A dc power supply, a ratio of 2/3 would mean that one circuit would get 2A and the other would get 3, but that would be true both with a 2Ω and 3Ω resistor and a 2kΩ and 3kΩ resistor. How is the proper resistance determined? Edit I see my misunderstanding. I made this question because I am working on a project where I have two loads that I have to connect in parallel, and each of them requires different currents to go through them. I do not know their resistance, and it can probably vary, given that they are a microcontroller and an LCD. The load of the screen requires a power supply of 5V and at most 2A. The load of the microcontroller requires a 5V power supply capable of reaching, but not limited to, 3A. The problem is that I need to power both of these loads from the same source, and since one of them requires a higher load than what the other can take, I am worried I am going to damage the smaller load (the screen). I thought of using a current divider for this, but now it doesn't seem like the best idea. After searching some more I think a current limiter will do a better job at protecting the screen, but I am still unsure on how to go about this. This seems like it should be a different question though, so I'll write a new one. Thanks to everyone! AI: With a constant voltage source you could use any combination of resistors where their ratio is 2 to 3, for example you could use 60\$\Omega\$ and 90\$\Omega\$, but each different pair of values will give you different currents.
They don't take the power supply current rating and divide it; rather, the current in each will be the supply voltage divided by that resistance. In your circuits you have 2\$\Omega\$ and 3\$\Omega\$ in one; that will not give you 3 A and 2 A, it will give you $$ \frac{5 V}{2\Omega} = 2.5 A $$ and $$ \frac{5 V}{3\Omega} = 1.667 A $$ for a total of 4.167 A. The second circuit with the values in kilohms will give you the same values except in milliamps instead of amps. So the current you're dividing isn't the supply max current, it's the current the two resistors in parallel would draw. 2\$\Omega\$ in parallel with 3\$\Omega\$ is $$ \frac{2\Omega \times 3\Omega}{2\Omega + 3\Omega} = 1.2\Omega $$ and the total current would be $$ \frac{5 V}{1.2\Omega} = 4.167 A $$ so that matches what we got by adding the two currents. For a constant current source the currents would divide the constant current according to their ratios, but the voltage would adjust to keep the total current at a given value. Let's say you had a 5 A constant current supply, and used 2\$\Omega\$ and 3\$\Omega\$ resistors, the currents would be 3 A and 2 A, but the supply would adjust its voltage to be 6 V, and the calculations would be $$ \frac{6 V}{2\Omega} = 3 A $$ and $$ \frac{6 V}{3\Omega} = 2 A $$ and for the 2k\$\Omega\$ and 3k\$\Omega\$ resistors the currents would be the same, but the supply would need to output 6000 V. $$ \frac{6000 V}{2k\Omega} = 3 A $$ and $$ \frac{6000 V}{3k\Omega} = 2 A $$ Which might be a problem, because just as a constant voltage supply has a current limit, a constant current supply has a voltage limit: they will source a given current as long as the load resistance times that current doesn't exceed the voltage limit. For example, a 5 A CCS with a compliance voltage of 30 V would only source 5 A into a resistance if it is less than or equal to $$ \frac{30 V}{5 A} = 6\Omega $$
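The constant-voltage numbers worked through above can be reproduced in a few lines (plain Python, values from the answer):

```python
# Constant-voltage case: each branch draws V/R independently;
# the total equals V divided by the parallel combination.

V = 5.0            # supply voltage, volts
R1, R2 = 2.0, 3.0  # branch resistances, ohms

I1, I2 = V / R1, V / R2           # 2.5 A and ~1.667 A
R_par = R1 * R2 / (R1 + R2)       # 1.2 ohms

print(I1, I2, I1 + I2, V / R_par)  # totals agree (~4.167 A)
```

Scaling both resistors by 1000 (the kilohm circuit) scales both currents down by 1000, which is the answer's point: the ratio alone doesn't fix the currents.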
H: How to convert SR (NOR) latch into S'R' (NAND) latch What is the general rule of thumb to convert an SR (NOR) latch into an S'R' (NAND) latch? I only know the two inputs (S,R) need to be inverted. What about outputs (Q)? AI: For normal operation, the conversion is as you describe. A NOR ff has positive-true logic action (a high level asserts the output) and a NAND ff has negative-true inputs. In terms of input action, one inverter at each input effects the conversion. And you are correct about the outputs. Swapping them restores which output will go high when an input (before the added input inverter) goes high. What changes is what happens when both inputs are true. With a NOR ff, both outputs are low; with a NAND ff, both outputs are high. If you want the NAND circuit to behave like the NOR circuit in this state, then do not swap the outputs, and add an inverter to each one. Now you have completed the DeMorgan transformation of the circuit. https://en.wikipedia.org/wiki/De_Morgan%27s_laws#Engineering https://en.wikipedia.org/wiki/De_Morgan%27s_laws#Generalising_De_Morgan_duality
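The inverted-inputs/swapped-outputs rule can be sanity-checked by brute force; a sketch (plain Python) that models each cross-coupled pair by iterating to a fixed point, testing only the set and reset states (hold depends on history, and the both-asserted state differs as described above):

```python
def settle(gate, in1, in2, o1=0, o2=1):
    """Iterate a cross-coupled gate pair to a fixed point."""
    for _ in range(8):
        o1, o2 = gate(in1, o2), gate(in2, o1)
    return o1, o2

nor = lambda a, b: int(not (a or b))
nand = lambda a, b: int(not (a and b))

# NOR latch wiring: gate 1 gets R, gate 2 gets S; o1 = Q, o2 = Q'.
# Same topology with NAND gates and inverted inputs: outputs come out swapped.
for s, r in [(1, 0), (0, 1)]:
    q, qb = settle(nor, r, s)
    o1, o2 = settle(nand, 1 - r, 1 - s)
    print((s, r), (q, qb) == (o2, o1))  # swapped outputs match
```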
H: Wheatstone bridge connected to op-amp: confusion about Vout formula I am not sure when the correct formula for Vout is Vout = G(VA - VB) or Vout = -G(VA - VB). Can you explain the difference between these two examples? The circuits and the formulas provided are from my textbook, yet I struggle to understand how the formula for Vout is derived in each case. AI: As in Spehro's answer, you need to assume that the circuit diagram examples are actually simplified versions of a bridge amplifier, with some details not shown (e.g. feedback, bias, and gain resistors). In both examples it is shown that the resistive sensor located at the bottom right increases, as shown by the (R+dR), so typically the voltage at point A will be greater than at point B. In bridge jargon, with this arrangement you could label point A as the +Signal connection and point B as the -Signal connection. So starting with example diagram 2, you have the +Signal output to the +input of the amplifier and the -Signal to the -input of the amplifier, (++, --). Therefore the amplifier output would be G(VA-VB). In example diagram 1 you have the bridge outputs connected in reverse: the +Signal connects to the -input and the -Signal to the +input, (+-, -+). So in this case the amplifier output is also reversed, giving the negated output -G(VA-VB).
H: What is the "problem" with directly coupled amplifiers? As in the title, my question is basically about understanding what is the "problem" with directly-coupled (below, I will use dc for "directly-coupled" and DC for "direct current") amplifiers which necessitates the use of coupling capacitors? Note that this question is being asked in the context of old-school discrete-circuit amplifiers, but I think that understanding that case will be helpful for me (and hopefully others) in deeply understanding the problem which differential pairs solve. Consider for example the figure below (from Gray, Hurst, Lewis, and Meyer, 5th edition): We see that the "classical" biasing arrangement for each stage (using the degeneration resistors at the emitters and voltage division at the bases) is used, and that the signal input flows through a few coupling capacitors. I should emphasize that this circuit is by no means optimal even under the discrete-circuit paradigm of the day. The point being made by the authors is around the discrete-circuit implementation using large resistors and caps, whereas an integrated-circuit implementation cannot use coupling caps, and so must face the direct-coupling problem head on rather than averting it. I am in particular wondering about the need for the coupling caps in this question: My question is why these coupling capacitors are necessary, and I give the two suspicions I have below. If someone is able to explain if one or the other (or something else entirely) is correct, I would greatly appreciate it. Again, I am trying to understand why we can't get away without coupling capacitors (or differential pairs). (1) Not using coupling capacitors makes doing DC biasing hard, since then the biasing of a given stage is "connected" to an earlier stage.
I am skeptical that this is the reason for the capacitors since it seems to me that, if we worked hard enough, we could overcome this by simply considering the path to the preceding transistor as being in parallel with the lower resistor in the voltage division circuit at each base and adjusting our biasing resistor values accordingly. (2) If there is some DC component to \$v_i\$ (or some DC noise injected at any subsequent stage's input, or referenced thereto) then without the coupling capacitors we would get a dc signal which messes with the circuit. This is very hazy in my head and I can't quite figure out what "messes with" ought to mean. I guess I can argue by superposition and say that this "propagating DC signal/offset" would ride on top of whatever the intended bias voltages are, thereby changing the bias point (and also therefore the relevant small-signal quantities) and therefore the amplification at the intended frequencies? In the end, I believe this question is related to my earlier one here which has a very nice answer from Andy, but I am still struggling to understand whether the issue is a biasing issue or a dc small signal, as per (1) and (2) above, respectively. As I write above, I suspect (2) is the issue, but I am not sure because Andy's very nice answer seems in retrospect to me to be concerned with the biasing at each stage, but perhaps I am misunderstanding. AI: The author's goal The textbook illustration presents a conceptual circuit diagram of an AC amplifier. That is why it consists of simplified amplifier stages with identical biasing circuits. As the OP also noted, the goal is probably more to show the idea than the most perfect implementation. So, we should not criticize and try to improve it but use it as it is. Basic idea AC amplifier Think of the coupling capacitors as "rechargeable batteries" that shift the voltage variations from the previous stage to the next stage.
They do that by adding/subtracting their DC voltage in a series manner to/from the voltage of the previous stage. From this point of view, biasing can be considered relative not to ground but to the previous stage (i.e. each next stage is "shifted" down or up relative to the previous one). You can think of the stage outputs and inputs as DC voltage sources with different voltages. If you connect them directly, equalizing currents will flow and their DC voltages will change. The role of the charged coupling capacitors is to equalize their DC voltages. For example, if the output voltage of the previous stage is 6 V and the input voltage of the next stage is 1 V, we must connect them through a 5 V charged coupling capacitor to "fill" the difference between them. The problem with the coupling capacitors is that at an insufficiently high frequency, their voltage will vary with that frequency; therefore these amplifiers are AC amplifiers. DC amplifier If we want them to be DC amplifiers, we need to connect their stages via "DC shifting elements". Such, for example, can be real voltage sources, but this is not convenient. Also, these can be resistors supplied by constant current sources; then the voltage drop across them will be constant (this trick is used by Widlar in his first op-amp). They can also be Zener diodes or "rubber Zener diodes", especially if supplied by a constant current. Differential amplifier Although the coupling techniques above with "hard-coupling" elements between single-ended stages are possible, they are rarely used because they transmit both the useful and undesired signals equally well. For example, if the coupling elements change their voltage even slightly, this is perceived as a useful signal and induces an offset voltage at the output. The trickier solution is to connect differential stages directly. But how is this possible?
The clever trick on which the differential amplifier is based is that (in the so-called "common mode") its "internal ground" (the common emitter point in the case of a long-tailed pair) is not fixed but "movable". So instead of the output voltage of the previous stage going down to zero voltage ground, the differential stage raises its internal ground to the level of that voltage. Then, in the "differential mode", the input voltage(s) "wiggle" around this "shifted ground". CircuitLab experiments Let's bring this sterile textbook circuit diagram to life with the help of CircuitLab. Due to scaling it is quite reduced; so it is good to look at it in the editor. I would like to connect more meters, but the schematic would be cluttered; so I have put only two voltmeters in the output stage to show the output voltages. They also act as a 1 kΩ emitter resistor and 10 kΩ load resistor (I have given them such internal resistances in the parameters window). The resistance values are illustrative and roughly chosen for the purpose. I propose to conduct the experiments in the following order: Coupling capacitors Without "messing" DC input voltage: Let's first set the AC input voltage to zero (short the input) and observe the stage output voltages and base bias voltages by hovering the mouse over the circuit diagram. simulate this circuit – Schematic created using CircuitLab With "messing" DC input voltage: Let's now add a small 10 mV DC component V1 to the input (the OP's suggestion) and to the outputs (V2 and V3) of the two voltage-amplifying stages. We see that the coupling capacitors decrease by that much their voltage and nothing in the circuit changes. simulate this circuit "Coupling" voltmeters Before we replace the "soft" capacitors with "hard" voltage sources to make a DC amplifier, let's measure the voltages across them by replacing them with voltmeters. 
simulate this circuit Coupling batteries Without DC input voltage: Then we adjust the "rechargeable batteries" to these voltages and put them in place of the voltmeters. simulate this circuit With DC input voltage: Our "direct-coupled amplifier" is ready. If we now apply the same 10 mV DC input voltage as above, the output voltages across R11 and R12 change. simulate this circuit
H: Could a voltage divider with more than two resistors be used? As a step in an exercise, I'm trying to find voltages \$V_1\$ and \$V_2\$ below Could a voltage divider (with more than two resistors) be used here? So that, for example: $$V_2 = \frac{2\ \mathrm{k\Omega}}{2\ \mathrm{k\Omega}+1\ \mathrm{k\Omega}+2\ \mathrm{k\Omega}}\cdot 4\ \mathrm{V} = 1.6\ \mathrm{V}$$ AI: Yes. It does not matter whether there are two, three, or any other number of resistors in series; you know all the resistances and voltages with Ohm's law and can use it as often as needed to find a formula for a voltage divider with two or any other number of resistors. But V2 will not be 1.6V. Otherwise, you are on the right track.
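The general series-string rule the answer confirms can be sketched in a couple of lines (plain Python). Note the caveat: this only applies when the resistors truly form a single series string carrying the same current, and the answer indicates that in the actual exercise circuit V2 is not 1.6 V, so the topology there must differ from a plain string:

```python
# Series-string voltage divider: the drop across resistor k is
# V_src * R[k] / sum(R). Valid only when all resistors carry the
# same current (no other branches).

def divider_drop(v_src, resistors, k):
    return v_src * resistors[k] / sum(resistors)

# The asker's numbers (2 k, 1 k, 2 k across 4 V):
print(divider_drop(4.0, [2000, 1000, 2000], 2))  # 1.6 for a pure series string
```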
H: Half-bridge driver giving high-side gate full input voltage Using an IRS2513 half bridge driver tuned at 30 kHz with 1MBH60-100 1000 V, 60 A IGBTs, the driver is giving the high-side gate the full 24 V input voltage. I have included a picture of how I configured the driver and how the oscilloscope looked when measuring (measured from HO to GND and LO to GND). Why is this happening? The driver has an internal 15.4 V zener clamp. AI: Turns out my scope probes were placed incorrectly. I was measuring from HO to Gnd and should have been measuring from HO to VS.
H: Strange comparator behavior I am testing the following comparator schematic: simulate this circuit – Schematic created using CircuitLab I've replaced the original 280 kΩ resistor with R1 and trimpot R3. The hysteresis resistors are calculated to switch on at 30.78 V battery (2.54 V on + input) and switch off at 29.78 V (2.46 V on + input). Battery voltage at testing time was 28.89 V. So, I am pressing the button SW1 and turning the pot until the comparator triggers ON. First, I noticed some flicker on the output before switching to a solid ON. This is already strange, since hysteresis should have taken care of it. When I release the button the comparator stays ON for a random period of 5 to 15 seconds, then switches OFF. I thought the calculations were wrong, so I connected a voltmeter to TP1. The comparator switched ON at 2.63 V, not at the expected 2.54 V. With the button released the voltage dropped to 2.35 V and the comparator switched OFF immediately. I re-tested everything several times and the result is the same: with the voltmeter connected the circuit switches ON and OFF immediately when the button is pressed and released. There is no jitter when I rotate the potentiometer either. Without the voltmeter there is a jitter and then it stays ON for some time. Furthermore, connecting the voltmeter sometimes activates the output, sometimes not. Considering the very high voltmeter impedance this is very strange. Any ideas why this is happening and what I can do to fix it? UPDATE: The original circuit had two problems that combined to produce the observed strange behavior. First, I failed to account for the 5V feedback voltage injected into the voltage divider via R4, R5. Accidentally, with the selected R1-R3 it was enough to keep the TP1 voltage at 2.457V, i.e. within millivolts of the OFF threshold. Connecting the voltmeter pushed this down enough to trip the comparator. This was fixed by changing R1 to 200k. Second, the high resistor values made the circuit susceptible to noise.
This was fixed by adding 1nF capacitors to TP1 and the 2.5V reference. I've used smaller values because they will remain in the circuit after the test button is removed, so they will affect the dynamic response. My thanks to @unawriter and @mosfet for helping me pinpoint these problems. UPDATE 2: After more testing it became apparent that while changing R1 did make the circuit work, the experimental resistance values and thresholds do not correspond to the theoretical calculations. It seems the hysteresis formulas in the comparator datasheet assume a very low source impedance. In this case it is very high, so the feedback chain significantly affects the thresholds. Maybe I should ask another question specifically about the calculations. AI: Use lower-value resistors (on the order of tens of kΩ). With the values you selected, everything is acting like an antenna, especially if this is on a breadboard. Also, your "very high voltmeter impedance" isn't very high compared to the resistances of your circuit.
H: Protecting parallel loads I have a project that has two separate sections that must be powered from the same power source. I do not know the resistance of said loads, but I know the power supply that each of them on their own would need. I apologize in advance for any ignorant mistakes you find in this question, I'm a software guy and I haven't done much beyond following Arduino schematics. The first one (L1) requires a 5 V power supply capable of reaching at least 3 A. The second one (L2) requires a 5 V power supply that is limited to 2 A. The pretty obvious conflict is that no power supply meets both requirements at once, so I need to use a power supply that can supply both loads at max. load and protect L2 so that it isn't accidentally overloaded. How can I do this? My first (rather uninformed) thought was to use a voltage divider, but after asking another question here I noticed that that is probably not going to work. I've searched some more around and I haven't been able to find anything that could work. I've thought of using a current limiter, but all I have found is about limiting the current to the whole circuit, and I am unsure of how it would behave in a parallel circuit Here's a diagram of how the project more or less looks: I've used resistors to represent the loads, but I do not know the resistance of each load (they might as well vary since they are a microcontroller and a screen). I need to somehow ensure that I can provide at least 3 A to L1, while ensuring that L2 never receives over 2 A. How can I go about this? I have thought about powering the screen from the microcontroller itself (so technically in series?) but I'm not sure that the microcontroller can provide enough current or at a close enough voltage to 5 V through its IO pins to power the screen, so I don't know. I don't want to risk anything breaking because the components are expensive so I'd rather figure things out in theory before attempting anything with the lab power supply. 
In detail, L1 is a Geekworm X635 board with a Raspberry Pi CM4 and an NVMe SSD, and L2 is just a generic LCD screen. AI: You simply need to have a power supply that can supply enough current for both devices operating at their max current consumption state. L1 requires at least 3A. L2 draws, at most, 2A. Take the sum of these loads: 3A + 2A = 5A. You will need a power supply that can provide 5A @ 5V. A good engineer will go with something higher than 5A in a professional setting where reliability matters. In other words, the loads are only going to draw as much current as they need - not how much current the power supply can force down their proverbial throats. The max and min current specs are device specs, not power supply specs.
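The sizing arithmetic above can be sketched as a tiny helper. Note the 25% headroom factor is my own assumed engineering margin, not a figure from the answer:

```python
# Sizing rule from the answer: sum the worst-case load currents, then add
# headroom. The 1.25 headroom factor is an assumed margin, not a hard rule.
def supply_current(load_currents_a, headroom=1.25):
    """Return a recommended supply current rating in amps."""
    return sum(load_currents_a) * headroom

total = supply_current([3.0, 2.0])  # L1 needs 3 A, L2 draws at most 2 A
print(total)  # 6.25 A rating, comfortably above the 5 A worst case
```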
H: Why is the output voltage across the load dropping a lot in this audio amplifier circuit? I am making an audio amplifier circuit in LTspice. This is the circuit that I've made: Notice that the input is 20 mV. The output plot is as follows: That is around the gain I want to obtain. The speaker that I will be using is an 8 Ω, 0.5 W speaker that can sustain a maximum 10 V peak-to-peak input. The problem is that as soon as I model this in LTspice, the voltage just drops too much and the shape distorts. Why is this happening and how do I fix it? Does it have anything to do with impedances? AI: There are other, better topologies for an audio amp, but what you have can be improved without starting over. As mentioned by JS, R10 and R11 are too large. Starting with your 4 V peak output voltage and the resulting peak output current, take the transistor gain at that current (from the datasheet), divide that in half for operating margin, and use that to calculate the required base current. 100% of the base current comes from R10 and R11. You have two opamps plus a 1-transistor gain stage, all for a total gain of 6. You can keep C1, eliminate U1 and Q1 (and its parts), and configure U3 for a gain of 6. There is no DC reference at the input to U1. Now that U1 is gone, add a 100K resistor from the U3 non-inverting input to GND. Delete R9, C2, and C4. I don't know why they are there, but they make things worse. The output stage needs a low driving-point impedance, such as the direct output of an opamp. The way the circuit actually works, U3 does not provide any base current to the output transistors. It shunts current from R10 and R11 away from the output transistors. For example, in a positive half-cycle, R10 pulls up the Q2 base and sources its base current. For any particular instantaneous voltage, R10 pulls up the base until D1 starts to conduct, which is when the base is 0.7 V greater than the U3 output voltage.
To pull the base down, the U3 output goes down, pulling down D1, which pulls down R10 and the Q2 base. Note that a 741 might not be able to provide enough output current to drive the output stage to the level you want. Still, you should be able to get things closer to your desired output.
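The base-current sizing steps above can be sketched numerically. The beta value below is an assumed hFE for illustration; take the real figure from the output transistor's datasheet at the actual peak current:

```python
# Back-of-envelope version of the sizing steps above. beta_datasheet is an
# assumed value; the 4 V / 8 ohm numbers come from the question's circuit.
v_peak = 4.0          # peak output voltage, V
r_load = 8.0          # speaker impedance, ohms
beta_datasheet = 100  # assumed hFE at the peak current (check the datasheet)

i_peak = v_peak / r_load                 # 0.5 A peak output current
beta_margin = beta_datasheet / 2         # halve the gain for operating margin
i_base = i_peak / beta_margin            # required peak base current
print(i_base)  # 0.01 A = 10 mA that R10/R11 must be able to source
```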
H: Trigger a 3 V line with 12 V I would like to trigger a 3 V line (a push dimmer circuit) with a touch button (which is powered by 12 V). Closing the 3 V line will turn the LED strip off (short press), turn it on (short press), or dim it (long press). Here is my complete circuit: How can I connect the outgoing 12 V line from the touch button to close the 3 V line (the red question mark)? Can this be realized by a transistor, optocoupler, etc.? Hardware used: LED controller: GLEDOPTO GL-C-013P (zigbee (3 wire/2 wire 2in1)) LED Controller Touch button: Touch Button AI: If the grounds of the 3 V and 12 V sides are or can be connected together (not isolated), you can use an NPN transistor such as the 2N3904 with 47 kΩ in the base. Otherwise you have to use an optocoupler.
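As a rough sanity check of the NPN option (all numbers below are assumed illustration values, not measured from the actual dimmer circuit):

```python
# Rough check that a 47 k base resistor driven from 12 V gives enough base
# current to saturate a small NPN switching a light 3 V logic-level load.
v_drive, v_be, r_base = 12.0, 0.7, 47e3   # assumed drive and Vbe values
i_base = (v_drive - v_be) / r_base        # ~0.24 mA of base current
i_collector_max = i_base * 10             # forced beta of 10 for hard saturation
print(i_base * 1e3, i_collector_max * 1e3)  # both in mA
```

A push-dimmer input typically sinks well under a milliamp, so even with a conservative forced beta the transistor stays firmly saturated.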
H: CMOS (Energy Supply of voltage) Can someone please explain why, when \$In\$ is transitioning from low to high, the energy supplied is \$C_{vdd}\cdot V_{dd}^2\$? AI: The formula for energy stored in a capacitor is: $$ E = \frac{1}{2}CV^2 $$ In either state, one of those capacitors is fully charged, with energy \$\frac{1}{2}C{V_{DD}}^2\$ joules, and the other fully discharged, with 0J of energy. The total stored energy in both cases is therefore: $$ E_{TOTAL} = \frac{1}{2}C{V_{DD}}^2 $$ In other words, the net change in stored energy when state changes from top capacitor charged and bottom discharged, to top discharged and bottom charged is zero. You might think that the power supply doesn't need to provide any energy at all, since net stored energy hasn't changed, but you'd be wrong. There's a cost to charging a capacitor, and a cost to discharging it, because the charging path has resistance. The supply has non-zero internal resistance, and the MOSFETs have non-zero on-resistance, and all current that results from the charging and discharging of those capacitors will have to pass via these resistances. Consequently those resistances will dissipate power, and that energy is lost as heat. That energy is what the supply is delivering to the system during each transition between states. I'm going to ignore the supply's internal resistance, and assume that the only resistance in play is the MOSFET's own \$R_{DS(ON)}\$. Assuming we start with the upper capacitor (\$C_{VDD}\$) discharged (0V across it), and the lower one (\$C_{GND}\$) charged (the full \$V_{DD}\$ across it). When we switch on the lower MOSFET, to discharge \$C_{GND}\$ and charge \$C_{VDD}\$, the system looks like this: simulate this circuit – Schematic created using CircuitLab I've renamed stuff to make the algebra easier to write. The voltage across C1 is \$V_{C1} = V_S - V_X\$. The voltage across C2 is \$V_{C2} = V_X - 0V = V_X\$. The voltage across R is \$V_R = V_X - 0V = V_X\$.
Supply voltage is fixed at \$V_S\$. I'm not sure that the following is the easiest or quickest proof. There's probably a more intuitive approach that I haven't spotted. We wish to find the energy delivered to the rest of the circuit by source VS. Energy is the time-integral of power, where power \$P=IV\$, the product of voltage across the recipient and current through it: $$ \begin{aligned} E &= \int^\infty_{t=0}{P \cdot dt} \\ \\ &= \int^\infty_{t=0}{IV_S \cdot dt} \\ \\ &= V_S\int^\infty_{t=0}{I \cdot dt} \\ \\ \end{aligned} $$ By Ohm's law, current \$I_1\$ through R is: $$ I_1 = \frac{V_X}{R} $$ Capacitor C1 current is: $$ \begin{aligned} I &= C_1 \frac{d(V_S-V_X)}{dt} \\ \\ &= C_1 \frac{dV_S}{dt} - C_1 \frac{dV_X}{dt} \\ \\ \end{aligned} $$ Remembering that \$V_S\$ is constant, so \$\frac{dV_S}{dt}=0\$, that simplifies to: $$ I = - C_1 \frac{dV_X}{dt} $$ Capacitor C2 current is: $$ I_2 = C_2\frac{dV_X}{dt} $$ With expressions for all three currents, we can combine them using KCL, which says: $$ \begin{aligned} I &= I_1 + I_2 \\ \\ - C_1 \frac{dV_X}{dt} &= \frac{V_X}{R} + C_2\frac{dV_X}{dt} \\ \\ - \frac{dV_X}{dt}(C_1 + C_2) &= \frac{V_X}{R} \\ \\ \frac{dV_X}{dt} &= -\frac{V_X}{R(C_1+C_2)} \end{aligned} $$ Solving this differential equation for \$V_X\$ is fairly trivial; I won't show any working, just the solution: $$ V_X(t) = V_X(0)e^{-\frac{t}{R(C_1+C_2)}} $$ \$V_X(0)\$ is the initial potential of node X, at time \$t=0\$, which we know to be \$V_S\$, so this becomes: $$ V_X(t) = V_Se^{-\frac{t}{R(C_1+C_2)}} $$ You might recognise that last equation as the classic capacitor discharge formula. Interestingly, it has time constant \$R(C_1+C_2)\$, as if C1 and C2 were in parallel. What this tells us is that potential \$V_X\$ starts equal to \$V_S\$, and decays exponentially to zero, with a time constant of \$R(C_1+C_2)\$ seconds.
Plug our expressions for \$V_X\$ and its derivative back into our equation for \$I\$: $$ \begin{aligned} I &= -C_1 \frac{dV_X}{dt} \\ \\ &= (-C_1)(-\frac{V_X}{R(C_1+C_2)}) \\ \\ &= \frac{C_1}{R(C_1+C_2)}V_X \\ \\ &= \frac{C_1}{R(C_1+C_2)}V_Se^{-\frac{t}{R(C_1+C_2)}} \\ \\ \end{aligned} $$ I'll substitute \$\tau=R(C_1+C_2)\$, just to tidy things up: $$ I = \frac{C_1}{\tau}V_Se^{-\frac{t}{\tau}} $$ We're ready to calculate energy \$E\$: $$ \begin{aligned} E &= V_S\int^\infty_{t=0}{I \cdot dt} \\ \\ &= V_S\int^\infty_{t=0}{\frac{C_1}{\tau}V_Se^{-\frac{t}{\tau}} \cdot dt} \\ \\ &= {V_S}^2\frac{C_1}{\tau}\int^\infty_{t=0}{e^{-\frac{t}{\tau}} \cdot dt} \\ \\ &= {V_S}^2\frac{C_1}{\tau}\left[ -\tau e^{-\frac{t}{\tau}} \right]^\infty_{t=0} \\ \\ &= {V_S}^2C_1\left[ -e^{-\frac{t}{\tau}} \right]^\infty_{t=0} \\ \\ &= {V_S}^2C_1\left[\vphantom{\frac{}{}} (0) - (-1) \right] \\ \\ &= {V_S}^2C_1 \end{aligned} $$ That's the energy supplied by voltage source VS (\$V_{DD}\$) to the entire system, during a transition from output high to output low, (or input low to high, since this is an inverter). You can repeat the procedure for a transition from low to high, and I'm sure you'll obtain a similar result, but with C2 (\$C_{GND}\$) in the expression instead.
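As a sanity check, the closed-form result \$E = {V_S}^2 C_1\$ can be verified numerically by integrating the supplied power \$P(t) = V_S \cdot I(t)\$ for arbitrary example values (the component values below are made up):

```python
# Numerical check of the closed-form result E = Vs^2 * C1 derived above,
# by integrating P(t) = Vs * I(t) with I(t) = (C1/tau) * Vs * exp(-t/tau).
import math

Vs, C1, C2, R = 5.0, 1e-9, 2.2e-9, 100.0   # arbitrary example values
tau = R * (C1 + C2)

def current(t):
    return (C1 / tau) * Vs * math.exp(-t / tau)

# Simple numerical integration of supplied power over ~20 time constants
n, t_end = 200000, 20 * tau
dt = t_end / n
energy = sum(Vs * current(i * dt) * dt for i in range(n))
print(energy, Vs**2 * C1)  # the two agree to within integration error
```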
H: Is 3004M1C a photodiode or a phototransistor? The datasheet describes the 3004M1C as a "high speed and high sensitive PIN photodiode". Distributors also label it as a photodiode. However, the datasheet mentions "Collector-Emitter Breakdown Voltage", "Emitter-Collector Breakdown Voltage", "Collector-Emitter Saturation Voltage", which seems to make sense only for phototransistors. I also tried some checks suggested in "is it a photodiode or a phototransistor?": The open circuit voltage of the 3004M1C is 0 when I point a bright light to it. Also, the diode tester doesn't read anything. So, is the 3004M1C a phototransistor mislabeled as a photodiode? AI: TLDR: Data sheet is not to be trusted. Page 1 describes a photodiode with words only. Page 2 refers to the device as a "LED" Page 2 suggests it is made of Silicon (no LEDs are made of silicon), but photodiodes and phototransistors are mostly silicon-based. Page 3 tells of specifications for a phototransistor, not a photodiode. Page 4,5 describes specifications compatible with a photodiode Perhaps HUIYUAN collated these datasheet pages incorrectly, incorporating page 3 from some other source. Pages describing a "photodiode" outnumber pages describing something else. But voting in this case is a dubious metric. Maybe suitable for hobby purposes. The test of measuring open circuit voltage is a satisfying one to me, for a two-terminal photodiode device. Use a voltmeter having high internal resistance. For example, many digital multimeters have 10 Mohms input resistance, whereas analog multimeters have far less, perhaps as low as 10kohms. Illuminated by sunlight, or light from an incandescent lamp at close range, you should see some DC voltage, but no more than about 0.6V. The more positive terminal is the photodiode anode. Be aware that this test will not discriminate a LED from a photodiode - both LEDs and photodiodes can convert light to photocurrent.
H: Assistance identifying an op-amp marked "H2P IFA"- includes photos I'm looking for help identifying this chip, which I believe is a dual op-amp (pin 4 is grounded, pin 8 is power). Package appears to be QFN, possibly with a pad (there are no vias under the chip). I suspect an op-amp because of the application (analogue sensor signal conditioning board) with several other op-amps. If it is an op-amp, it will be a low-noise, low-offset, low-bias variety given the instrumentation application. I've tried looking up the marking "H2P IFA" using a variety of reverse lookup tables without success (H2P returns some 5-pin devices). I've said that the second line of marking is "IFA", but the "I" is curiously-wide. Under the scope it looks vaguely like a cartoon pine tree or a gummy bear (see images). Any assistance would be appreciated! AI: Could be Analog Devices ADA4896 series. Single channel rail-to-rail OP. Marking H2P. Ground on pin 4, supply on pin 7 (pin 8 is a /Disable pin and could be tied to Vdd probably). And this might be what they call a "LFCSP-8" package aka MO-229, with a pad underneath the chip indeed (I'd probably call it "QFN" too).
H: Why do different multimeters give me very different current measurements? I was trying to calculate R_TH using an open-circuit voltage and a short-circuit current, and I am having trouble with the current measurement. I noticed that my calculated R_TH didn’t match my measured value. I’ve tried 3 different multimeters and on all of them, my voltage and resistance values are consistent, but my current values are all completely different. I checked the fuses and there doesn’t seem to be anything wrong with the multimeters. Any idea on what I’m doing wrong? The cheapest multimeter is the only one that appears to give a valid current reading (based on the voltage and resistance values). I got 0.96 mA with a DT830B multimeter (unsure of the brand), 1.3 mA with a Mastfuyi FY8233E multimeter, and 2.47 mA with a Klein MM400 multimeter. The reason I don't have the blue one on the 60 mA setting is because it was giving 14.88 mA, which was wayyyy off. I don’t know why it was doing that either. I’ve attached images below of the multimeters I used: AI: These multimeters measure current by actually measuring voltage across a resistor through which the current passes, and applying Ohm's law to calculate and display the corresponding current. Here is what you think you're doing, when measuring short-circuit current: simulate this circuit – Schematic created using CircuitLab This is what you are actually doing: simulate this circuit The multimeter is inserting a 100Ω current shunt resistor RM into the current path, and measuring the voltage across it. It would make the following calculation, and display the result: $$ I = \frac{V_M}{R_M} = \frac{833mV}{100\Omega} = 8.3mA $$ We are taught that ammeters are supposed to have zero resistance between their probes, not 100Ω as I've used here, but sadly that's not the case. Consequently, the multimeter displays only 8.3mA, not the expected 10mA. I used 100Ω as an example.
Different multimeters will use different values, and this resistance also changes depending on the range you selected on the meter. You'll likely get more accurate results (at the cost of precision) by making the measurement on a higher current range, rather than milliamps, because the shunt resistance will be commensurately smaller (by perhaps an order of magnitude) for higher current ranges. Bear in mind that it's perfectly acceptable, and quite common practice, to perform current measurements in exactly the same way that the multimeter did, but using a shunt resistor of your choice. In the examples above, I can get near 1% accuracy by using a 4.7Ω (1%) resistor that I get out of my component drawer, placing that in the current path, and measuring the voltage across that instead; ironically, using the same multimeter in millivolts mode!
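The effect described above is easy to reproduce numerically. The values below reconstruct the hypothetical 100 Ω shunt example from the answer (a 5 V source behind 500 Ω gives the expected 10 mA and displayed 8.3 mA):

```python
# Illustration of how a meter's shunt (burden) resistance skews a
# short-circuit current reading, as described in the answer above.
def measured_current(v_source, r_source, r_shunt):
    """Current that actually flows with the meter's shunt in the loop."""
    return v_source / (r_source + r_shunt)

ideal = measured_current(5.0, 500.0, 0.0)        # 10 mA with a perfect ammeter
with_shunt = measured_current(5.0, 500.0, 100.0)  # ~8.3 mA with a 100 ohm shunt
print(ideal * 1e3, with_shunt * 1e3)  # readings in mA
```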
H: DC motor control algorithm with low sample rate I have a system driven by a geared DC motor spinning at between 500 and 1000 RPM. Once in operation, the load on the motor will not change and the speed should be held constant. I have a single digital Hall-effect sensor sampling the output of the system once per rotation. Most PID controllers I’ve seen have much higher sampling rates. Will a traditional PID or PI controller work here? Are there alternative control algorithms I could use? Additional context: The motor is driving a linearly reciprocating mechanism. The load will vary within a cycle, but the average load per cycle should be constant during operation (i.e. no external forces acting on the system aside from friction and air resistance). Speed variation should be less than 1%. I can’t place more than one sensor due to space restrictions and a limited number of available IO pins on my microcontroller. AI: Discrete-time PID will do it. Sampling rate doesn't matter all that much, as long as there is no aliasing. In your case, the sensing point is always at the same mechanical angle, so it's probably fine. The upper end of the regulator bandwidth will be pretty low - a couple of Hz, say 2-4 Hz. Most likely you won't need the D term unless there's a structural resonance, and even then you'd only want to enable that term within the RPM range of the structural resonance. Do mind, though, that for stability you may end up interpolating between 2-3 sets of P, I, D coefficients, based on RPM. Say that there's a structural resonance at 750 RPM. You'd do one tuning at 500 RPM, call it PID500, another at 1000 RPM, PID1000, and then one at 750 RPM where the resonance is - PID750. These parameter sets would be blended using a set of smooth weighting functions of RPM.
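To make "discrete-time PID will do it" concrete, here is a minimal PI sketch updated once per revolution, when the Hall pulse arrives. The gains and limits are placeholder values for illustration, not a tuning recommendation:

```python
# Minimal discrete-time PI controller, called once per revolution.
# kp/ki and the output limits below are placeholder values.
class PIController:
    def __init__(self, kp, ki, out_min, out_max):
        self.kp, self.ki = kp, ki
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, setpoint_rpm, measured_rpm, dt):
        error = setpoint_rpm - measured_rpm
        # Integrate with simple anti-windup clamping
        self.integral = max(self.out_min, min(self.out_max,
                            self.integral + self.ki * error * dt))
        out = self.kp * error + self.integral
        return max(self.out_min, min(self.out_max, out))

pi = PIController(kp=0.002, ki=0.05, out_min=0.0, out_max=1.0)
duty = pi.update(750.0, 700.0, dt=60.0 / 700.0)  # dt = one revolution at 700 RPM
print(duty)  # PWM duty cycle command, clamped to [0, 1]
```

Note the sample interval dt itself varies with speed here (one revolution per update), which the controller accommodates by scaling the integral term by dt each step.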
H: How to get megahertz frequency with a 555? I need to produce a 1.7 MHz signal. Up to around 200 kHz the 555 IC and my simple astable circuit are working just fine as expected, but when I decrease R1 and R2 to less than 100 Ω the frequency doesn't increase more than about 200 kHz. I tried reducing C, but got the same result and it seems somehow I can't get frequencies beyond 300 kHz. Is there any limitation for higher frequencies with the 555? Any suggestions? AI: The datasheet from TI for the NE555 says near the top of page 9 that it works best "from < 1 mHz to 100 kHz". This is the suggested frequency range; you shouldn't expect it to work reliably outside it. Depending on whether you are using an original part from TI or a clone, it might have a higher or lower range of operation. I would suggest letting go of the idea of running a 555 at 1.7 MHz and using an RC ring oscillator built from a 7404 or 7400 instead. There are many similar variants if you search "not gate RC oscillator". Use simulation to get rough values for R and C, then use a variable resistor to tune the frequency to your desired value.
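For reference, the standard 555 astable frequency formula shows why the RC math alone doesn't predict the ceiling: on paper the component values for 1.7 MHz look plausible, but the chip's internal comparator and flip-flop delays dominate long before then (example values below are mine):

```python
# Standard 555 astable frequency formula: f = 1.44 / ((R1 + 2*R2) * C).
def astable_freq(r1, r2, c):
    return 1.44 / ((r1 + 2.0 * r2) * c)

print(astable_freq(10e3, 10e3, 4.8e-9))    # ~10 kHz, comfortably in range
print(astable_freq(100.0, 100.0, 2.8e-9))  # ~1.7 MHz on paper only; internal
                                           # propagation delays prevent it in practice
```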
H: How to boost the gain of this comparator? I'm trying to implement a comparator for an ADC. The schematic is shown as follows. Image source: S. Lan, C. Yuan, Y. Y. H. Lam and L. Siek, "An ultra low-power rail-to-rail comparator for ADC designs," 2011 IEEE 54th International Midwest Symposium on Circuits and Systems (MWSCAS), Seoul, Korea (South), 2011, pp. 1-4, doi: 10.1109/MWSCAS.2011.6026511 (link) My question is whether the gain of this opamp (it also looks like an OTA to me) is large enough: I simulated this circuit and the gain was only 20. I'll explain my understanding of the circuit below: Suppose gm1=gm2=gm, and the input is a differential signal Vd. Then the gain at node Vo2 is gm times the impedance at node Vo2 (call it Ro2). Ro2 = 1/gm6 || Rout4 || Rout2 (the impedance at node Vo2 is the parallel combination of 1/gm6 and the output resistances of M4 and M2). The gain is even less than that of a single-stage amplifier, because of the impedances in parallel. Is my analysis correct, and how can I use this topology as a comparator? AI: The circuit shown seems not to be a complete differential input stage, but just an illustration of how symmetric current mirrors could be set up. M1 and M2 are stand-ins for the rest of the input stage that you’d need to design. M3 and M6 have inherently low drain impedance, so attaching a differential stage directly to them would make little sense if high gain is desired. At minimum, you’d want a high-impedance cascode stage between the current mirrors and the input devices M1, M2. That cascode then has to be suitably biased. Various self-bias methods may be of interest. A differential stage may benefit from gm doubling, both in terms of higher gain but also of a layout more amenable to thermal compensation. Someone who actually designed such things would surely have much more to say about it. I’m just reading papers and playing on breadboards at the moment.
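To put numbers on the questioner's analysis, here's a quick estimate of the stage gain using the expression \$g_m \cdot (1/g_{m6} \| r_{o2} \| r_{o4})\$. All device values below are made up for illustration, not taken from the paper:

```python
# Rough gain estimate matching the question's analysis: gm times the parallel
# combination of 1/gm6 and the output resistances of M2 and M4.
# All device values are assumed examples, not from the cited paper.
def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

gm = 200e-6          # input pair transconductance, S (assumed)
gm6 = 10e-6          # diode-connected mirror device (assumed)
ro2, ro4 = 1e6, 1e6  # output resistances of M2 and M4 (assumed)

ro_node = parallel(1.0 / gm6, ro2, ro4)  # dominated by the 1/gm6 term
gain = gm * ro_node
print(gain)  # low single-stage gain, dominated by the 1/gm6 load
```

The point matches the answer: whatever the exact device values, the diode-connected load pins the node impedance near 1/gm6, so the gain stays low until a higher-impedance (e.g. cascoded) node is introduced.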