H: What is the importance of DC-DC booster efficiency? I am going through the Texas Instruments WEBENCH design suite and get 40% efficiency for some booster circuits. If the booster can handle the load (Vout and Iout) according to the design given by TI, what is the disadvantage of having 40% efficiency over a 70% or 85% efficiency design? Is the booster unstable? Will the booster fail at some time (within the calculated load range, can there be a voltage drop after some time)? Will the booster waste energy as heat? Energy loss and heating are OK for my application. AI: if the booster can handle the load (Vout and Iout) according to the design given by TI, what is the disadvantage of having 40% efficiency over 70% or 85% efficiency? It'll get hot, and waste heat. will the booster waste energy as heat? Yes. You can figure this out using the first law of thermodynamics: you have a system into which electrical energy is going, and out of which is coming electrical energy. There's 2.5 times as much electrical energy going in as coming out -- that means the other 1.5 times is coming out as heat, because there's really no other significant energy-shedding mechanism. is the booster unstable? Not necessarily. Efficiency and stability aren't necessarily related. will the booster fail at some time (within the calculated load range, can there be a voltage drop after some time)? That depends. For every watt you put in, you'll get out 400mW, and 600mW will get burnt up as heat. If your booster's thermal design isn't up to shedding that heat (meaning -- if your heatsinking isn't big enough), it'll get too hot and burn up.
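A quick numeric check of the first-law arithmetic above, as a Python sketch (the 0.4 W output figure is just the answer's 1 W-in example):

```python
def converter_heat(p_out_w: float, efficiency: float) -> dict:
    """Split a converter's power budget given output power and efficiency."""
    p_in = p_out_w / efficiency   # first law: all input power must go somewhere
    p_heat = p_in - p_out_w       # everything not delivered to the load is heat
    return {"p_in_w": p_in, "p_heat_w": p_heat}

# At 40% efficiency, 1 W in yields 0.4 W out and 0.6 W of heat,
# matching the 400 mW / 600 mW split in the answer.
budget = converter_heat(p_out_w=0.4, efficiency=0.40)
```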
H: Three CMOS Inverters with feedback I have been working on this circuit for many days. I just would like to know how it works. In my opinion, the key is the feedback around the 2nd CMOS inverter, but I do not really know how to analyze it. All the information I have about the circuit is in the figure. AI: The circuit is essentially this: simulate this circuit – Schematic created using CircuitLab Obviously, the outputs of NOT1 and NOT3 will conflict with each other at times. The point is that the transistors of NOT1 are large enough to "overpower" the transistors in NOT3 when necessary. Basically, the output of NOT3 functions as positive feedback for NOT2, tending to keep it in the state that it's already in. This creates hysteresis in the transfer function of the overall circuit, also known as a "Schmitt trigger".
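The hysteresis behavior described above can be sketched as a toy model (the threshold voltages here are purely illustrative, not derived from the actual transistor sizing):

```python
def schmitt(v_in, state, v_high=2.0, v_low=1.0):
    """Toy Schmitt trigger: the output flips only when the input crosses
    the threshold for the *opposite* state; positive feedback holds the
    current state in between. Thresholds are hypothetical values."""
    if state == 0 and v_in > v_high:
        return 1
    if state == 1 and v_in < v_low:
        return 0
    return state

# Sweep the input up, then down: note the two different switching points.
state, trace = 0, []
for v in (0.5, 1.5, 2.5, 1.5, 0.5):
    state = schmitt(v, state)
    trace.append(state)
# trace -> [0, 0, 1, 1, 0]: 1.5 V gives a different output on the way
# up than on the way down -- that asymmetry is the hysteresis.
```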
H: VHDL: Can't give a std_logic type a value of 0 or 1 (on or off) I have the following code from one of the files in a project: LIBRARY ieee; USE ieee.std_logic_1164.ALL; USE ieee.numeric_std.ALL; entity twoplayermux1 is port ( --inputs: show : in std_logic; set : in std_logic; input : in std_logic_vector(7 downto 0); try : in std_logic; player : in std_logic; --outputs: show_out : out std_logic; set_out : out std_logic; input_out : out std_logic_vector(7 downto 0); try_out : out std_logic; show_out2 : out std_logic; set_out2 : out std_logic; input_out2 : out std_logic_vector(7 downto 0); try_out2 : out std_logic ); end twoplayermux1; architecture Behavioral of twoplayermux1 is begin process (show,set,input,try,player) begin case(player) is when "0" => show_out <= show; set_out <= set; input_out <= input; try_out <= try; when "1" => show_out2 <= show; set_out2 <= set; input_out2 <= input; try_out2 <= try; when others => null; end case; end process twoplayermux1; end Behavioral; However, because player is an std_logic, the compiler gives me the following error: Error (10515): VHDL type mismatch error at TwoPlayerMux1.vhd(36): std_logic type does not match string literal The error is in the case statement where I wrote when "0" => and when "1" =>. If I change player to an std_logic_vector of 2 bits and write when "00" => or when "01" => then it compiles without any errors. However, I do not want a std_logic_vector of 2 bits. It needs to be 1 bit, like so: player : in std_logic_vector(1 to 1); , but that doesn't work either. NOTE: It is a project for an FPGA board. AI: You have two options: use when '0', keeping player as std_logic, or use when "0", with player declared as std_logic_vector(0 downto 0).
H: AGND and GND - Grounding in Schematics I have seen a few electrical schematics where there are separate symbols for grounds: "Agnd" and "Gnd". Can someone help me understand what the differences between those two are and where to use "Agnd" and "Gnd"? And it would really help if you could provide an answer relating to the Common Mode Choke component (as I have seen the different grounds in power sections having a CMC). Please help in very basic electrical terms. AI: AGND and GND are generally "Analog Ground" and "Everything Else Ground". The general reason they are separated is that AGND is the reference for sensitive analog circuits (amplifiers, ADCs, DACs, etc...) and GND is the reference for general digital circuits (gates, shift registers, MCUs, CPUs, FPGAs, etc...), which can impart a lot of switching noise onto their grounds. Now, you want your grounds at the same DC potential so your data converter (just an example) has analog and digital grounds at the same potential (analog ground for the analog reference, digital ground for the communications reference), which is where the CMC (common mode choke) comes in. The CMC has essentially zero resistance at DC (just the DC resistance of the choke), but a high impedance to AC or high-frequency components, like switching noise. So by separating your grounds with a CMC you get two grounds at the same DC potential, but you don't get digital switching noise affecting your sensitive analog circuits and/or measurements. Hope that helped!
H: Effect of submerging a laptop in isopropyl alcohol without removing batteries It's common knowledge that water damage on many electronic devices can be mitigated using isopropyl alcohol at a high concentration. Many guides have mentioned disconnecting the battery before using isopropyl alcohol on the device. It can be tricky to remove the battery of certain electronic devices (such as many Apple laptops), so I was wondering whether someone could please inform me about the consequences of submerging the body (not the screen) of a laptop that has been powered down in a tub containing 99.9% isopropyl alcohol without unscrewing any components (i.e. leaving the lithium-ion batteries inside the device)? The amount of isopropyl alcohol will be just enough to cover the keyboard. AI: Pure isopropyl alcohol is not conductive...but minor contaminants (such as those that may be found on a PCB after manufacturing) will cause it to become conductive. I would not do it ever. As for consequences...explosive lithium fires. If I really did not want to remove the battery, I would make a makeshift vacuum chamber to try to dry it out.
H: Signal to Noise Ratio and Inflection point (PSRR measurement in Oscilloscopes) I have a 3.3V LDO for which I am supposed to do the PSRR measurement. I have a Keysight scope to do this. While looking at this PSRR YouTube tutorial from Keysight, I have a few doubts. Around 5:20, he says that there is a wiggle in the waveform due to the signal-to-noise ratio. What is the signal here and what is the noise? Is the 200mV(p-p) mentioned earlier during the test setup the signal or the noise? Please clarify which is the signal and which is the noise. He also mentions that it can be corrected by increasing the amplitude above 200mV. How is this possible? Around 10:00, he mentions that there is something called an inflection around 2.5MHz. Can someone help me understand what the inflection he is talking about is and why it occurs? I have tried reading/researching a lot before coming here seeking answers. Please help me understand. AI: he says that there is a wiggle in the waveform due to the signal-to-noise ratio. What is the signal here and what is the noise? The scope has a wave generator which injects signal into the LDO input via the Pico accessory. Then one scope channel probes the LDO input (10:1 probe) and the other channel probes the output (1:1 probe). It automatically generates signals over a range of frequencies, then measures the PSRR which is Vin/Vout in dB. So if it injects AC signal into Vin and creates a 100mV ripple on the input, and there is a 10mV ripple on the output, then the PSRR is 100/10 = 10 = 20dB. The ratio is between AC amplitudes, ignoring DC. What it displays at 5:20 is not the actual signal, it is the PSRR versus frequency curve.
I don't know what algorithm the scope uses to compute the AC amplitudes before it does the calculation to get the PSRR, but if it is simply RMS (which would make sense) then any extra noise generated by the scope itself or the LDO will also sneak into the result and manifest as noise on the PSRR graph... i.e., as wiggles. That's what you can see on the graph. Now, since the PSRR measurement is Vin/Vout, trouble can come from noise on Vin, or noise on Vout. Usually the AC voltage on Vin is large enough to be properly observed, but if the LDO has good PSRR, the AC output voltage will be tiny, so it can disappear in noise, especially if the LDO has good PSRR but is noisy. He also mentions that it can be corrected by increasing the amplitude above 200mV. How is this possible? As long as the LDO operates in a linear fashion, increasing AC amplitude at the input also increases AC amplitude at the LDO output, so the signal we want to measure is larger, and the better signal-to-noise ratio results in a cleaner graph. However, increasing input voltage ripple too much may cause the LDO to behave in a nonlinear fashion (he shows this around 7-8 minutes) which is not what you want. PSRR is supposed to be a small-signal, linear measurement. When input AC voltage is high enough to cause non-linear distortion on the output, these are no longer small-signal conditions, and while something will be measured, it will no longer be "PSRR". You could call it "RMS AC voltage on the output with such and such AC amplitude at the input"... but not "PSRR", since the "R" means "Ratio" and thus implies linear conditions. Around 10:00, he mentions that there is something called an inflection around 2.5MHz. Can someone help me understand what the inflection he is talking about is and why it occurs? It's the dip in the curve indicated by the yellow triangle.
It is either due to the output capacitor self-resonance, or it is the point where the output impedance is no longer dependent on the LDO, but just on the output cap.
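The dB arithmetic in the answer above (100 mV in, 10 mV out giving 20 dB) can be checked in a couple of lines of Python:

```python
import math

def psrr_db(v_in_ripple: float, v_out_ripple: float) -> float:
    """PSRR as the ratio of input to output AC ripple amplitude, in dB."""
    return 20 * math.log10(v_in_ripple / v_out_ripple)

# The example from the answer: 100 mV of input ripple, 10 mV on the output.
ratio_db = psrr_db(0.100, 0.010)   # 20 dB
```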
H: DC-DC PCB power plane on inner layer I am currently designing a 6 layer board that has a couple of DC-DC converters on it. The largest one is a buck that transforms 12V to 5V with an output current of max 7A. Layer 2 would be GND, layer 3 power. The others are signal/GND. Is it good practice to put the power plane on an inner layer (layer 3)? If so, is it possible to place the input capacitors like this (only top layer shown): EDIT: Here is a bigger picture of the buck. Only a draft, not all components are connected yet, but you get the idea. Also only top layer shown. I place the input capacitors this way to save some space on the PCB. Note, before the buck, there is a battery charger, which also has output capacitors. AI: Power planes on the inner layers are very common. I don't think you would ever place a power plane on an external layer. Some boards place ground planes on both external layers so the PCB shields itself. It doesn't matter if the caps are along the same track as the current going to the power input pins (think of superposition: two separate circuits, AC and DC, not a single circuit that tops off the caps on its way to the IC). As long as each cap is connected to both power and ground planes as close as possible to the IC, that is more important. Also, placing those power and GND vias close to each other (i.e. on the sides of or under the capacitors) allows the equal and opposite direction of current flow to help cancel out some of the inductance of the vias. See these two answers for more detail: Decoupling cap routing on a 4 layer PCB 4 layer board back bypass capacitor power plane connection guideline reason This is most relevant for high frequency decoupling caps where small amounts of loop inductance are more important (ironically such caps also tend to have the least space underneath them with which to place vias since they are physically smaller).
The larger bulk decoupling caps target lower frequencies, so small amounts of inductance don't matter so much. That's why they can be farther from the load being decoupled. They have the most space with which to place vias underneath them, though, so you can do it if you want, but the caps themselves have so much parasitic inductance (as well as loop inductance from their likely placement farther from the load) that it doesn't really matter.
H: Should I use two 2:1 mux or one 4:2 mux? I am designing a circuit and I am not sure which solution would be more preferred in the design. I'd like to justify why I'd use two 2:1 muxes instead of a 4:1, but I don't seem to be able to. To be more specific, it looks like I'd have options 00, 01 and 10 in the 4:1 mux. In the 2:1 muxes, I'll have 0 and 1, and then the output will go to another mux, also with options 0 and 1. Option 0 will be the output from the first mux, and then if it is 1 I will have a different output. Edit: People have been asking for a diagram. PC is program counter. AI: Functionally, the 4:1 mux could be seen as superior in that it allows you to later add a 4th input if you so desire; meanwhile, you can simply connect C to both the 3rd and 4th inputs as Eugene suggested to obtain a functionally identical circuit to a pair of 2:1s. If the 2:1 muxes are individual ICs, then the 4:1 is also superior in terms of design simplicity; you only need to worry about powering one chip instead of two, and the layout is a simple 4-in, 1-out, rather than having to wire/route the output of the first 2:1 into the input of the second. There could still be reasons why two 2:1s would be preferable to one 4:1, but I can't think of any off the top of my head other than if the cost were lower or another party had specifically requested 2:1s. If you want to use 2:1s out of personal preference, though, you can by all means - they'll do the job perfectly well.
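The equivalence the answer describes (a 4:1 with input C duplicated on the last two inputs behaving exactly like two cascaded 2:1s) can be verified exhaustively with a small truth-table sketch:

```python
def mux2(sel, a, b):
    """2:1 mux: returns a when sel is 0, b when sel is 1."""
    return b if sel else a

def mux4(sel1, sel0, i0, i1, i2, i3):
    """4:1 mux, selected by the two-bit value (sel1, sel0)."""
    return [i0, i1, i2, i3][(sel1 << 1) | sel0]

# Two cascaded 2:1 muxes vs. a 4:1 with C tied to inputs 2 and 3:
# both select among A, B, C identically for every select combination.
A, B, C = "A", "B", "C"
for s1 in (0, 1):
    for s0 in (0, 1):
        cascaded = mux2(s1, mux2(s0, A, B), C)
        flat = mux4(s1, s0, A, B, C, C)
        assert cascaded == flat
```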
H: Feedback resistors and op-amp instability I am building a circuit which has a differential amplifier part. To keep things simple I have simplified the circuit to only show the relevant part below. One thing about op-amps that is still difficult for me is the selection of the feedback elements, that is, in what range you should choose the resistors. I have seen answers such as: high value resistors will cause issues due to bias current, while low value resistors require higher currents which lead to dissipation, etc. In the simulation below all the resistors have the same value and are swept from 1k to 100k in steps of 20k. The op-amp becomes unstable as R increases. Even at 10k you already see some oscillations. I expect that this is related to the input capacitance of the op-amp. Is this the case? Or are there other factors here? What op-amp parameter should I look for in the datasheet that could give a stable output with high resistance values? And how can one predict, based on the input signal, what value might lead to instability? The input of the system: source with 3 µs rise and fall time. where x = swept from 1k - 100k with 20k steps with x = 10k AI: This particular op amp has a note in its datasheet on page 14 about the feedback resistor and how to choose components to ensure there is no oscillation: Feedback Components Because the input currents of the LT1213/LT1214 are less than 200nA, it is possible to use high value feedback resistors to set the gain. However, care must be taken to insure [sic] that the pole that is formed by the feedback resistors and the input capacitance does not degrade the stability of the amplifier. For example, if a single supply, noninverting gain of two is set with two 10k resistors, the LT1213/LT1214 will probably oscillate. This is because the amplifier goes open-loop at 6MHz (6dB of gain) and has 45° of phase margin.
The feedback resistors and the 10pF input capacitance generate a pole at 3MHz that introduces 63° of phase shift at 6MHz! The solution is simple, lower the values of the resistors or add a feedback capacitor of 10pF or more.
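The datasheet's numbers can be reproduced: the input capacitance sees the two feedback resistors in parallel, and the extra phase shift that pole contributes at the 6 MHz crossover is what eats the phase margin. A quick check in Python (values taken from the quoted datasheet passage):

```python
import math

R1 = R2 = 10e3   # the two 10k feedback resistors from the datasheet example
C_IN = 10e-12    # 10 pF input capacitance

r_par = (R1 * R2) / (R1 + R2)              # the cap sees the resistors in parallel
f_pole = 1 / (2 * math.pi * r_par * C_IN)  # ~3.2 MHz, matching the quoted ~3 MHz

# Extra phase shift this pole contributes at the 6 MHz open-loop crossover:
phase_at_6mhz = math.degrees(math.atan(6e6 / f_pole))  # ~62 degrees
# That is more than the 45 degrees of phase margin available -- hence oscillation.
```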
H: What does 'shunt' mean in the context of amplifiers? So the term 'shunt' is used to mean the following according to the Wikipedia page https://en.wikipedia.org/wiki/Shunt_(electrical) In electronics, a shunt is a device which creates a low-resistance path for electric current, to allow it to pass around another point in the circuit. However the term 'shunt' seems to mean something else when describing a series-shunt amplifier or a shunt-series amplifier. So what does shunt mean in the context of amplifiers? AI: In the case of amplifiers with feedback, 'shunt' can mean connecting the input of the feedback network (e.g., a voltage divider) in parallel with the amp output... or the output of the feedback network in parallel with the amp input. I hope you know what 'in parallel' means? If the input impedance of the feedback network is high enough, it will not 'shunt' (divert current from) the amp output. That is why I prefer to name this connection 'parallel'.
H: Rotary potentiometer angle to voltage relation I have a 10k rotary potentiometer which turns from 0° to 270°. I wire it to an Arduino with the middle pin (2) connected to an analog input (A0) and the other two pins (1 and 3) connected to 5V and GND. simulate this circuit – Schematic created using CircuitLab I know the potentiometer is a voltage divider between pins 1, 2 and 3 as above, so as expected I get 5 volts on pin 2 when the potentiometer is turned left (0°), 0 volts when the potentiometer is turned right (270°) and something in between in the middle. The problem comes when I try to calculate the angle-voltage function. I expected that it would be a linear function that can be calculated from the two given points (0°, 5V) and (270°, 0V), but when I turn the potentiometer to 135°, which is supposed to be the middle point, I don't get 2.5 volts; instead I get something around 1.5 volts. I also noticed that when close to 270° a single degree of rotation corresponds to a bigger voltage change than when closer to 0°. So is this normal or is my potentiometer busted? And if it is normal, why is it so? EDIT I did some measurements and came up with the table below as Umar suggested:
Degrees | Ohms  | Delta
0°      | 3     | -
45°     | 480   | 480
90°     | 972   | 492
135°    | 1704  | 732
180°    | 3423  | 1719
225°    | 7280  | 3857
270°    | 10300 | 3020
So Kevin White is correct: it seems that I have a tapered audio potentiometer and the function is logarithmic (the blue curve in Kevin's answer). Thanks, everybody, for your time and effort. AI: It is almost certainly an audio taper version. They have a pseudo-logarithmic law that works better for audio applications because of the way that we perceive loudness. The A10K marking means audio taper potentiometer, 10 kilo-ohms. One marked B10K would have a linear law that does what you expect. The picture was taken from this site Resistor Guide
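The measured table makes the non-linearity easy to quantify: at mid-rotation a linear taper would sit at 50% of the track resistance, while the measured pot is nowhere close. A short Python check (the exact wiper voltage depends on which end the resistance was measured from, so only the track fraction is compared here):

```python
# Wiper-to-end resistance (ohms) versus angle, from the measured table above.
measured = {0: 3, 45: 480, 90: 972, 135: 1704, 180: 3423, 225: 7280, 270: 10300}
r_total = measured[270]

# A linear (B-taper) pot would be at 50% of the track at 135 degrees;
# this audio (A-taper) pot is at roughly 17%, which is why the wiper
# voltage at mid-rotation is far from the expected 2.5 V.
mid_fraction = measured[135] / r_total
```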
H: Perf board parasitic capacitance fries microcontroller? Recently one of my microcontrollers fried in a project on a perf board. Eight of the outputs are PWM, and aside from the power lines, there are no other outputs from the MCU. The PWM frequency is 40 kHz. All these output lines are routed to a connector with long, beefy solder tracks under the board. After going through the connector, each line meets a 2.2k resistor (series) and drives some transistor circuitry. Two of the PWM channels worked on their own for quite a while before connecting the other six, so I first thought my troubles could be the result of continuous overcurrent. The MCU is rated for 200 mA max, but 5 V/2.2 kohms * eight channels only comes out to 18 mA. I am inclined to say that the fast switching frequency combined with the beefy solder tracks pushed too much current in and out of the parasitic capacitance before the series resistors. How much parasitic capacitance can lead solder introduce? I realize I could put the resistors closer to the MCU, BEFORE the long tracks, but there is always a degree of parasitic capacitance in everything. I would like to know if the following reasoning is correct: If I solder a wire to a large copper ball (say apple-sized) and then plug the end of the wire onto my MCU PWM output pin, I will fry the unit as high current quickly rushes in and out of the ball's parasitic capacitance. If I did the same thing with just a bare wire, the capacitance would not be great enough to fry the controller. I have done this several times in projects with no issues. If the reasoning is correct (I would not like to test it :)) and all capacitors can draw huge momentary current, why does the ball fry the MCU and the wire doesn't? Is there a calculation to help me understand this better? AI: I also doubt parasitic capacitance is the issue here. I simulated a circuit with 100pF to ground (a huge amount) and a faux mosfet gate termination.
and only saw ~5mA through it with rise/fall times of 100ns. If your rise times are much faster you will see more current, but according to this post: How solder inductance affects RF circuits? you'd be lucky to get significantly more than 1-5pF on the line. Huge cap on input for testing's sake. Current through cap Do you have schematics/pictures of the circuit? Is it possible that you shorted two wires and that's why you drew so much current?
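The calculation the asker wanted is just i = C·dv/dt, and it reproduces the ~5 mA seen in the simulation above:

```python
C = 100e-12   # the deliberately huge 100 pF parasitic from the simulation
dv = 5.0      # logic swing, volts
dt = 100e-9   # rise time, seconds

# Charging current during the edge: i = C * dv/dt
i_peak = C * dv / dt   # 5 mA -- matching the simulated current,
                       # and far too small to fry an MCU output pin
```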
H: Getting the wrong Thevenin equivalent resistance It's been a while since I last worked with circuits. I'm trying to find the Thevenin equivalent resistance of this circuit. This is how I tried solving it. The correct answer according to the solutions manual is 3 ohms. What did I do wrong? AI: There are lots of ways to proceed on the problem. Your circuit example can be addressed by what's usually learned early -- before mesh or nodal analysis. So I'll start with the basic Thevenin to Norton and Norton to Thevenin conversions. Let's start. Step 1 Perform a Thevenin to Norton conversion on the left-side two components: simulate this circuit – Schematic created using CircuitLab I think you can see how this was achieved, if you are at all familiar with this transformation. Step 2 Take the two resistors in parallel and combine them: simulate this circuit Again, this is pretty basic and I'm sure you've already become familiar with this. Step 3 Perform a Norton to Thevenin conversion on the left-side two components: simulate this circuit Again, I think you are familiar with this conversion. Step 4 Take the two resistors in series and combine them: simulate this circuit Step 5 Perform a Thevenin to Norton conversion on the left-side two components: simulate this circuit Step 6 Combine the current sources: simulate this circuit Step 7 At this point, just convert back to Thevenin. Here you will get \$V_\text{TH}=125\:\text{V}\$ and \$R_\text{TH}=25\:\Omega\$. simulate this circuit Another approach is to perform nodal analysis and later inject a \$1\:\text{A}\$ source.
So, for example: simulate this circuit The nodal equations are: $$\begin{align*} \frac{V_\text{X}}{2\:\Omega}+\frac{V_\text{X}}{2\:\Omega}+\frac{V_\text{X}}{24\:\Omega}&=\frac{100\:\text{V}}{2\:\Omega}+\frac{0\:\text{V}}{2\:\Omega}+\frac{V_\text{TH}}{24\:\Omega}\\\\ \frac{V_\text{TH}}{24\:\Omega}&=\frac{V_\text{X}}{24\:\Omega}+3\:\text{A} \end{align*}$$ Which solve out as: \$V_\text{TH}=125\:\text{V}\$ and \$V_\text{X}=53\:\text{V}\$. Now, inject \$1\:\text{A}\$ into the (+) node. The new nodal equations are: $$\begin{align*} \frac{V_\text{X}}{2\:\Omega}+\frac{V_\text{X}}{2\:\Omega}+\frac{V_\text{X}}{24\:\Omega}&=\frac{100\:\text{V}}{2\:\Omega}+\frac{0\:\text{V}}{2\:\Omega}+\frac{V_\text{TH}}{24\:\Omega}\\\\ \frac{V_\text{TH}}{24\:\Omega}&=\frac{V_\text{X}}{24\:\Omega}+3\:\text{A}+1\:\text{A} \end{align*}$$ And the new results are: \$V_\text{TH}=150\:\text{V}\$ and \$V_\text{X}=54\:\text{V}\$. This means that \$R_\text{TH}=\frac{150\:\text{V}-125\:\text{V}}{1\:\text{A}-0\:\text{A}}=25\:\Omega\$. So, \$V_\text{TH}=125\:\text{V}\$ from the open-circuit case and then \$R_\text{TH}=25\:\Omega\$, as derived from changes due to the injection of a \$1\:\text{A}\$ current. So the results using nodal analysis confirm the multi-step process used earlier here.
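The two nodal systems above are small enough to solve by hand, but a short Python sketch of the same equations confirms both the open-circuit and the injected-current results:

```python
def solve_nodes(i_inject):
    """Solve the two nodal equations from the answer for a given current
    injected at the output node (0 A for open circuit, 1 A for the test).
    Written as a11*Vx + a12*Vth = b1 ; a21*Vx + a22*Vth = b2."""
    a11, a12, b1 = 1/2 + 1/2 + 1/24, -1/24, 50.0
    a21, a22, b2 = -1/24, 1/24, 3.0 + i_inject
    det = a11 * a22 - a12 * a21            # Cramer's rule for the 2x2 system
    vx = (b1 * a22 - a12 * b2) / det
    vth = (a11 * b2 - b1 * a21) / det
    return vx, vth

vx0, vth0 = solve_nodes(0.0)   # 53 V, 125 V -- the open-circuit case
vx1, vth1 = solve_nodes(1.0)   # 54 V, 150 V -- with 1 A injected
r_th = (vth1 - vth0) / 1.0     # (150 - 125) / 1 = 25 ohms
```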
H: Why are two-port networks able to use a dependent source but not an independent source? A two-port network can be modeled as black box with two-port parameters provided the two-port network consists of no internal independent sources. This restriction does not apply to a two-port network with dependent sources, why the difference as both are sources? AI: This restriction does not apply to a two-port network with dependent sources, why the difference as both are sources? When you define a two-port, you're saying the network will obey a rule of the form $${\rm outputs} = \left[ {\rm matrix} \right] {\rm inputs}$$ Where the inputs are one set or linear combination of the port voltages or currents, and the outputs are the remaining voltages or currents, and the matrix contains whichever network parameters we chose (Z-parameters, H-parameters, S-parameters, ...) This means if the inputs are 0, then the outputs will be 0. If there is an independent source in the network, then this wouldn't be true and our matrix description of the network wouldn't be valid. However if there is a dependent source in the network, then its output will be proportional to some linear combination of the input signals, so it can be readily handled with the network matrix description.
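The zero-in, zero-out property at the heart of the answer can be illustrated numerically. The Z values below are arbitrary illustrative numbers; the large Z21 entry plays the role of a dependent (controlled) source, while a constant offset vector plays the role of an internal independent source:

```python
# Z-parameter description: [V1, V2] = Z * [I1, I2].
Z = [[10.0, 2.0],
     [50.0, 20.0]]   # hypothetical values; Z21 = 50 acts like a dependent source

def port_voltages(i1, i2, offset=(0.0, 0.0)):
    """Port voltages for given port currents. A nonzero offset models an
    internal independent source, which the matrix form cannot capture."""
    v1 = Z[0][0] * i1 + Z[0][1] * i2 + offset[0]
    v2 = Z[1][0] * i1 + Z[1][1] * i2 + offset[1]
    return v1, v2

# Dependent sources only change the matrix entries: zero inputs -> zero outputs.
assert port_voltages(0, 0) == (0.0, 0.0)
# An independent source adds an offset and breaks the matrix description.
assert port_voltages(0, 0, offset=(5.0, 0.0)) != (0.0, 0.0)
```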
H: Crimp directly to PCB (or holes in PCB) I know there are certain SMD components which can receive a Dupont male connector. I know there are ways to solder male and female connectors to a board directly. As part of a manufacturing technique, I'm wondering if there is any way on the market to connect wires to a board, without soldering of any kind. So you might crimp some steel piece to a wire, then that steel piece would go into a through hole in the PCB, where it is ergonomically crimped a second time. Is there any sort of system in place for connecting wires to boards without soldering? I am looking to build a machine that can do this. AI: You did not provide any requirements, and it does not look like you've done any research. Nevertheless, there are several options available. For example for low power you can use so called "card edge" connectors, like EdgeMate series from Molex. For high currents there are Han-Fast Lock terminals from HARTING. But really, if you consider the cost of connectors, crimping tools and contact reliability, your best option will probably be a simple and dirt-cheap ring terminal either bolted or riveted to large circular pad on PCB. Note, that this is not the best option in general, I would still go for good ol' screw terminal. A bit of soldering, but beats the rest in convenience and peace of mind.
H: ADE7758 Power supply design I am designing a power meter based on the ADE7758 and I'm using the official documentation for examples; I will also use an isolating IC for the communication between the ADE7758 and the microcontroller. I have some trouble understanding why the ground of the +5V supply is referenced to the neutral line in the diagram below: The diagram is adapted from page 12 of the AN-564 Application Note from Analog Devices and depicts a power supply design for a power meter. The ADE7758 datasheet sheds some light on the connection between digital and analog grounds, saying that these should be connected at only one point (page 9). What bothers me is that the voltage regulator's ground is supposed to be 0V for the digital circuitry, while the neutral rail may not be a good reference. I have not tested the circuit and before I go ahead and do that I would like to wrap my head around its working principle. My other concern is what will happen if someone swaps the hot and neutral connections of the board; the microcontroller reference will be connected to the hot line and I want to avoid that. AI: As a bare minimum the neutral needs to tie into the ground of the metering chip because of the need for the chip to also connect to the AC line hot side for the AC voltage monitoring. See circled areas in the diagram below. Without it the voltage measurement would be like trying to use a digital multimeter with just one lead. Picture Source (page 12) Note that the reference board power supply showing connections of both sides of the transformer to the same side of the AC line is an unfortunate documentation error. In many situations this type of circuitry may very well have its power supply directly derived from the AC line without using the transformer-coupled linear 7805 design used for the reference board. In a single phase design the circuit would work just fine if the HOT and NEUTRAL were swapped.
The product packaging and usage would have to take care of the safety aspects of this. On a three phase design, on the other hand, the NEUTRAL connection must be properly comprehended versus the three phase lines. Once again safety considerations are extremely important. Final note: When working with any line-attached circuitry at the design and engineering stage, great care needs to be taken to stay safe and not make any stupid mistakes.
H: Proportional fair scheduling I came across this section in the paper Providing quality of service over a shared wireless link, IEEE Communications Magazine: Consider a very simple system with two users. The service rate (in a time slot) for user 1 is either 76.8 kb/s or 153.6 kb/s with equal probabilities 0.5. For user 2 the rates are 153.6 kb/s or 307.2 kb/s, also with equal probabilities. (Thus, the channel quality of user 2 is better on average.) Assume that channels for both users are independent, and there is an unlimited amount of data to transmit to each user. Then, a “naive” round-robin allocation of time slots to the users will result in users served at the average rates R1= 0.5 x(0.5 x 76.8 + 0.5 x 153.6)= 57.6 kb/s and R2= 0.5 x(0.5 x153.6 + 0.5 x307.2) = 115.2 kb/s, respectively. Consider another scheduling scheme where a user with a “relatively better” (in a current time slot) service rate (153.6 kb/s for user 1 and 307.2 kb/s for user 2) is scheduled. In case of a “tie” (i.e., if channels are relatively better or relatively worse for both users), the user to serve is chosen randomly with equal probabilities 0.5. A straightforward calculation shows that with this scheduling, the average rates are R1 = 67.2 and R2 = 134.4 kb/s, which is 16 percent higher for each user than with the round-robin discipline. This is an example of proportionally fair scheduling. I'm trying to figure out the rates (marked in bold) that have been obtained for the proportionally fair case. As per my understanding, user 1 would get selected over user 2 only 1/8th of the time (because with probability 0.5, user 1 sees a rate of 153.6 kb/s, which can get chosen over user 2 only 1/4th of the time, since user 2 can see a rate of 153.6 with probability 0.5, and then user 1 is selected randomly with another probability of 0.5). Then, R1 = (1/8)x(0.5 x 76.8 + 0.5 x 153.6) = 14.4 kbps and R2 = (7/8)x(0.5 x153.6 + 0.5 x307.2) = 201.6 kbps. What am I missing here?
This seems to be a simple math problem, but I think I've not understood the concept of proportional fairness correctly. AI: I see how you came up with your result -- it is a different interpretation of the phrase "relatively better." It does not mean relatively better than the other user, but relatively better for the same user. To calculate the speeds, look at a given time slot and randomly select a speed for both user 1 and user 2. One of four cases should occur, with equal probability (since speed selection is 0.5 for each): User 1 at 76.8 (low for U1), user 2 at 153.6 (low for U2). Since both are "relatively low", choose between the users with equal probability. User 1 at 153.6 (high for U1), user 2 at 153.6 (low for U2). Always choose U1. User 1 at 76.8 (low for U1), user 2 at 307.2 (high for U2). Always choose U2. User 1 at 153.6 (high for U1), user 2 at 307.2 (high for U2). Tie, so choose between the two with equal probability. The resulting average speed is then: $$\text{User 1 speed} = \frac{1}{4}\left(\frac{1}{2}\cdot76.8+153.6+0+\frac{1}{2}\cdot153.6\right)\text{ kbps} = 67.2\text{ kbps}$$ $$\text{User 2 speed} = \frac{1}{4}\left(\frac{1}{2}\cdot153.6+0+307.2+\frac{1}{2}\cdot307.2\right)\text{ kbps} = 134.4\text{ kbps}$$
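The four-case enumeration in the answer can be written out directly, which reproduces the paper's 67.2 and 134.4 kb/s figures exactly:

```python
# Enumerate the four equally likely channel states and apply the rule
# from the answer: serve whichever user's channel is "relatively better"
# for THAT user; split ties with a fair coin.
U1_LOW, U1_HIGH = 76.8, 153.6
U2_LOW, U2_HIGH = 153.6, 307.2

r1 = r2 = 0.0
for u1_high in (False, True):
    for u2_high in (False, True):
        p = 0.25                              # each joint state is equally likely
        rate1 = U1_HIGH if u1_high else U1_LOW
        rate2 = U2_HIGH if u2_high else U2_LOW
        if u1_high and not u2_high:           # only user 1 is relatively good
            r1 += p * rate1
        elif u2_high and not u1_high:         # only user 2 is relatively good
            r2 += p * rate2
        else:                                 # tie: choose each with prob. 0.5
            r1 += p * 0.5 * rate1
            r2 += p * 0.5 * rate2

# r1 -> 67.2 kb/s, r2 -> 134.4 kb/s, as in the paper.
```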
H: Which transistor to drive 24V door strike with Raspberry Pi? I want to drive a door strike at 24 V with a Raspberry Pi. The door strike needs 150 mA. I think the current that a Raspberry Pi can drive on one pin is significantly weaker than, for example, an Arduino, so I really don't want to pick the wrong transistor. Transistor datasheets still often sound like black magic to me. I was previously planning on using a ULN2803A Darlington array, but since I really only need the one output, that would be overkill. I'll add a flyback diode as well, since I don't know exactly what the mechanism inside the door strike is; it could be a coil or solenoid of some sort. AI: Your Darlington array would work fine. No problem with 'overkill' if you have it on hand and are just trying to make something work. If your load is really only 150 mA, you have many options. I would personally recommend a 'logic-level' N-MOSFET as a low-side switch, such as the NDS351N.
H: Design input of the 4 bit adder I have the answer, but I'm not sure how the input of the adder is derived. Why is it a,b,1,1 for X and 0,a,b,0 for Y? I've been searching multiple sources but I'm stuck trying to study for my exams. AI: The X input is equal to \$ab \times 4 + 3\$. The Y input is equal to \$ab \times 2\$. Add them together and you get \$ab \times 6 + 3\$.
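A quick sketch verifying the bit patterns: feeding X = (a,b,1,1) and Y = (0,a,b,0) into a 4-bit adder computes 6·ab + 3 for every 2-bit input ab (a is the MSB):

```python
# Verify that the wiring X = (a,b,1,1), Y = (0,a,b,0) makes a 4-bit
# adder compute 6*ab + 3, where "ab" is the 2-bit input (a = MSB).
for ab in range(4):
    a, b = (ab >> 1) & 1, ab & 1
    x = (a << 3) | (b << 2) | (1 << 1) | 1   # bits a,b,1,1  ->  4*ab + 3
    y = (a << 2) | (b << 1)                  # bits 0,a,b,0  ->  2*ab
    assert x == 4 * ab + 3
    assert y == 2 * ab
    assert x + y == 6 * ab + 3
print("all four inputs check out")
```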
H: Varistor in a DC circuit I'm employing a Varistor in a simple circuit that will be used in a car. The Varistor is there to ensure that any load dumps / vehicle transients will not result in catastrophe for the circuit. My question is this: Should I pass VIN through the Varistor, or should the Varistor couple VIN and GND? Circuit A has VIN passing through the Varistor: Circuit B has the Varistor tied from VIN to GND: AI: The fuse would always be before voltage-surge "shunt" clamp protection shown as Z_suppressor. The other series elements are the Z (resistance+inductance) of the source and lines. REFERENCE
H: Does proportionality always work when analyzing DC motors? Let's consider this example: a trolley that can reach 36 km/h in light traffic with a current of 700 A; when the traffic gets heavy, the speed decreases to 20 km/h. Now I want to deduce the current from this information, so my question is: do proportionality rules always apply for this kind of analysis? For example, the current would be: I2 = (20/36) * 700 = 389 A. The problem with this approach is that it contradicts how DC motors work: when speed decreases, the current increases. AI: Motor voltage (EMF) is proportional to speed only at no load (the RPM/V or kV rating). Motor current is proportional to torque. Thus current will rise as load increases while the voltage (and speed) drops from its light-load value. Overall, power drain increases, so you can presume the current rise is faster than the voltage sag, up to some point.
H: Current through relay I want to control an AC power switch at 230 VAC. I used this relay type: Relay OJE series SPST. According to the datasheet it can be driven at V = 5 V and the resistance through the coil is 125 ohms. Applying Ohm's law, I calculated the current through the coil as I = 5/125 = 0.04 A. However, I have this diagram below: from the MCU the voltage is 3.3 V, in series with 1k2, to get Ib = (3.3 - 0.7)/1k2 = 0.00216 A. My question is: how do I know this current can switch the relay? Does that mean Ib is flowing into the relay coil? In other words, is this relay type suitable? AI: ... how do I know this current can switch the relay? This depends on the BJT you choose. To get 40 mA of collector current with 2 mA of base current you need a \$\beta\$ value of at least 20. This should be pretty easy to find. But in this circuit you generally want to operate your BJT well saturated, say with a saturated \$I_c/I_b\$ ratio of 10 or so, so you might rather reduce your base resistor value to get in the neighborhood of 4 mA into the base. does that mean Ib is flowing to the relay current coil? or in other words, Is this relay type suitable? No, the whole point of the BJT (as used here) is that it provides current gain between the base and the collector. The ratio \$I_c/I_b\$ in the linear operating region is sometimes called \$\beta\$ and sometimes \$h_{fe}\$, and will be specified on the BJT datasheet. Numbers from 40 to 400 are pretty typical for this parameter. As mentioned above, in this circuit you actually don't want to operate in the linear region, but in saturation. In this mode, the \$I_c/I_b\$ ratio drops, but you can still plan around a gain of 10 or so.
If your micro really can't provide more than 2 mA from its GPIO pin, you can probably make this circuit work as you drew it, but you will want to choose a BJT with a \$\beta\$ spec of 100 or more, rather than one on the lower end with only 40.
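The sizing arithmetic above can be sketched numerically (values from the question; the 0.7 V base-emitter drop is an assumption):

```python
# Relay-driver sizing from the figures in the question.
v_relay = 5.0     # relay coil supply voltage
r_coil = 125.0    # coil resistance, ohms
v_gpio = 3.3      # MCU logic-high voltage
v_be = 0.7        # assumed base-emitter drop

i_coil = v_relay / r_coil             # 40 mA collector current needed
i_base = (v_gpio - v_be) / 1200.0     # ~2.16 mA with the 1k2 resistor

beta_min = i_coil / i_base            # minimum linear beta to reach 40 mA
r_base_sat = (v_gpio - v_be) / (i_coil / 10.0)  # resistor for forced beta ~10
print(round(beta_min, 1), round(r_base_sat))    # 18.5 650
```

So the 1k2 resistor works with any BJT whose guaranteed beta is comfortably above ~19, and a base resistor around 650 ohms would give the hard-saturation drive the answer recommends.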
H: Identifying a Diode: "1042 MZ" I am organizing a bunch of components given to me, some of them not put in a labeled bag. This SMB diode has the marking "1042 MZ" on it and I cannot find any references to it, and I have checked an SMD code book without finding anything. I am running out of good ideas :) Edit: Applying 1 V in the forward direction made it hot and it conducted 2 A. Slowly increasing the reverse bias voltage with an LED in series confirmed that it is a unidirectional TVS diode starting to conduct at about 70 volts. AI: Possibly a generic version of the SMBJ51A, a 51 V SMB unidirectional TVS.
H: Low pass filter giving sine wave (triangular wave as input) So I basically have a low-pass filter that is filtering a square wave and giving me this output: As far as I know, the square wave can be made from the sum of multiple sine waves, so the over/undershoot I have is due to the frequencies that are not being filtered. The triangle wave I have is this one; I want to know if the same logic can be used to understand the output of the filter. AI: No, you're not seeing the Gibbs phenomenon on your triangle wave (although it would happen, if it were slower and had a period that's an integer fraction of your sampling rate). What it appears that you've done is to generate a triangle wave that is quite fast with respect to your sampling rate. That is why it doesn't appear to be a perfect triangle wave. Moreover, it is not a low-order integer ratio of your sampling rate (i.e., it's not exactly 2/7, but it appears to be close). Because the ratio isn't exact, you're generating aliases of the higher-order harmonics -- this is why your "triangle" wave appears to be riding on a sine wave. Your low-pass filter, acting in sampled time, is filtering out the rapidly-moving "triangle" part, and passing the low-frequency sinusoid, which is genuinely there as a consequence of the aliasing.
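The folding mechanism described above can be illustrated with a small helper (the sample rate and the slightly-off 2/7 ratio are assumptions, just to show where the harmonics land):

```python
# Where does a tone at frequency f appear after sampling at rate fs?
def alias(f, fs):
    f = f % fs
    return min(f, fs - f)

fs = 1000.0            # assumed sample rate
f0 = fs * 2 / 7 + 1.0  # triangle fundamental, slightly off exactly 2/7 of fs

# A triangle wave contains only odd harmonics; the higher ones fold
# back to low frequencies, producing the slow "sine" the fast triangle
# appears to ride on.
for n in (1, 3, 5, 7, 9):
    print(n, round(alias(n * f0, fs), 2))
```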
H: Hi-fi speaker impedance I'm currently working in a charity shop. We have a range of hi-fi units, radios, CD players etc. We also have a range of speakers to go with them. Some are marked with their nominal impedance, others are not. I am generally able to determine the recommended speaker impedance range for a particular hi-fi either because it is marked on the unit or I can find the specification online from the make and model number. In many cases, however, there are no markings on the speaker to help me. These speaker units are as supplied with a particular hi-fi, so may contain more than one raw speaker together with associated internal crossover circuitry. In some cases I know the manufacturer of the hi-fi, as I have say a 'Sony' label on the speaker unit, but no indication which model it was intended for. Is there a simple way for me to measure this? I am assuming it is not as simple as the DC resistance. I can imagine placing a resistor in series with the loudspeaker and a small, say 1 volt, signal generator across the pair; by measuring the true AC RMS voltage on either side of the resistor, I can calculate the impedance. Is this the best option or is there a simpler one? If this is the best option, what frequency range would I need to measure over, and which figure should I take? The intention is that the charity shop can sell a second-hand system with suitable speakers even if not the ones that were originally part of the system. AI: The impedance of a loudspeaker depends on frequency, crossover network, how many drivers it has, etc. If you measure it with an ohmmeter, you'll get a value which will be the DC resistance of the woofer, plus any coils in the crossover lowpass filter which are in series with the woofer. An 8 ohm woofer will usually have 6-7 ohms DC resistance; add 1 ohm for the crossover coil and wiring, so you'll probably get 7-8 ohms on your multimeter. Maybe 6 ohms if the woofer is unfiltered, or if this is a one-driver speaker without a filter.
If you get half that, it's a 4 ohms speaker. That's a bit rule-of-thumb-y but it'll work in most cases.
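The series-resistor method proposed in the question reduces to one line of arithmetic; here's a sketch with made-up readings:

```python
# |Z| = V_speaker / I, with the current measured as the drop across a
# known series resistor: I = V_resistor / R_series.
def speaker_impedance(v_speaker_rms, v_resistor_rms, r_series):
    return v_speaker_rms * r_series / v_resistor_rms

# Hypothetical example: 100 ohm series resistor, true-RMS meter reads
# 0.60 V across the speaker and 8.00 V across the resistor at ~400 Hz.
z = speaker_impedance(0.60, 8.00, 100.0)
print(round(z, 2))  # 7.5 -> call it an "8 ohm" speaker
```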
H: Low duty cycle 12VDC to 120VAC inverter I need to operate a small 120VAC motor for short bursts of 1 to 3 seconds using a 12VDC supply (it's for a remote antenna switch). Operations are typically minutes to hours apart, so the duty cycle is very low. I don't know the precise power draw but it's likely in the 10-15 watt range, so not very great. A typical inverter seems like overkill where the average power requirement is so small. In my mind I picture some caps storing energy for the output pulses, but I can't picture a nice implementation. Any suggestions for a simple and physically compact circuit that could handle this? Thanks! AI: If your motor can tolerate a square wave, it is fairly easy to create a push-pull driver connected to a center-tapped transformer. You can use a transformer normally used for step-down reversed as a step-up. I successfully drove my telescope synchronous motor with a DIY circuit as I described. Sorry, I don't know where the schematic is. Edit: Here is a circuit that will create a modified square wave (less harmonics, better than what I built). This is untested, you shouldn't attempt this unless you have some experience. simulate this circuit – Schematic created using CircuitLab The output should look like this:
H: How to synchronize readings with the HIGH and LOW states of a PWM output? I would like to oscillate GPIO infrared LEDs at a high frequency and compare the infrared sensor readings when the LEDs are off and when they are on. To do this I would need to synchronize the readings with the PWM. Currently, I can use a pseudo-PWM by using digitalWrite to toggle them between HIGH and LOW with a delay to take readings, but this only gives me 500 Hz. How can I synchronize readings with a PWM output? Or should I just use another method of digital on/off? AI: Generally you'd probably want to use the PWM peripheral on the MCU chip and generate an interrupt from the PWM peripheral, then make the ADC measurement (or at least trigger it to begin) in the interrupt service routine. The details, such as which PWM mode to use and how well it will work, depend a lot on the chip. Worst case, you could wire the PWM output back to an interrupt input. You may also need a delay to allow the ADC input to settle after the LED state changes, so perhaps another timer would be involved depending on exactly when in the PWM cycle the interrupt is generated (check out phase-correct PWM if you're using an ATmega328). Your use of the term digitalWrite implies you're trying to do this with some Arduino or Arduino-like module, so you'll have to study the underlying chip (e.g. ATmega328) features and how to use interrupts. You may be able to get more specific help in the Arduino forum.
H: Why does a two-port network have two degrees of freedom? According to the following lecture, https://www.youtube.com/watch?v=1dgdiws3Kl8&t=817s, a two-port network such as the following can be expressed as the simultaneous equations $$a_{1}V_{1} + a_{2}V_{2} + b_{1}I_{1} + b_{2}I_{2} = 0 $$ $$a_{3}V_{1} + a_{4}V_{2} + b_{3}I_{1} + b_{4}I_{2} = 0 $$ However, this implies two degrees of freedom, i.e. two independent variables, but I struggle to understand why: when I apply an input voltage, isn't it the only independent factor, with everything else dependent on it? AI: when I apply an input voltage, isn't it the only independent factor, with everything else dependent on it? You have to also consider what you connected to port 2. If you connected a voltage or current source, that obviously also sets one of the variables on port 2. If you connected nothing at all, that means you set \$I_2=0\$. If you connected a short, then you set \$V_2=0\$. If you connected a resistor, you set \$I_2 = -V_2/R\$, which doesn't force either variable to anything in particular, but it does remove one degree of freedom from your original equations.
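The answer's point can be made concrete with Z-parameters: the two-port alone gives two equations in four unknowns, and the source plus the termination supply the other two constraints. A small numeric sketch with arbitrary values:

```python
# A two-port described by Z-parameters
#   V1 = Z11*I1 + Z12*I2,  V2 = Z21*I1 + Z22*I2
# has two degrees of freedom. Driving port 1 with a source fixes V1;
# terminating port 2 with a resistor adds the constraint V2 = -R*I2.
z11, z12, z21, z22 = 50.0, 10.0, 10.0, 30.0  # arbitrary assumed values
r_load = 20.0
v1 = 1.0  # applied source voltage

# Substituting V2 = -R*I2 into the port-2 equation gives
#   I2 = -Z21*I1 / (Z22 + R)
# and, back in the port-1 equation, the familiar input impedance:
z_in = z11 - z12 * z21 / (z22 + r_load)
i1 = v1 / z_in
i2 = -z21 * i1 / (z22 + r_load)
v2 = -r_load * i2

# Both original two-port equations now hold simultaneously:
assert abs(v1 - (z11 * i1 + z12 * i2)) < 1e-9
assert abs(v2 - (z21 * i1 + z22 * i2)) < 1e-9
print(z_in, round(v2, 4))  # 48.0 0.0833
```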
H: Designing a circuit with a "short-lived" switch with 555 IC I am very new to electrical engineering and designing circuits, and there is a particular project that I am working on that involves using a switch and a 555 timer IC. Basically, my desired output logic is as follows (note: I am using 2 different LEDs, green and red, and an infrared sensor that acts as a switch): 1) Using an infrared switch, when an object is nearby, it closes the switch, which starts the 555 timer (green LED lights up). 2) The 555 timer counts down 5 seconds, which triggers the 2nd 555 timer, which then causes the red LED to light up. 3) The green LED turns off, the 2nd 555 timer counts down 10 seconds, and then the red LED switches off and the circuit resets (where both green and red LEDs are off). (For this circuit diagram please ignore Timer #1 and #2.) However, my circuit only works if "start" is closed and then opened immediately (i.e., I close the switch and then open it again). If I allow the switch to be permanently shorted, the green LED will light but it does not trigger the 555 timer. Hence, I am wondering if there is a certain kind of modification I can make to the infrared sensor so that it acts like a "button" that closes the switch and then opens it immediately, triggering the 555 timer even if the object is still near the infrared sensor. AI: I think you are asking what can be added to the switch input (actually your IR sensor) to make it act like a single pulse trigger and not a continuously held switch. To do this you can just use the same method you are already using between your existing 555s: use a capacitive input (see below). You may need to adjust the R1, C1, C2 values a bit to get the timing just right, and the original 33k pull-up might also need to be higher. Ultimately you could even include a buffer or one-shot in line with the first 555 to give you a better input pulse shape. If you look into 555 applications there should even be a circuit to use a 555 as a one-shot.
As another alternative you could use a tri-state buffer gate and enable it with an output of one of your 555's. That arrangement would temporarily disable the input signal and create a single pulse, (see second circuit). simulate this circuit – Schematic created using CircuitLab
H: Transistor control through opamp In the given circuit I want to compute the current going through R3, given the current through Rs (Rs is a shunt resistor). simulate this circuit – Schematic created using CircuitLab What is Iout in terms of Isense? AI: This circuit consists of two cascaded dual circuits - a current-to-voltage converter (Rs) and a voltage-to-current converter (the op-amp, Q1 and R3). So it is a current-to-current converter... current amplifier... current attenuator or, if the output current is equal to the input one, the so-called current mirror. Simply speaking, the input current creates a voltage drop Vs = IIN.Rs across Rs and the op-amp follower copies this voltage across R3 with the help of the output (collector) current IOUT. So IOUT.R3 = IIN.Rs -> IOUT = IIN.Rs/R3 = IIN/2000, i.e., the circuit is a current attenuator. The clever trick of this circuit solution is that the emitter follower (Q1) is put in the feedback loop and the op-amp compensates the transistor base-emitter voltage VBE. For this purpose, the op-amp "lifts" its output voltage by VBE, so the emitter voltage is equal to the input voltage (across Rs). Thus the combination of the op-amp and emitter follower can be considered a perfect voltage follower working on the resistive load R3... i.e., a perfect voltage-to-current converter (current sink). It is interesting to see that both resistors Rs and R3 act as current-to-voltage converters (V = I.R), but the function of R3 is "reversed" by the negative feedback... and it acts as the dual voltage-to-current converter (I = V/R). This is a fundamental property of negative feedback systems. The op-amp non-inverting amplifier is another example of this unique feature, where the attenuating voltage divider is made to act as an "amplifying voltage divider". The resistors R1 and R2 do not play any role, since the op-amp inputs have high resistance, and can be removed.
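As a number check of Iout = Iin·Rs/R3 (the 1/2000 ratio from the answer; Rs = 0.1 Ω is an assumed shunt value, since the schematic's component values aren't reproduced here):

```python
rs = 0.1          # assumed shunt resistance, ohms
r3 = 2000 * rs    # 200 ohms -> the 1/2000 attenuation from the answer

def i_out(i_sense):
    # The feedback forces the drop across R3 to equal the drop across
    # Rs, so Iout * R3 = Isense * Rs.
    return i_sense * rs / r3

print(round(i_out(2.0), 6))  # 0.001 -> 2 A through the shunt sinks 1 mA
```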
H: Improving a permanent magnet alternator. Replacing magnets / reducing air gap In the case of a 3 phase PMA radial flux type, used as a wind turbine, with magnets on the surface of the rotor, is it safe or even doable to stack another magnet with the same geometry on top of the existing ones, to reduce the air gap ? (supposing that the air gap is poorly designed in the first place by being way too large). I suppose that the current magnets are glued to the rotor frame by epoxy. It would be cumbersome to knock them off easily. The new magnets would be glued with epoxy too on top of the first layer. The main problem I would see in this case is that a reduced air gap may cause the rotor to seize by striking the stator poles due to axial tilt in case of high vibrations / high RPM. or, the added magnets becoming loose at high RPM. Any other mechanical issues to consider besides higher torque ? In case of performance, I would expect higher starting speed and overall lower RPM at a given wind speed, lower destructive RPM tolerance, but higher power output and overall higher efficiency. Anything I should take into account besides higher EMF at a given RPM for my rectifier circuit ? AI: If the magnets are epoxied as securely as they should be to resist centrifugal force and vibration, removing them may be quite difficult. However, if destroying them in removal is acceptable, it can probably be accomplished. Since grinding may be necessary, precautions should be taken to prevent dust from being inhaled by anyone. Attaching another layer of magnets with epoxy might be possible. The rotor would need to be re-balanced. If the air gap is excessive, reducing the air gap may be more effective than adding magnet material. I would think the speed at which the generator produces a useable voltage (starting speed?) would be reduced not increased.
H: Maximum value of back EMF in a DC motor? I have already been searching for this but couldn't find anything reasonable. I want to know when the back EMF of a DC motor is at its maximum: at no load, full load, or half load? AI: The back EMF is proportional to speed. As the load increases, the speed will decrease, and that will cause the back EMF to decrease; so the back EMF is at its maximum at no load. Note that there is a voltage drop across the armature resistance. That voltage drop plus the back EMF is equal to the supply voltage.
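The relationship in the answer, V_supply = E_back + Ia·Ra, can be sketched with assumed numbers:

```python
# Back EMF from the armature equation V_supply = E_back + Ia * Ra.
# Supply voltage, armature resistance and currents are assumptions.
v_supply = 220.0
ra = 0.5  # armature resistance, ohms

def back_emf(i_armature):
    return v_supply - i_armature * ra

no_load = back_emf(2.0)      # small no-load current
full_load = back_emf(40.0)   # much larger full-load current
print(no_load, full_load)    # 219.0 200.0 -> maximum back EMF at no load
```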
H: Why does a rise in ambient temperature cause a base-voltage-biased grounded emitter stage to saturate? This is actually exercise 2.9 in Horowitz and Hill 2nd Ed., which is posed as: Verify that an \$8 ^\circ C\$ rise in ambient temperature will cause a base-voltage-biased grounded emitter stage to saturate, assuming that it was initially biased for \$V_C = 0.5 V_{CC}\$. The section it appears in is talking about the shortcomings of the single-stage grounded emitter amplifier, and mentions three deficiencies: (1) non-linearity such that the gain varies from 0 to a (negative) maximum due to the lack of an external resistor on the emitter (2) a variation in input impedance \$Z_{in} = h_{fe}r_e = 25 h_{fe} / I_C\$ (for \$I_C\$ measured in mA and the 25 is 25mV which is an estimate for \$V_T\$ in the Ebers-Moll equation, \$h_{fe}\$ is the gain and \$r_e\$ is the internal resistance) (3) the effect of temperature on biasing according to which \$V_{BE}\$ varies by about 2.1mV / \$^\circ C\$ for fixed \$I_C\$. Apparently \$I_S\$ (the saturation current of the transistor) is roughly proportional to \$1/T\$, such that for fixed \$V_{BE}\$ the collector current \$I_C\$ increases by a factor of 10 for a 30\$^\circ\$ rise in temperature. It feels like the answer to the question is a trivial consequence of (3), but I'm struggling due to not really understanding the conditions of transistor saturation, and how the \$8^\circ\$ rise in temperature can cause saturation regardless of \$V_{CC}\$. AI: It does follow from (3). At fixed \$V_{BE}\$, \$I_C\$ increases by a factor of 10 for every 30\$^\circ C\$ rise, i.e. \$I_C\$ scales exponentially as \$10^{\Delta T/30}\$. An 8\$^\circ C\$ rise therefore multiplies \$I_C\$ by \$10^{8/30} \approx 1.9\$, i.e. it roughly doubles. Since the stage was biased at \$V_C = 0.5 V_{CC}\$ with no emitter resistor, the quiescent drop across the collector resistor is already \$V_{CC}/2\$; doubling \$I_C\$ doubles that drop and pulls \$V_C\$ essentially to zero (\$V_{CE} \approx 0\$), which is saturation. This holds regardless of \$V_{CC}\$, because only the ratio of the currents matters, not their absolute values.
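The exponential scaling can be checked directly (this is just the 10x-per-30 °C rule from the question's point (3)):

```python
# I_C at fixed V_BE scales as 10**(dT/30): a factor of 10 per 30 C rise.
dT = 8.0
ic_ratio = 10 ** (dT / 30.0)
print(round(ic_ratio, 2))  # ~1.85: I_C roughly doubles, so a stage biased
                           # at Vc = 0.5*Vcc is driven into saturation
                           # regardless of Vcc.
```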
H: Why does a thermocouple have two outputs (V+ and V-)? Does that mean that there are two different voltages? The potential is used to determine the temperature. (?) I understand it's the Seebeck effect that dictates how the thermocouple works. Apologies if I'm missing a simple concept. I have had no experience with thermocouples prior to my current thermopile project. Thanks! AI: It's effectively a voltage source, a bit like a battery (given fixed thermal conditions). So, of course, it has two terminals. The potential between the two terminals, measured at the meter, is a nonlinear function of the temperature at Th and the temperature at T0, so if you want the temperature at Th you must measure the temperature at T0 using some method other than a thermocouple (or, as they did in the days of yore, place the two junctions at T0 in an ice-water slurry to control the temperature at 0°C). Very roughly, the voltage output is proportional to the temperature difference between Th and T0. To get a more accurate measurement you can calculate the thermocouple voltage equivalent to the temperature T0 as if it were compared to a reference junction of, say, 0°C, and then add that voltage to the measured voltage at the meter to get a new voltage; then find the temperature corresponding to that voltage with a 0°C reference junction. If T0 happens to be 0°C (perhaps due to that ice-water bath), the correction voltage will be 0 µV.
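The compensation procedure described above can be sketched using a crude linear approximation (roughly 41 µV/°C, in the ballpark of a type K thermocouple; real measurements should use the published thermocouple tables or polynomials instead):

```python
# Cold-junction compensation, assuming a constant Seebeck coefficient.
SEEBECK = 41e-6  # volts per degree C, rough type-K assumption

def hot_junction_temp(v_measured, t_cold):
    # Voltage the cold junction "would" produce against a 0 C reference:
    v_cold_equiv = SEEBECK * t_cold
    # Add it to the meter reading, then convert the total back to
    # temperature (here: invert the linear approximation).
    return (v_measured + v_cold_equiv) / SEEBECK

# Meter reads 3.075 mV with the reference junction sitting at 25 C:
print(round(hot_junction_temp(3.075e-3, 25.0), 1))  # 100.0
```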
H: LT1510 as a CC and CV Battery charger The LT1510 is a battery charger IC that can be configured to charge any kind of battery pack. I am interested in the circuit given in the referenced application note, AN68F, page 24/36, where the IC is used as a Li-Ion battery charger. I want to understand the circuit around the op-amp U2, and how U2 resets U3 (circled in red). My first question: what is the role of C9, a capacitor that is connected between the inverting and non-inverting pins of U2? From the application note, I understood that U2 takes effect in the final stage of charging (constant voltage charging). It says: When the LT1510 is charging, U2 compares the voltage across the LT1510 internal 0.2ohm sense resistor to the voltage across R7. When the voltage across the sense resistor is lower than the voltage across R7 (or, alternatively, when the charge current drops below 75mA), U2’s output voltage drops. When U2’s output voltage drops, U2 latches at low state via CR5. U2’s output is connected to the RESET pin of U3. What is the voltage across R7? I am not sure, but I think R7 with R8 and R9 make a voltage divider of VBAT+VRsense (VBAT is the battery voltage and VRsense is the voltage across the internal sense resistor). How does U2 compare VRsense and VR7? (The inverting input of U2 is connected to the battery pack and the non-inverting input is connected to a voltage divider.) U2's output is connected to VIN through R10, so how does this output latch at low state via the diode CR5? AI: Warning: Despite this being an LT application note, this circuit is not "fit for purpose". As an example of various techniques it does a good job. As a practical solution to a real world problem it would be a bad choice to use. As described below: U3's function is unnecessary and will damage batteries through overcharging in real world applications.
U2 is a majestic device but is overspecified in this application and liable to cause stability problems, due both to its high performance and to the application note's failure to address these factors adequately. _______________________________________________________________________ To make sense of the description it helps to refer to the LT1510 internal block diagram - seen here. The portions of especial relevance are marked in green. First, a few "notes": RS1 is the 0.2 Ohm sense resistor referred to in the text. When connections to or around an IC "do not make sense" it often helps to refer to the device's internal connection diagram. You will find that the positive ends of R7 and internal RS1 are connected, allowing their relative voltage drops to be compared. C9 in the application note is across the comparator inputs and serves to remove high frequency variations to stop the comparator triggering on noise. The time constant of R6, R7, C1 is about 100 μs - rejection of current variations due to the switch mode converter charging cycles is quite likely a major consideration. Making the super sensitive comparator sit down and not oscillate due to stray capacitance is probably another. U2's output is connected to VIN through R10, so how does this output latch at low state via the diode CR5? The LT1011 (datasheet here) is an "open collector" comparator and requires an output pullup load. R10 serves that role. The question re output latching is essentially unrelated to R10 and is described below. The LT1011 is a superb device and its attributes are well suited to demanding high speed applications - but it is vastly overspecified for this role and will probably cause problems if used as shown here without extra components. Layout is critical, and specific supply bypassing and balance pin bypassing (none of which is shown in the application note example) is a very good idea indeed. It has a gain bandwidth product of over 10 GHz (!)
and is liable to "burst into song" under any provocation. The datasheet describes precautions which should be taken to prevent this, but using something more usual and far cheaper would be a better solution. (Sometimes application note writers like to use fancy components to 'strut their stuff' or sell ICs or ...? - this is an example.) ______________________________________________________________________________ Battery charging current enters the "sense" pin, passes through RS1 and goes to the battery via the "battery" pin. Battery charging is terminated when charge current falls below some preset value Iterm (say). When this occurs the voltage across RS1 due to charging current falls below V = I x R = Iterm x RS1. U2 acts as a comparator, comparing the voltage across RS1 (due to charging current) with the voltage across R7 (which sets the charge terminate threshold). The "high" ends of RS1 and R7 are commoned at "sense", and the drop across R7 is R7/(R7+R8+R9) x Vbat. Which here = 2k2/(2k2+560k+430k) x Vbat = 2.2/992.2 x Vbat = 0.00222 x Vbat = Vbat/451 (!) At end of charge, when the comparator switches, the voltage across R7 is equal to the voltage across RS1 = Ichg x 0.2 Ohm = Iterm x 0.2. So Vbat/451 = Iterm x 0.2, or Iterm = Vbat/90.2. For 2-cell LiIon and Vbat_term = 2 x 4.2V = 8.4V, Iterm = 8.4/90.2 = 93 mA. Without looking through the text for the battery capacity, that would be slightly less than C/10 for a 1000 mAh battery - which is "Road Warrior" level charging. Or about C/2 for a 200 mAh, or C/5.4 for a 500 mAh battery - which would be conservative and well charged levels respectively. Finally, once the comparator switches, U2's output goes low, CR5 conducts and the R7/R8/R9 divider is driven low, taking the non-inverting input 'very low', thus latching the end-of-charge state. E&OE, i.e. there MAY be a numerical mistake in there somewhere, as happens, but the principle applies. NOW, I will read the app note text ... :-) OK - correct as far as I went.
AFTER the comparator trips, the timer at right (U3) adds an extra period of top-up. I consider that this would be VERY inadvisable, and it violates just about every (maybe every) CCCV charging recommendation I have ever seen (and I've seen a few). They say the trip point is 50 to 100 mA, so my 93 mA lies in that range. I'd say the circuit was otherwise OK, albeit a little complex, if you left off the U3 circuitry, and that adding U3 is not only overkill but is literally battery killing. Just because it's in an app note from a superb manufacturer doesn't make it right, alas. If it was by one of the now dead great names I'd go and have another look at my reasoning, but as it is by 'application engineering staff', I'd suggest that in this case they got too enthused in demonstrating a useful idea. The app note is dated 1996 - the above applies to LiIon cells of that date and to the latest ones now, with a slight increase in the CV voltage for some of the very latest cells.
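The divider arithmetic from the walk-through, for reference:

```python
# Numbers from the answer: the R7/R8/R9 divider scales Vbat, and the
# comparator trips when the drop across the 0.2 ohm sense resistor
# falls to that scaled value.
r7, r8, r9 = 2200.0, 560e3, 430e3
rs = 0.2
v_bat = 8.4  # two Li-Ion cells at 4.2 V

v_r7 = v_bat * r7 / (r7 + r8 + r9)  # ~Vbat/451
i_term = v_r7 / rs                  # charge-termination current

print(round(v_bat / v_r7, 1), round(i_term * 1000, 1))  # 451.0 93.1 (mA)
```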
H: Do fuse taps turn off with the kill switch? Do fuse-box-connected circuits turn on and off with the vehicle? I'm trying to connect my Arduino to my motorbike using a fuse tap to the fuse box. I want my circuit to turn on and off with the bike. I'm wondering if the power on the fuse box is cut off by the kill switch (or the key), or if I have to connect the negative pole to my kill switch negative, or if there are other solutions. P.S. I'm a computer scientist; I'm not going to solder anything, cut wires, etc. AI: Some fuse positions are permanently live and others are only live with the ignition on; choose a fuse that has the correct (switched) function and install the fuse tap there. Consider that the supply wire now has to provide that extra current. For the ground, a suitable point on the frame is sufficient, or if you are close to the battery then the negative post or earth cable connection will do.
H: \$V_o/V_s\$ of given circuit I'm trying to calculate $$\frac{V_o}{V_s}$$ of the given circuit, but don't really know what to do with the voltage-dependent voltage source. As I'm not an engineering student, this stuff is very foreign to me and I don't know how to approach a problem like this. Also, what type of filter would this be? To me it looks like a high-pass filter, but I'm not sure. Any help is much appreciated! AI: Well, we have the following circuit: simulate this circuit – Schematic created using CircuitLab Using KCL, we can write: $$ \begin{cases} \text{I}_1+\text{I}_3=\text{I}_2\\ \\ \text{I}_3+\text{I}_4=\text{I}_0\\ \\ \text{I}_0+\text{I}_5=\text{I}_4\\ \\ \text{I}_2+\text{I}_5=\text{I}_1 \end{cases}\tag1 $$ Using Ohm's law (and the source relation), we can write: $$ \begin{cases} \text{I}_1=\frac{\text{V}_1-\text{V}_2}{\text{R}_1}\\ \\ \text{I}_2=\frac{\text{V}_2}{\text{R}_2}\\ \\ \text{I}_3=\frac{\text{V}_0-\text{V}_2}{\text{R}_3}\\ \\ \text{I}_4=\frac{\text{V}_0-\text{V}_3}{\text{R}_4}\\ \\ \text{I}_5=\frac{\text{V}_3}{\text{R}_5}\\ \\ \text{V}_0=\text{n}\text{V}_2 \end{cases}\tag2 $$ Now, we can solve for: $$\mathcal{H}:=\frac{\text{V}_3}{\text{V}_1}=\frac{\text{n}\text{R}_2\text{R}_3\text{R}_5}{\text{R}_3\left(\text{R}_1+\text{R}_2\right)\left(\text{R}_4+\text{R}_5\right)-\text{n}\text{R}_1\text{R}_2\text{R}_4}\tag3$$ Now, we transform to the s-domain (Laplace transform). We know that: $$\text{R}_2\to\frac{1}{\text{sC}}\tag4$$ $$\text{R}_4\to\text{sL}\tag5$$ So, in the end we get: $$\mathcal{h}\left(\text{s}\right):=\frac{\text{v}_3\left(\text{s}\right)}{\text{v}_1\left(\text{s}\right)}=\frac{1}{\text{sC}}\cdot\frac{\text{n}\text{R}_3\text{R}_5}{\text{R}_3\left(\text{R}_1+\frac{1}{\text{sC}}\right)\left(\text{sL}+\text{R}_5\right)-\frac{\text{n}\text{R}_1\text{L}}{\text{C}}}\tag6$$ Now, in order to see what kind of filter this is we need to look at the amplitude of the function when \$\text{s}=\text{j}\omega\$.
So, first we have to expand: $$\text{R}_3\left(\text{R}_1+\frac{1}{\text{j}\omega\text{C}}\right)\left(\text{j}\omega\text{L}+\text{R}_5\right)=\left(\text{R}_1\text{R}_3-\frac{\text{R}_3}{\omega\text{C}}\cdot\text{j}\right)\left(\text{j}\omega\text{L}+\text{R}_5\right)=$$ $$\text{R}_1\text{R}_3\text{j}\omega\text{L}+\text{R}_1\text{R}_3\text{R}_5-\frac{\text{R}_3\text{j}\omega\text{L}}{\omega\text{C}}\cdot\text{j}-\frac{\text{R}_3\text{R}_5}{\omega\text{C}}\cdot\text{j}=$$ $$\text{R}_3\left(\text{R}_1\text{R}_5+\frac{\text{L}}{\text{C}}\right)+\text{R}_3\left(\text{R}_1\omega\text{L}-\frac{\text{R}_5}{\omega\text{C}}\right)\text{j}\tag7$$ So, we get: $$\left|\underline{\mathcal{h}}\left(\text{j}\omega\right)\right|=\frac{1}{\omega\text{C}}\cdot\frac{\text{n}\text{R}_3\text{R}_5}{\sqrt{\left(\text{R}_3\left(\text{R}_1\text{R}_5+\frac{\text{L}}{\text{C}}\right)-\frac{\text{n}\text{R}_1\text{L}}{\text{C}}\right)^2+\left(\text{R}_3\left(\text{R}_1\omega\text{L}-\frac{\text{R}_5}{\omega\text{C}}\right)\right)^2}}\tag8$$ In your application, we know that \$\text{R}_3\to\infty\$. So we get: $$\mathcal{X}\left(\omega\right):=\lim_{\text{R}_3\to\infty}\left|\underline{\mathcal{h}}\left(\text{j}\omega\right)\right|=\frac{\text{n}\text{R}_5}{\sqrt{\left(\text{R}_5^2+\text{L}^2\omega^2\right)\left(1+\text{C}^2\text{R}_1^2\omega^2\right)}}\tag9$$ Now we look at two cases: When \$\omega\to0\$: $$\lim_{\omega\to0}\mathcal{X}\left(\omega\right)=\text{n}\tag{10}$$ When \$\omega\to\infty\$: $$\lim_{\omega\to\infty}\mathcal{X}\left(\omega\right)=0\tag{11}$$ So we have a lowpass filter. 
And we want to know the roll-off of this filter: $$\lim_{\omega\to\infty}\left(20\log_{10}\left(\mathcal{X}\left(10\omega\right)\right)-20\log_{10}\left(\mathcal{X}\left(\omega\right)\right)\right)=-40\space\text{dB/decade}\tag{12}$$ $$\lim_{\omega\to\infty}\left(20\log_{10}\left(\mathcal{X}\left(2\omega\right)\right)-20\log_{10}\left(\mathcal{X}\left(\omega\right)\right)\right)=-40\log_{10}\left(2\right)\space\text{dB/octave}\tag{13}$$
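The limits in (10)-(12) can be sanity-checked numerically by evaluating equation (9); the component values below are arbitrary placeholders, not values from the original problem:

```python
import math

def X(w, n=2.0, R1=1e3, R5=1e3, L=10e-3, C=100e-9):
    # |h(jw)| from equation (9), evaluated with illustrative (assumed) values
    return (n * R5) / math.sqrt((R5**2 + (L * w)**2) * (1 + (C * R1)**2 * w**2))

# DC gain approaches n, high frequencies are attenuated,
# and the high-frequency roll-off is close to 40 dB per decade
print(X(1e-3), X(1e8), 20 * math.log10(X(1e6) / X(1e7)))
```

The three printed numbers correspond to the DC limit (10), the high-frequency limit (11), and the roll-off (12) respectively.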
H: Is there a way to extend the length of wires for an I2C connection? I would like to connect a BME280 sensor to an Arduino board through the I2C bus. I read that I2C was not meant to be used over long wires and will not work properly over 50 cm. I want to get it to work over 5-15 m. Is there some sort of chip or device I can attach to one or both sides of an I2C connection to extend the wire length? I would like to use only one power source, not two, and have the wire carry the power over those 5-15 meters as well, if that is possible. P.S. Pardon my drawing. My electronics skills aren't up to the level of making a schematic. AI: I suggest you read AN10710 by NXP, which discusses the P82B715 I2C Bus Extender. This should cope with 15 m of cable, particularly if you run I2C at less than the maximum speed.
H: How to get the memory state of SRAM on power up? I am working on IoT authentication using a physically unclonable function (PUF). I have read several articles which suggest that an SRAM PUF can be used for that purpose. However, I want to know how I can read the SRAM memory state. Are there any existing implementations in C or C++ which can be used to retrieve the SRAM PUF fingerprint? If there are some libraries, can someone please point those out? AI: If you want to read the SRAM content right at power up and try to do that in C or C++ you are going to have some trouble, particularly on an embedded system, because some RAM content may be zeroed at startup and other parts may be used for initial stack content. There are also some high-reliability embedded applications where ECC is used for the RAM, and this needs to be initialized by writing, in its entirety, to make sure all the check bits are set properly before any of the RAM is read. A read with an invalid ECC check can result in a system reset or error hang. This memory zeroing is typically done inside the "startup code" that is the first thing to execute when the MCU comes out of reset and fetches its first execution address from the reset vector. "Startup code" is typically written in the native assembly language of the MCU for ease of coding at the "bare metal" level of processor resource access. There are also performance reasons for the use of assembly language. Startup code packages that come with various C/C++ tool sets are often complex because the code is highly parameterized, with conditional directives to deal with a multitude of different possible hardware configurations. I mention this because you will have to deal with it based upon what I state in the next paragraph.
To be able to read the SRAM content right out of the reset vector at power on you are going to have to either modify the "startup code" or add a new layer of "bare metal" code between the reset vector and when the startup module takes control.
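Once the raw power-up bytes can be captured (for example, dumped over a UART by such bare-metal startup code), deriving a usable fingerprint is typically done off-device. A minimal host-side sketch, assuming several power-up dumps of the same region are available, using per-bit majority voting to suppress noisy SRAM cells (the function name and the sample data are illustrative, not from any particular library):

```python
def majority_fingerprint(dumps):
    # dumps: list of equal-length byte strings captured over several power-ups.
    # For each bit position, keep the value seen in the majority of dumps,
    # which filters out cells that power up to a random value each time.
    n = len(dumps)
    out = bytearray(len(dumps[0]))
    for i in range(len(out)):
        for bit in range(8):
            ones = sum((d[i] >> bit) & 1 for d in dumps)
            if ones * 2 > n:
                out[i] |= 1 << bit
    return bytes(out)

# Synthetic example: three "power-ups" of the same 2-byte region, one noisy bit
print(majority_fingerprint([b"\xA5\x0F", b"\xA5\x0F", b"\xA4\x0F"]))
```

Real SRAM-PUF schemes add error correction (fuzzy extractors) on top of this, but the enrollment idea is the same.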
H: Why do laptop batteries have so many terminals? Why do laptop batteries require so many terminals (upwards of 10 or more!)? Why not just a positive, negative, and a data pin? All the phones I've used only have three terminals, and I would think these are more or less comparable devices, separated by scale. All other devices I can think of using or taking apart only use two to three connections/terminals. Likewise, if you remove the laptop battery, it runs fine with just the positive and negative of the power adapter, and maybe a data pin. It seems as if the number of terminals has been growing, too. Is there a reason for the terminal-wagging competition amongst laptop makers? AI: Google "laptop battery pinout". Typically you'll have: Plus and Minus power terminals (maybe several contacts each for higher current) Some form of communication like I2C Temperature sensing (for example a thermistor) Your typical laptop battery has several cells in series, so it requires balancing, which is usually implemented in the battery management PCB inside the battery. This also handles protection for conditions like short, overcharge, overdischarge, etc. The laptop itself doesn't need to know the details about internal battery chemistry; that's the battery management PCB's job. Also important is reporting state of charge accurately. This is difficult to do with voltage alone on a lithium battery, so you might find something like a coulomb counter, which implies values stored in a RAM which must be in the battery. The I2C bus also allows communicating useful information like battery capacity, maximum charge current for the charging circuit, or authentication for higher profit margins... A cellphone runs on one cell, thus it does not require balancing or other complex functions. It will most likely just have plus, minus and temperature.
If the battery is removable then it should have a protection, at least against shorts, in case the user stores a battery along with metal objects like keys. If it is internal to the phone, then it probably won't have a protection PCB.
H: 60VAC measured when light switch is off I'm replacing a fluorescent ceiling light with an LED fixture. I removed the entire chassis, ballast, etc, and now I'm just left with the hot/neutral cabling (no ground wire available). I wanted to verify that the power was off before I started getting too touchy with wires, so I grabbed my multimeter to measure the AC voltage. With the light switch on I measure 120VAC, but with the light switch off I measured about 60VAC. I then flipped the circuit breaker off for the circuit that the light is on, and it measured about 0VAC. Would 60VAC ever be normal for a switch being "off"? This is an older house, no idea about the quality of the wiring, and I noticed that there is an old doorbell transformer in the basement, if that could possibly explain anything AI: There is always a parasitic capacitance across the switch and also between wires in conduits. This capacitance is small but if the wires run together in a conduit, it can be enough to let a few µA AC current through. The internal resistance of the multimeter is very high in voltage mode, but it is not infinite. What it is measuring is the output of a voltage divider, with the parasitic capacitance in the upper leg, and the multimeter input resistance in the lower leg. If your multimeter has an input resistance of 10 Mohms, a parasitic capacitance in the hundreds of pF across the switch is enough to register 60VAC on the display.
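To see that the numbers are plausible: with a 10 MΩ meter and a parasitic capacitance of roughly 150 pF (an assumed, but typical, value for wires sharing a conduit), the capacitive divider indeed lands near 60 V:

```python
import math

def phantom_voltage(v_line=120.0, f=60.0, c_par=150e-12, r_meter=10e6):
    # Divider with the parasitic capacitance in the upper leg and the
    # meter's input resistance in the lower leg:
    # |V_meter| = V_line * wRC / sqrt(1 + (wRC)^2)
    x = 2 * math.pi * f * r_meter * c_par
    return v_line * x / math.sqrt(1 + x**2)

print(round(phantom_voltage(), 1))  # close to 60 V with these assumed values
```

A few µA through a 10 MΩ input is enough to register tens of volts, which is why a low-impedance (LoZ) meter mode makes these phantom readings disappear.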
H: Voltage at comparator inputs I have the following problem. I have to fill in the blank spots in the picture below. I'm not sure how to answer this question. My book just says: If \$V_+ > V_-\$ then \$ V_{out}=max\$ If \$V_+ < V_-\$ then \$V_{out}=min\$ Is there an equation of sorts that will help me solve this problem? AI: For this question, all you need to know is that the comparator's inputs don't draw current. This turns it into a straightforward resistor divider. simulate this circuit – Schematic created using CircuitLab You can calculate it in a few different ways, but the way it probably wants you to do it is using current. In the case where V2 = 0: What is the total current through the resistors? Knowing the total current, you can calculate the voltage drop across R1 Then you just add this voltage drop to V1 to get V+ The same thing happens for V2 = 5V
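Those steps can be sketched numerically. The resistor values below are illustrative only; the real ones come from the exercise's schematic:

```python
def v_plus(v1, v2, r1, r2):
    # One series path: V1 -- R1 -- (V+ node) -- R2 -- V2.
    # The comparator input draws no current, so the same current
    # flows through both resistors.
    i = (v2 - v1) / (r1 + r2)   # total current through the divider
    return v1 + i * r1          # V1 plus the drop across R1

# Illustrative 10k/10k divider, V1 = 1 V, for V2 = 0 V and V2 = 5 V
print(v_plus(1.0, 0.0, 10e3, 10e3), v_plus(1.0, 5.0, 10e3, 10e3))
```

With equal resistors, V+ simply sits halfway between V1 and V2, which is a useful sanity check.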
H: Help identifying a capacitor WITHOUT a capacitance meter and no markings? I have a DELL 2408WFP monitor, purchased in 2009, that I am using right now. I have an issue where it runs hot and shuts off after a certain period of time. This could be due to a number of issues, but with limited test equipment I'm planning on taking a shotgun approach to solving it. I suspect this could be due to dying/aging capacitors on the power supply board and would like to swap them out, since a few look suspect. All the heat seems to be coming from the power supply board, and if it gets hot enough the monitor shuts off but the power light stays on. I know this could also be the inverter board, but since it seems so heat dependent and since all the heat seems to come from the PSU, I'm going to start there. Also, putting a fan behind the monitor keeps it alive for hours on end without issue. Without the fan, it powers off after about two hours of use (and I mean a box-fan type fan). I have identified all capacitors on the power supply unit except for one that does not have a visible farad rating because it is glued down to the board. Can anyone help to identify this capacitor? Also, any tips on removing that silicone glue? This is the full board: I know I have the power pin disconnected in that last picture. The inverter is on the left and the PSU on top. I am guessing that the unidentified capacitor is 450 V based on the cut-off marking on the bottom. I'm not sure what the "5A" marking is. It appears to be rated for 105 °C. AI: The service manual says it's 150 µF 450 V in the first schematic pages. Later on it is said to be 120 µF. If the silicone is soft, a hobby knife or equivalent should be able to cut it.
H: Why is this solenoid valve consuming so much power? The circuit below drives a pump and a valve. The pump works fine; the valve consumes much more current than I'd expect. The valve is rated 2W at 12V (600mA), but I'm seeing more like 15W. The PUMP_DRIVE signal isn't PWM; interestingly, using a PWM signal with D = 0.5 doesn't even turn it on. What am I doing wrong? Do I need to add current limiting? Solenoid Valve AI: The product description is simply wrong regarding the power rating, as you can read in a comment from another user: "The consideration is the fact that they draw 1.5A continuous while open so you need an appropriate power source and they are dissipating 18W or so of heat. They might overheat with no cool liquid passing through them but can't say for certain" You can easily measure the solenoid's resistance and calculate the total power.
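The arithmetic behind that comment is straightforward; with the quoted 1.5 A continuous draw at 12 V:

```python
def implied_power(v_supply, i_measured):
    # P = V * I, and the implied coil resistance R = V / I
    return v_supply * i_measured, v_supply / i_measured

p, r = implied_power(12.0, 1.5)
print(p, r)  # 18 W dissipated, 8-ohm coil -- nowhere near a 2 W rating
```

Measuring the coil resistance with a multimeter and plugging it into P = V²/R gives the same answer without powering the valve at all.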
H: Logic inside a 3-way or 4-way light switch? I'm trying to understand what logic or mechanism is used in 3-way or 4-way switches, in order for all switches to work so that no matter the position of each switch, the lights go on or off without any other switch affecting the others. So, are they internally using any logic gates connecting each switch, or any kind of flip-flop latch or anything like that? Sorry for this question that might look stupid, but I am trying to learn electronics by myself and came up with this question. AI: A "3-way" switch is SPDT. A "4-way" switch is DPDT inside. They're used like this: simulate this circuit – Schematic created using CircuitLab The 4-way switch is simply crossing the connections, or connecting them straight through — sort of an XOR function. If you only need two switches, just connect SW1 and SW2 directly to each other. If you need 3 or more switches, add any number of 4-way switches in between as shown.
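The "sort of an XOR function" remark generalizes: the lamp state is the parity (XOR) of all the switch positions, which is why flipping any single switch always toggles the light. A small behavioural model:

```python
def light_on(switches):
    # Each 3-way/4-way switch either passes the circuit straight through
    # or crosses it, so the lamp state is the XOR of all switch positions.
    state = False
    for s in switches:
        state ^= s
    return state

print(light_on([False, True, False]))  # one switch "crossed" -> light on
```

Flipping any one element of the list changes the parity, so the function returns the opposite value, matching how the real wiring behaves.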
H: How can I set a wire to one of 3 voltages from a simple digital controller (e.g. Raspberry Pi)? Here's my problem. I have a device to control that needs to always have one of three voltages applied to it: 5V, 6V, or 7V. And I need to be able to programmatically switch between them. What I have available is a general microcontroller with GPIO pins that I can set to some voltage (e.g. 3.3V). I have a feeling relay circuits could do this, but I can't figure out how to switch between my 3 different voltages. Question summary: What circuit would implement this with simple components? How should I create the 3 voltages I need? Assume I can buy whatever simple parts from Amazon etc. AI: Two SPDT relays can be used to select the voltages like this: simulate this circuit – Schematic created using CircuitLab You will normally require transistors to drive the relays, as most microcontroller outputs cannot handle a relay's coil current. With GPIO2 Low, "7 V" will be selected. GPIO1 will have no effect. With GPIO2 High and GPIO1 Low, "6 V" will be selected With GPIO2 High and GPIO1 High, "5 V" will be selected
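The selection logic of that two-relay tree can be written out as a tiny truth table, which is also what the controller firmware would implement:

```python
def selected_voltage(gpio1_high, gpio2_high):
    # Models the two-SPDT-relay tree from the answer:
    # GPIO2 low  -> the "7 V" rail is routed regardless of GPIO1;
    # GPIO2 high -> GPIO1 picks between "6 V" (low) and "5 V" (high).
    if not gpio2_high:
        return 7
    return 5 if gpio1_high else 6

for g2 in (False, True):
    for g1 in (False, True):
        print(f"GPIO1={g1} GPIO2={g2} -> {selected_voltage(g1, g2)} V")
```

Note the fourth input combination (GPIO1 high, GPIO2 low) is a "don't care": it still gives 7 V, exactly as the answer states.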
H: ESP32 I2C pin routing I have looked through several documents, but I haven't found a clear answer as to which pins on the ESP32 support I2C communication. Can the I2C be routed to almost any I/O pins? Are there certain pins which support I2C? AI: According to Section 4.2 of the ESP32 datasheet, any GPIO pin can be configured as an I2C pin. There are two I2C interfaces (I2CEXT0 and I2CEXT1), and they can operate in either master or slave mode at up to 400 kHz. Just make sure you don't use any pins that your module uses for another purpose. For instance, some ESP32 modules have status LEDs or external level-shifting circuitry. As mentioned by @jDAQ, the IO_mux and GPIO_matrix registers can be used to "rewire" the peripherals.
H: Difficulties in KVL signs (RL circuit) If we start from the bottom left and rotate clockwise, then first we visit the resistor's negative terminal, so I expect -Ri, but the book gives the equation below: So why isn't Ohm's law for the resistor negative? (I follow the common KVL convention: whenever you encounter the positive terminal of an element you mark that expression positive, otherwise you mark it negative.) AI: With the current flowing as shown, $$v_L = L\frac{di}{dt}$$ and $$v_R = -iR$$ with \$v_R\$ being negative as expected. Using \$v_R\$ and \$v_L\$ with KVL yields $$ v_R=v_L$$ or $$ -v_R+v_L=0 $$ Note there is a negative sign as well. Substituting the first 2 equations yields $$ -(-iR) + L\frac{di}{dt} = 0$$ $$ iR + L\frac{di}{dt} = 0$$
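As a check that the final equation \$iR + L\frac{di}{dt} = 0\$ is sensible, it has the familiar exponentially decaying solution, which can be verified numerically (the component values here are illustrative):

```python
import math

def i_t(t, i0=1.0, R=100.0, L=0.1):
    # Analytic solution of i*R + L*di/dt = 0: decay with time constant L/R
    return i0 * math.exp(-R * t / L)

# Verify the ODE with a central finite difference at an arbitrary time
t, h = 0.5e-3, 1e-7
di_dt = (i_t(t + h) - i_t(t - h)) / (2 * h)
residual = i_t(t) * 100.0 + 0.1 * di_dt  # should be ~0 if the signs are right
print(abs(residual))
```

If the resistor term had come out with the opposite sign (i.e. \$-iR + L\frac{di}{dt} = 0\$), the "solution" would grow without bound, which a source-free RL loop clearly cannot do.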
H: Guard Band Reduction in 5G compared to 4G Why is less guard band required for 5G compared to 4G? For 4G, guard bands are defined to be 10% of the bandwidth on each side, while guard bands for 5G are defined in 3GPP 38.101-2. For example, if we take 15 kHz SCS and 10 MHz bandwidth, the guard band for 4G would be \$10MHz*0.1*2 = 2MHz\$ while for 5G it would be \$312.5kHz*2=0.625MHz\$, which is a drastic improvement. (i) Why was 10% of the bandwidth chosen as the guard band for 4G? (ii) How has it been reduced in 5G? AI: (i) Why was 10% of the bandwidth chosen as the guard band for 4G? That 10% was probably seen as a realistic value at the time. Realistic in the sense that it could be achieved with the available technology at the time. (ii) How has it been reduced in 5G? Note that the 5G standard doesn't say anything about how to implement the system; instead it describes how the system should behave. So the 5G standard doesn't tell us how to reduce the guard band, it just tells us what it needs to be. Again the value is likely to be technology related. Technology has moved on, so now we can have more processing power available to shape the signals such that a smaller guard band can be used. The shape (over frequency) of a 4G or 5G baseband signal is determined by how much (digital) filtering is used. Obviously more complex filtering can result in a signal that is "cleaner", so that the distance between channels can be smaller and smaller guard bands can be used.
H: MCU near sensors or on external board I currently work on a sensor box which needs to measure analog signals (temperature, power consumption, ...), digital signals (magnetic contacts, positions, ...) and control digital switches (relays). All this magic is controlled by a PIC18F (which has an ADC) and interfaced with a larger machine via CAN. This is just a laboratory setup (lots of cables, breadboard), where the controller is programmed using ICSP. Since the box should now be included in the machine, I was wondering how to include the MCU, since the sensors are almost unreachable after the project is finished. Wire all the cables from the sensor box to a 'safe' location where the MCU is located Include the MCU in the box and wire only the CAN to the frontend Since I am not sure whether I will have to change the code, I would prefer option 1. Here I can reprogram the MCU at the front. OR wire additional cables (the three programmer cables for the MCU) to the frontend. What's the conventional method for such a problem? AI: Issues regarding option 1, in case you've not considered them yet: The biggest issues I think would be interference/noise/reflections on the cabling. You could end up with some very squiffy readings unless you ensure proper isolation/filtering (depending on the signals). The extra cabling increases both the BOM cost and the production time/cost. My personal view would be that it need only be the communications (and power?) cabling between the main device and the sensor box. This way, you make the sensor box functionally independent and, more importantly, completely replaceable. With a bootloader on the MCU, you should even be able to reprogram it via the comms link. If you can't get a bootloader though, then the next best alternative would be to have an externally-accessible header on the sensor box.
H: What is this diode bridge used for in this circuit? I'm currently trying to figure out how this circuit works: I found it in Microchip documentation dealing with the DALI (Digital Addressable Lighting Interface) protocol. The DALI protocol is mostly used to communicate with lighting equipment. I can already understand what is going on for the most part, but I'm still having trouble with the function of the diode bridge. I know that diode bridges are used to convert AC to DC voltage. But in this circuit the input is already a DC voltage, so I'm wondering what this diode bridge is doing exactly. AI: While the DALI bus has DC power on it, the wiring topology is not polarity-sensitive. That is, either wire can go on either pin of any DALI interface connector. From the Wikipedia page: "Each device has a bridge rectifier on its input so it is polarity-insensitive." https://en.wikipedia.org/wiki/Digital_Addressable_Lighting_Interface#Technical_overview
H: Bit(map) matching in hardware I want to check (in hardware) whether a bit is on (1) in a 64-bit number. The bit I want to check can be any bit, but I'm only interested in checking that one bit. I get the number (i.e. the offset) of the bit as a 6-bit number, e.g. if that number is 0 I need to check bit 0, if the number is 17 I need to check bit 17, etc. Is there anything faster (even if more costly in power and area) than this idea: Input (bit I want to check) is X - so use a barrel shifter to left shift 1 by X and store the result in Y Then Z<-(Y AND Bitmap) OR together all bits in Z - if the output is 1 then the bit is on, otherwise the bit is off. I suspect that step 1 is the costly operation, so any ideas about a faster way of doing that would be particularly welcome. AI: I'm assuming you want to do this in hardware, rather than software, because of the tags you selected. I suspect that the fastest implementation would be to use a 64-to-1 multiplexer with X as the select input. The output of the multiplexer is the bit value you want to check. No additional logic is needed: if the mux output is 1 then the bit you want to check is 1, and vice versa. The actual implementation of the mux itself may depend on how you intend to build the thing. In CMOS VLSI it might be best to use six levels of 2:1 multiplexers. In an FPGA, just write the Verilog and let the tools do the heavy lifting.
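In software terms, the 64:1 mux is just an indexed bit test; a behavioural model (Python here purely to illustrate the intended function of the hardware) is:

```python
def bit_is_set(bitmap, x):
    # Behavioural model of the 64-to-1 mux: the 6-bit select input x
    # picks one bit of the 64-bit bitmap.
    return (bitmap >> x) & 1 == 1

word = (1 << 17) | 1  # bits 0 and 17 set
print(bit_is_set(word, 17), bit_is_set(word, 0), bit_is_set(word, 5))
```

This is functionally identical to the barrel-shifter/AND/OR-reduce scheme in the question; the mux simply collapses all three steps into one selection network.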
H: How to suppress a short circuit to prevent fuses from blowing I have a circuit that is modular, and one of the modules involves interfacing with an external relay. This sub-module has a fuse set to blow a bit below the main module's maximum current. (Why put a fuse in when the main module has overcurrent protection? So that when this sub-module fails it won't take the whole system down with it.) This sub-module can handle up to 24 external relays, and in the image below I show how one channel is set up. What I would like to protect is the fuse, so it does not blow from a false short circuit (not really false, since it is doing its job) in the scenario where the user incorrectly wires the connection and shorts pin A and pin B. I need a current-limiting solution so that the fuse won't blow and the user of the sub-module can rewire and try again. The circuit is designed to trigger one relay at a time; a signal will be sent by a port expander (not shown in the picture) to the bs138 MOSFET to open it. The signal typically lasts around 250 ms - 1000 ms. I have considered a polyfuse, but it is too slow to react; the main module's protection will trip first. Another option is adding a resistor in series with the relay line, but it is either a bulky high-wattage resistor or a high-value resistor that might not make the relay trigger because of voltage drop. There is also the problem that, since there is a short, the power to the sub-module's microcontroller will be cut, so the protection circuit I am looking for would probably go either in series or in parallel with the connection for the external relay. I need a current-limiting solution between pin A and pin B so that when A and B are shorted by the user and the MOSFET turns on, the fuse won't blow. How do you recommend solving this problem? Thank you. AI: You have described an application for a slow-blow fuse, or for a polyfuse that heats up faster than the load. These tend to operate between 85 °C and 125 °C by design.
A better design would define the actual load V vs. I and the fault current, then define limitations for cost and area. It might be feasible to use a 5.5 V to 6 V DC input with a 3-terminal regulator with OCP, OTP and UVLO, with say 0.4 V dropout at 1 A, for only $0.80 (100 pc) in a SOT-223 package, e.g. MCP1826ST-5002E/DB. Intelligent auto-resettable breakers need accurate specs for break and retry algorithms. But for thermal loads such as the motors used to close car windows, a thermal switch is used that acts quicker than the motor to cut off power but takes just as long as the motor to cool off and retry power again. This can be something like 1 s OFF and 5 to 10 s ON retry, and usually uses a carefully selected polyfuse-controlled relay. Also, replace the main fuse with a slow-blow fuse.
H: Which footprint to use for LM324 SO-14 in KiCad? I want to use the LM324 quad-amp in my PCB design. KiCad already provides the symbol in the Amplifier_Operational package. However I don't know which footprint to assign. I tried assigning the generic Package_SO:SOIC-14_3.9x8.7mm_P1.27mm footprint which has the correct mechanical dimensions. However, there does not seem to be a connection between the pads in the footprint and the symbol. Therefore I have the following questions: Does KiCad already provide a footprint for LM324? If KiCad does not already provide a footprint, how to connect the generic footprint's pads to the correct nets? AI: A main concept of KiCAD is the separation of symbols from footprints. There is no LM324-specific footprint; you have to pick the correct one. Some devices (usually ones which have exactly one footprint, or one category of footprint) have footprint filters added to them; all the others do not. In the case of the LM324, this device is available in eight different packages from the manufacturer, spread across several categories, so it makes sense to let the designer pick the appropriate footprint. In the schematic editor: Make sure you are using all of the correct "gates" of this chip. That is unit A, B, C, D, and E (the power pins), all of one device such as U1. Make corrections if it tries to rename any of them, such as mistakenly naming the power pins as U2E. Click the "Assign PCB footprints to schematic symbols" button. Assign the "Package_SO:SOIC-14_3.9x8.7mm_P1.27mm" footprint to U1. Click the "Generate Netlist" button. This saves a file with a text representation of all the connections present in the schematic. Of course, there must be wires or connections made for this to do anything. Close EEschema, open PCBnew, click the "Load Netlist" button. There should be no errors reported. Click "Update PCB" and place the parts. "Rat's nest" wires should now be visible. This should look like the following on version 5.1.5-1:
H: Do I need to solder the pin headers on the edges of a protoboard, or can I just attach them without solder? I have my first circuit on an Arduino protoboard, and I was wondering if I had to solder the header pins to the board like the other components. AI: What you are referring to as a Seesaw is actually a pin header; Adafruit makes a Seesaw microcontroller board, and you used the incorrect terminology. The pin headers must be soldered onto the PCB (Arduino protoboard) in order for the circuit to function.
H: Multi-Transistor Circuit I made a multistage transistor circuit to see, with such a high gain, what the output would be (bottom terminal to +5 V, top-right terminal to GND). I've seen circuits with three transistors in this fashion being used to detect a nearby AC source. The base of the first transistor is connected to an antenna-like structure, and when brought near an AC source, the LED turns on. So I thought to add more stages to see what else it could do. I expected that it might detect RF communication signals, or detect an AC source from much larger distances (I guess I was expecting a miracle out of a random build). But it showed weird behavior. When powered, it turns the light on, and when any part of the circuit is touched with a finger the light turns off. After releasing the finger, the light slowly rises to its maximum brightness again, until it is touched again. When it does not behave as just described, it behaves like this: when I move close to the circuit the light intensity increases, and when I move away it decreases. I have a vague idea that it may have something to do with the capacitance between me and the circuit, but I don't understand how exactly it works out for this circuit. Is this phenomenon well known, and is it studied under any subject? AI: Electromagnetic fields with free-space impedance allow one to capture a tiny bit of current, with your body acting as a dielectric antenna with some impedance into the amplified semiconductor switch driving the LED. What comes out would be a half-wave-rectified pulse of light from the radiated low-frequency grid-voltage electric field in V/m. Putting another finger in the middle could be shunting Vbe to cutoff. The electric field from a line in the wall will be somewhat inverse to distance and proportional to the length of you and the conductor as an antenna. simulate this circuit – Schematic created using CircuitLab maybe something like this
H: Comparing voltages bigger than V+ on an LM311 I want to use an LM311 comparator to compare my 12 V battery voltage to some reference I have, but I want it to output 5 V so I can read it with my microcontroller. Because of this, I'd think I have to set the V+ pin to 5 V. If I do this, can I still input 12 V to it? AI: The LM311 is not very good to power with just 5 V, as it will have a very restricted input range (only 0.5 V to 3.5 V when powered from 5 V). I would power VCC+ from a 12 V supply (possibly the voltage you are measuring, if appropriate). A more modern comparator would not have this restriction; even an LM393, which is 40 years old, would be better. The output is open collector, so you can use a pull-up resistor to 5 V to produce a 5 V logic signal even if powering it off 12 V. The allowable input range only extends up to (VCC+) - 1.5 V, which is 10.5 V if the supply is 12 V, so you cannot put the 12 V you wish to measure directly into the IN+ pin. You will need a divider to bring it below 10.5 V even when the 12 V level is at its highest (for example, if it could reach 15 V, then divide it so the input voltage is 10.5 V under that condition).
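Working out that divider is simple ratio arithmetic; the 3 kΩ / 7 kΩ pair below is one illustrative choice that gives exactly the 0.7 attenuation implied by a 15 V worst case into a 10.5 V limit:

```python
def divider_output(v_in, r_top, r_bottom):
    # Plain resistive divider: output taken across r_bottom
    return v_in * r_bottom / (r_top + r_bottom)

# 10.5 V / 15 V = 0.7, so e.g. 3k over 7k keeps a 15 V worst case at 10.5 V
print(divider_output(15.0, 3e3, 7e3), divider_output(12.0, 3e3, 7e3))
```

Remember to scale the reference on the other input by the same 0.7 factor so the comparison threshold stays where you intended.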
H: Did I solve the Thevenin equivalent circuit left of the terminals correctly? I get $$v_{Th} = v_{ab} = v_{4Ω} = 0.8V$$ The solution is $$v_{Th} = v_{ab} = 1.2V$$ Isn't 1.2V the voltage drop over the 6Ω resistor? AI: You are correct. The 2v source effectively eliminates the 6V source and the 6 ohm resistor in series with it since it is in parallel with those elements. Thus you are left with the 2V source across a simple voltage divider consisting of the 6 ohm resistor and the 4 ohm resistor. The voltage across terminals a and b will then be (4/(4+6)*2V or 0.8V. You are correct in that 1.2V is across the 6 ohm resistor.
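A quick numeric check of the divider reasoning, with the values from the problem (2 V source, 6 Ω and 4 Ω resistors):

```python
def v_thevenin(v_src=2.0, r_series=6.0, r_load=4.0):
    # With the 2 V source pinning the node (shorting out the 6 V branch),
    # V_ab is just the output of a 6R/4R divider driven by 2 V.
    return v_src * r_load / (r_series + r_load)

print(v_thevenin())  # 0.8 V across a-b; the remaining 1.2 V drops across the 6-ohm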
H: RCD Live and Neutral Protection I understand that an RCD device helps to provide protection against electric shocks when completing a circuit between Live and Earth wires, where the Neutral and Earth wires are separated before being wired into the consumer-unit/distribution-panel. My understanding is that the RCD device works by detecting a difference between the current in the Live wire and the return current in the Neutral wire. When a tolerance is exceeded the RCD device opens the circuit, to prevent further current flow. Does an RCD device provide any protection if the electric shock occurs between the Live and Neutral wires, given the current will still be returning on the Neutral path? AI: If all of the shock current flows between live and neutral then there is nothing the RCD can do. It does not know that the load it is presented with is a human body rather than a peice of electrical equipment. However it is worth noting that the most dangerous shocks are those that pass through the core of your body. Unless you do something really stupid like grabbiing the live with one hand and the neutral with the other, live to neutral shocks are most likely to be limited to an extremity, whereas live to earth shocks have a much greater chance of passing through the core of your body.
H: How to switch a leaded solder soldering station over to lead free solder I have a soldering station which used leaded solder. I now want to switch over to lead free solder and as such I want to clean the tip. How can I clean the tip of the soldering iron such that I can use the lead free solder with it? AI: In an ordinary engineering development situation of working on prototypes or other non-production items not needing to meet stringent material or longevity standards, switching solder types would (if thought about at all) typically be done by a few extra cycles of tinning and wiping the tip. As Leon points out, adjusting the iron temperature may also be needed. If you are facing a requirement which a few cycles of such re-tinning and wiping cannot satisfy, then you need a new tip, if not an entirely new setup and workspace. In functional terms if the decision is made to still use leaded solder at an engineering bench in an non-exempted industry where lead solder would not be permitted for production, then there's likely to be a mixing of alloys on an effectively continuous basis, since there's a reasonable chance that lead free solder was used in the initial assembly of any machine-built prototypes, and a near certainty that it was specified to be used on any production items that find their way back into the lab for failure analysis, as a basis for prototyping new ideas or versions, etc. In most cases, once something has been open on an engineering bench, it shouldn't be thought of as "product" any more anyway.
H: Factor of 1/2 in RMS Voltage Given the following circuit, I would like to calculate the Thevenin equivalent of the voltage in RMS. I know that R = 400 [Ω]; e(t) = 40 cos(10^4 t) [V]. However, I can't understand why there is a factor of 1/2: AI: It's a voltage divider. \$V_{eq} = (e(t)_{RMS}R_2)/(R_1+R_2)\$... but if \$R_1 = R_2\$, then \$V_{eq} = e(t)_{RMS}/2 \$
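The arithmetic behind that factor of 1/2 can be checked numerically; a minimal sketch assuming R1 = R2 = 400 Ω and the 40 V amplitude from the question:

```python
import math

def thevenin_rms(amplitude, r1, r2):
    """RMS of a sinusoid is amplitude / sqrt(2); the divider then scales by r2/(r1+r2)."""
    v_rms = amplitude / math.sqrt(2)
    return v_rms * r2 / (r1 + r2)

v_eq = thevenin_rms(40, 400, 400)
print(round(v_eq, 3))  # ≈ 14.142 V, i.e. half of the 28.284 V RMS source
```

With equal resistors the ratio r2/(r1+r2) is exactly 1/2, which is where the factor comes from.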
H: Reading a high-speed rotary encoder with a Raspberry Pi I'm trying to set up a Raspberry Pi to measure the position and speed of 8 DC motors that all have incremental quadrature encoders. At full speed (unloaded), each encoder ticks at 3.3 kHz. I assume that this means that I have to sample the encoders at >13.2 kHz to measure the position without missing any ticks. (Note: I only drive one or two motors at a time.) The current board that I have has an MCP23008 I/O expander on it to sample the encoders, but I think that the I2C communication is too slow. With a simple Python script, I can sample a single encoder at ~1kHz: import board import digitalio from adafruit_mcp230xx.mcp23008 import MCP23008 encoder_address=0x20 # Initialize I2C i2c = board.I2C() # Setup encoder reader (I/O expander) mcp = MCP23008(i2c, address=encoder_address) encoder1a = mcp.get_pin(6) encoder1b = mcp.get_pin(7) encoder1a.direction = digitalio.Direction.INPUT encoder1b.direction = digitalio.Direction.INPUT encoder1a.pull = digitalio.Pull.UP encoder1b.pull = digitalio.Pull.UP # Read encoders A = encoder1a.value B = encoder1b.value I also tried calling i2c-tools from my Python script, but this was insanely slow (~75Hz): import subprocess as sb output = sb.Popen("i2cget -y 1 0x20 0x09", shell=True, stdout=sb.PIPE).stdout.read() I then tried a simple C++ program, but this only got me up to 1.5 kHz (reading the entire GPIO register): /* encoder.h */ class Encoder { private: unsigned int device; // device address int file; // I2C file public: // Constructor Encoder(int device_address); // Read motor encoders unsigned char read_encoders(); }; /* encoder.cpp */ #include "encoder.h" #include<iostream> #include<sstream> #include<fcntl.h> #include<iomanip> #include<stdio.h> #include<unistd.h> #include<sys/ioctl.h> #include<linux/i2c.h> #include<linux/i2c-dev.h> Encoder::Encoder(int device_address) { std::string name = "/dev/i2c-1"; this->device = device_address; this->file = open(name.c_str(), O_RDWR); if (this->file < 0) 
{throw;} if (ioctl(file, I2C_SLAVE, device) < 0) {throw;} } unsigned char Encoder::read_encoders() { // Write to GPIO address unsigned char buffer[1] = {0x09}; if (::write(file, buffer, 1) < 0) {throw;}; // Read GPIO register unsigned char output[1]; if (::read(file, output, 1)<0) {throw;}; return output[0]; } My questions are: Is there any way to use my current board to sample up to 13.2 kHz? I.e., is there any way to make my Python/C++ programs run faster? Would it be better to connect the encoders directly to the GPIO pins? (I didn't want to do this initially because I have so many motors + other peripherals.) Or, is it necessary to use a dedicated microcontroller? I.e., a microcontroller that keeps track of the relative position, which I can then periodically send to the Raspberry Pi. I'm hoping to have a simple PID loop to control the speed of the motors + detect if they hit their limits and stop moving. AI: For this many motors, sampling becomes problematic. You need to switch to interrupts. There are several options for you to try. Connect the INT output of the MCP23008 to the RPi and read the IO expander only when you get a pin-change notification. If communication is still slow, replace your expander with an MCP23S08 and use the SPI interface instead of I2C. This chip can support clock speeds up to 10 MHz. If that is still not fast enough, connect the encoders to the RPi GPIO (with some simple RC filters). You'd need something like RPi.GPIO for this. Finally, if everything else fails, use some cheap MCU to do the decoding for you. In fact, you do not even need to send decoded pins to the RPi in this case. Simply implement PID on this MCU and connect the motor controllers to it as well, which will turn it into an 8-channel servo controller. Then communication becomes much simpler: target positions are sent to the MCU and current positions are sent back. And you will free almost all RPi pins for other needs.
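Whichever sampling option is chosen, the quadrature decoding itself can be a small state-transition table executed on each pin change. A sketch of that logic (the table and class names are illustrative, not tied to any particular library):

```python
# Quadrature transition table: index = (previous AB << 2) | current AB.
# +1 / -1 for valid steps, 0 for no change or an invalid (skipped) transition.
TRANSITIONS = [0, -1, +1, 0,
               +1, 0, 0, -1,
               -1, 0, 0, +1,
               0, +1, -1, 0]

class QuadratureDecoder:
    def __init__(self):
        self.state = 0      # previous 2-bit AB sample
        self.position = 0

    def update(self, a, b):
        """Call from the pin-change handler with the current A/B pin levels."""
        current = (a << 1) | b
        self.position += TRANSITIONS[(self.state << 2) | current]
        self.state = current
        return self.position

dec = QuadratureDecoder()
for a, b in [(1, 0), (1, 1), (0, 1), (0, 0)]:  # one full quadrature cycle
    dec.update(a, b)
print(dec.position)  # 4: four counts per full cycle in this direction
```

Invalid transitions (both pins changing at once, i.e. a missed sample) map to 0, so glitches do not corrupt the count, they are just lost.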
H: Why are GPU dies so much physically bigger than CPU dies? I understand that the physical sizes of microchips are generally limited by silicon yields. The larger your chip is, the more waste occurs when you hit a defect in the silicon and have to throw it away, and at some point this becomes unsupportable. I've noticed, however, that modern GPUs always seem to be significantly bigger than CPUs. High-end consumer GPUs seem to run in the 400-600 mm^2 range, with the RTX 2080 Ti sitting at a whopping 775 mm^2. It's a bit harder to find die sizes for high-end consumer CPUs, but it looks like the i9-9900K, for example, has a die size of 178 mm^2. The latest Ryzen generation has even smaller sizes, what with their "chiplet" architecture, with the largest single die being the I/O die at around 125 mm^2. Why are GPUs so much larger than CPUs? Is it something to do with silicon yields and chip architecture, or is there some kind of economic consideration? AI: GPUs are parallel processors. Their performance scales linearly with die size, while bigger dies can be run at lower clocks and so be more efficient while still being faster. Therefore, it makes sense to have the die as large as is economically feasible as it will be faster and more efficient. CPUs have much less parallelism, and so the return from adding more die area is much smaller, or for many applications, non-existent or even negative. Therefore there is a relatively optimal die size for a consumer CPU on each node, and it is generally quite small. The comparison to the 175mm^2 9900K is actually slightly misleading: only about 100mm^2 is CPU. The remainder is a relatively large GPU, video decoder, IO, etc. Since there is also a minimum die size needed to fit all the IO pins, low-power CPU dies each generation are often mostly GPU to fill up the unused die space.
H: Is there a way to secure an 18650 battery pack together without soldering, spot-welding, or an end cap kit? I'm putting together an 18650 battery pack for an e-bike. The BMS is specified for 10s 36v 40a. I want to avoid soldering, spot-welding, or an end cap kit if possible. I thought I would make each group of 10 cells in series to reach 36v, and then those groups would be in parallel. This would allow me to just use dual tubes of 5x2 cells a piece to reach 36v. But from what I'm reading now, my understanding is that a BMS does not work like this. I should actually connect each group in parallel to reach desired capacity, and then connect the groups together in series. Assuming this is correct, is there a way to make the pack in a similar way as I was planning before, with tubes instead of a brick? AI: Constructing battery packs without hard mechanical connections is usually a very bad idea. Or worse. In some cases the environment may not be hostile BUT an ebike is not such an environment. 40A is at least 10S2P for any 18650s. (If you know of any manufacturer rated 40A discharge 18650s please point me to them - I'd be genuinely interested). So <= 20A / cell in 2P. Maybe 6.7A or 5A in 3P and 4P arrangements (depending on cells used). The odds of all those connections remaining good in an ebike environment is minimal. Spot welding / tabbing is cheap and easy. A DIY spot welder is easy and cheap compared to all the other input required. Soldering is doable with fine bridge wires BUT a bad idea. LiIon cells should not be soldered. Fine wires plus luck, skill and speed may allow you to 'get away with it'. Or not. Soldering: Soldering is a VERY bad idea - especially if you are not already skilled in the art of soldering. Heating the end of a cell tends to break the electrical connection and heat elements of the interior that should not be heated. While some people report good results this is by no means certain. 
A "quick dab" to keep temperature rise low is a good way to create terrible joints. Spot welding: There are numerous DIY spot welder descriptions on web. This can be done using a battery as an energy source, either directly, or by adding a large capacitor. A usually available method is to use a microwave over transformer. Instructions are given in each case, but the basic method is to remove the high voltage secondary and add a few turns of thick wire as low voltage secondary. Some people build elaborate mechanical electrode assemblies but something very simple indeed will suffice. Here are many googlabet search links to DIY microwave transformer spot welders and here are many images - each linked to a related page.
H: Explaining the results of an RC simulation with LTspice/Micro-cap I built a basic schematic with a resistor, capacitor, ground and a battery in LTspice and MicroCap. I used transient analysis mode. I expected to see a plot with the current starting high and decaying to 0A, and the capacitor voltage starting at 0V and stabilising at Vbat. But I only get flat lines where the capacitor voltage is equal to the battery voltage from t = 0 and the current is 0A. I wanted to verify the time-to-charge rule for a capacitor, where T = 5RC. I played with various time ranges and time steps but no luck. What am I doing wrong? AI: For LTspice, you'll want to add an initial condition for the cap. There are multiple ways to do this, the simplest is to modify the part value like so: On a cap, ic sets the initial voltage. On an inductor it sets the initial current. You can also specify node voltages etc. The LTwiki topic on initial conditions has more details.
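Once the initial condition is set, the T = 5RC rule the asker wants to verify can also be checked analytically; a minimal sketch (the 10 kΩ, 100 nF and 5 V values are illustrative assumptions, not from the question):

```python
import math

def cap_voltage(v_supply, r, c, t):
    """Capacitor voltage while charging through R from a DC supply: V(1 - e^(-t/RC))."""
    return v_supply * (1 - math.exp(-t / (r * c)))

r, c, v = 10e3, 100e-9, 5.0   # 10 kΩ, 100 nF, 5 V supply (illustrative)
tau = r * c
print(cap_voltage(v, r, c, 5 * tau) / v)  # ≈ 0.9933: effectively "fully charged" at 5·RC
```

After five time constants the capacitor reaches about 99.3% of the supply voltage, which is the usual justification for the 5RC rule of thumb.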
H: Is it possible to only fill a Li-ion BMS partially? I'm putting together an 18650 battery pack for an e-bike. The BMS is specified for 10s 36v 40a. Is it possible to use the BMS with only, say, 2 or 3 of the 36v packs of cells I made? The use case would be for a lower capacity pack. Edit: Removed part of question after clarification about proper BMS wiring. AI: Is it possible to use the BMS with only, say, 2 or 3 of the 36v packs of cells I made? The use case would be for a lower capacity pack... If so, would I have to wire it in a particular way? Such as connecting the charge/discharge positive to B2-B3 instead of B10? You seem to be confused about capacity and number of cells. Connecting to B2-B3 would lower the voltage, not the capacity. It also probably wouldn't work with that BMS (Ali-express locks up my browser so I can't check its specs). If you have several 36V packs then you can wire them in parallel for higher capacity, but each one should have its own BMS. If you make the packs yourself then make sure the cells are all identical (same part number, same voltage within 0.03V, same age - preferably new) or the BMS may not be able to balance them (assuming it even has a balancer).
H: Why is grounding required in a transistor circuit? This picture is given in my textbook regarding an n-p-n transistor circuit: Why is the emitter grounded? Why can't it be directly and only connected to the negative terminal of Vcc? AI: In most circuits the Ground symbol just indicates the point in the circuit that we will call "Zero Volts". It rarely indicates an actual connection to the earth. In your circuit, "Ground/Zero Volts" is connected to the negative terminals of the Vcc and Vbb supplies. Voltages elsewhere in the circuit are measured relative to "Ground/Zero Volts".
H: Digital multimeter When we measure an electrical quantity on a digital multimeter (DMM), like current, if we don't get readings on the small range scale we switch the range until we get our reading; the same goes for voltage. But what is the quantity for which we do not change the scale on a DMM, and why? AI: ... if we don't get readings on the small range scale we switch the range until we get our reading ... No, you should start on a high range and work down to a more sensitive range. If you do it the other way you risk damaging your meter if the voltage or current is higher than expected. But what is the quantity for which we do not change the scale on a DMM, and why? This part of your question is not clear. It depends what you are measuring. If I am checking to see if 12 V or 24 V is present then sometimes I switch to a high range so that the meter displays 12 or 24 rather than 11.93 or 24.12, which can be a bit distracting. If I need to know the voltage with some precision then I select the lowest (most sensitive) range that will display the voltage without the "over-range" indication turning on (usually 1--- in the display).
H: Uploading sketch with teensyduino In case I upload a sketch to a teensy board via the Arduino IDE (teensyduino), should any of the TX/RX pins of the teensy be unoccupied? AI: All of the Teensy boards have a dedicated micro USB connector for programming via the Arduino IDE. The USB1_DP and USB1_DN (for the USB port) are the only pins that are reserved, so you can freely use all of the TX/RX pins that are available.
H: Is there a difference in connecting these two jack connectors? I'm making a pedal from this tutorial https://www.instructables.com/id/Overdrive-Pedal/ and the author is using this type of female jack connector: But I already bought this type of connector: The author's connector has 3 tabs to connect, and in step 13 he said: "Connect the black wire from the 9v battery snap to the remaining unused tab on the stereo jack." But I'm an amateur and I do not know how this connector works. I already used the 2 tabs to connect other wires, and there's no tab left. Is there a difference in connecting all wires to the same tab, or are these tabs only a method to organize my wires better? Will this connector still work (in my case) when the connector has only one tab and I connect all wires to the same tab? AI: Stereo jack connectors are often used in guitar/bass pedals to "enable" the pedal when a mono jack cable is connected, let me explain it with an image: The Sleeve is in contact with the ground part of the jack cable The Tip is in contact with the "tip" part of the jack, carrying the signal of the instrument The Ring is the extra tab necessary to "enable" the pedal, it is only present on stereo connectors. When the jack is plugged, the sleeve and the ring are shorted together (the jack cable is a mono one). When unplugged, they are not connected anymore. This mechanism is used as a battery switch, to save the battery when the pedal is unused. The stereo jack will allow you to save the battery of your pedal. If you don't have one, or you don't want to buy one, use a switch, connected between the minus of your battery and the ground of the circuit. You can still connect directly the ground and the minus of the battery together on your mono jack connector sleeve, but I strongly recommend you to remove the battery of your pedal when you don't use it, it won't last long.
H: Why is a 7D471K varistor often used for 220-240VAC surge protection rather than a lower nominal voltage one? In my "supposedly" surge-protected power strip there is a varistor for surge protection, configured to simply short-circuit the incoming wires in case of a surge. Checking the model, it's a MOV-7D471K, which Mouser reports as having the following specs: Voltage Rating DC: 517 VDC Clamping Voltage: 775 V Diameter: 7 mm Peak Surge Current: 1.2 kA Surge Energy Rating: 30 J Capacitance: 105 pF This has confused me, since these values seem too high to me considering that it is supposed to protect a 220-240VAC max line (European domestic distribution range), and searching the web it seems this MOV is often used for domestic surge protection. I have also seen that the MOV-10D221K seems to have values closer to 220-240VAC: Voltage Rating DC: 242 VDC Clamping Voltage: 360 V Diameter: 10 mm Peak Surge Current: 2.5 kA Surge Energy Rating: 32 J Capacitance: 450 pF Why is the MOV-7D471K used rather than the MOV-10D221K (or some other; suggestions are welcome)? Is my power strip badly configured, or am I missing something? AI: You must be reading the specs wrong. A 10D221K would blow up when connected to 240VAC mains, as mains have about 340V peak voltage, and so something else must be used. The 10D221K can handle max 140 V RMS or 180V DC, and starts conducting 1mA at voltages between 198 and 242 V. It will clamp to 360 V when 25 A is flowing. It just is not compatible with 240 VAC mains. The 7D471K can handle max 300V RMS or 385 VDC, starts conducting 1 mA between 423 and 517 V, and can clamp to 775 V when 10 A is flowing. No problem at 240 VAC mains.
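The peak-voltage figure in the answer follows directly from the RMS-to-peak conversion for a sinusoid; a minimal sketch:

```python
import math

def peak_voltage(v_rms):
    """Peak of a sinusoid from its RMS value: Vpk = Vrms * sqrt(2)."""
    return v_rms * math.sqrt(2)

print(round(peak_voltage(240)))  # ≈ 339 V, already above the 10D221K's 242 VDC rating
```

So even the normal mains peak, with no surge at all, would push the 10D221K into conduction.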
H: Why doesn't the voltage of the load fluctuate in a Zener Diode circuit? I'm very new to this field so sorry for this silly question. What I've learnt is that when any number of devices are connected in parallel with the battery (and none in series with the battery), the potential difference across all the devices is equal to that of the battery. If 2 resistors are connected in series in one 'line' of the parallel circuit, how the potential is spent by the two resistors, their values of resistances, etc. shouldn't affect the voltage of the other 'lines' of the parallel circuit. So when the Zener diode and the load are connected in parallel with the voltage source, whether the voltage is constant in the diode or not shouldn't stop the load from having the same potential difference as the voltage source, right? And yet, even though connected in parallel with the load, the diode manages to affect the voltage of the load. How is this accomplished? P.S - I may have misused some terms so sorry in advance :) AI: When using a Zener diode in parallel to regulate voltage, you do not connect the Zener directly to the voltage source. Instead, you connect it with a resistor (or other circuitry) capable of handling the voltage drop and current required. See this example circuit, where \$V_z\$ is the Zener voltage: simulate this circuit – Schematic created using CircuitLab The voltage across \$R_z\$ will be \$(Vcc-V_z)\$, while the voltage across \$R_{load}\$ will be \$V_z\$. The power rating of \$R_z\$ should be at least \$\frac{(Vcc-V_z)^2}{R_z}\$. You are correct in the statement that connecting components in parallel to an (ideal) voltage source results in that same voltage across each parallel branch. But when connecting things in series with a voltage source before parallel branches, then this no longer applies.
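As a worked example of sizing that series resistor, here is a hedged sketch (the 12 V supply, 5.1 V Zener, 20 mA load and 5 mA minimum Zener bias current are illustrative assumptions, not from the question):

```python
def zener_series_resistor(vcc, v_z, i_load, i_z_min=0.005):
    """Series resistance dropping (Vcc - Vz) while carrying the load current
    plus a minimum Zener bias current; returns (R, power dissipated in R)."""
    i_total = i_load + i_z_min
    r = (vcc - v_z) / i_total
    p = (vcc - v_z) ** 2 / r
    return r, p

r, p = zener_series_resistor(vcc=12.0, v_z=5.1, i_load=0.020)
print(round(r, 1), round(p, 2))  # ≈ 276 Ω, dissipating ≈ 0.17 W
```

The power figure matches the answer's \$(Vcc-V_z)^2/R_z\$ expression, so a 1/4 W resistor would be marginal here and a 1/2 W part the safer pick.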
H: Transmission Line with Short Circuit and Inductor as Loads Let's consider this situation in which the two ports of a transmission line are connected to a short circuit and to an inductor: The transmission line is supposed to be lossless. My book makes a mathematical analysis in order to find the resonance frequency (precisely, resonance frequencies) of this network, which may start to oscillate. But my question is: how can it start to oscillate on its own (since it is a passive device)? I have understood that there are no losses, but I think some form of excitation or accumulated energy in the circuit is necessary (for example, to make an LC circuit oscillate, the capacitor must be pre-charged at the time of closing the switch). Oscillation means voltage + current and so energy. And it cannot create energy from 0. AI: how can it start to oscillate on its own It will not oscillate in steady state (assuming either the lines or the inductor are real devices with some loss). If the inductor is initially charged (by some other circuit, not shown in your diagram), it may oscillate for some time until the stored energy is dispersed through the loss mechanisms. to make an LC circuit oscillate, the capacity must be pre-charged at the time of closing the switch Not true. You could leave the capacitor uncharged, and pre-charge the inductor (with an initial current) instead. Oscillation means voltage + current and so energy. And it cannot create energy from 0. Correct. This circuit will not start oscillating on its own. It needs some other way to obtain an initial stored energy before it can oscillate. In the real world (no lossless components) it will (like any other oscillator) need continuous energy input to maintain a steady state oscillation.
H: What will happen to TLV1117 (5V / 3.3V) from reverse voltage? There is a step-down converter that receives a 5V input and outputs 3.3V. It powers an ATmega328P microcontroller. TLV1117 TI To flash the controller's firmware, I will use a USBASP programmer. It connects to the ISP connector (MISO / MOSI / SCK / RST / 3.3V / GND). During programming, the 3.3V line will be powered from the programmer. What will happen in this case with the converter's input unpowered? Power will be back-fed into the converter's output. Is it necessary to put a diode on the 3.3V output? AI: Is it necessary to put a diode on the 3.3V output? No. The regulator output will draw a few mA when voltage is back-fed into it from the USBASP, but this won't harm it. However it could be damaged if there is a large amount of uncharged input filter capacitance. To prevent this you should disconnect the power supply unit when programming, and only have a small capacitor (eg. 10uF) close to the regulator input. A diode (preferably Schottky type for low voltage drop) between the power supply and regulator (before the 10uF capacitor) will also help to isolate it from the power supply capacitance, as well as protecting it from reverse voltage.
H: How to identify branches of root locus from figure/plot? How can we identify the branches in a root locus? For example, how many branches do we have in the root locus below (I feel from its appearance it has one branch)? And how many branches in this root locus below (I feel from its appearance it has two branches)? AI: The number of branches will be equal to your number of poles. The different colored lines should indicate each branch (if you're colorblind, interpretation might be challenging). When two poles collide with each other (departure), they will go off to try to meet with a zero (arrival). The trajectory to the pole would be a different subject but that part is called the "angle of departure". Once it arrives at a zero, that's called the "angle of arrival". If there are no zeros around that pole, it could go off into infinity depending on your value \$K\$. Each of these departures is considered a path. (I could be missing something here as it's been a while since I've done a root locus.)
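The branch count can also be demonstrated numerically: the closed-loop characteristic polynomial D(s) + K·N(s) has as many roots as there are open-loop poles, for every gain K, so each branch traces one root. A sketch with an illustrative G(s) = 1/(s^2 + 2s + 2) (two poles, no zeros; not from the question's plots):

```python
import numpy as np

# Illustrative open-loop transfer function G(s) = 1 / (s^2 + 2s + 2)
num = np.array([1.0])
den = np.array([1.0, 2.0, 2.0])

for k in (0.1, 1.0, 10.0):
    # Closed-loop characteristic polynomial: den(s) + K * num(s)
    char_poly = den.copy()
    char_poly[-len(num):] += k * num
    roots = np.roots(char_poly)
    print(k, roots)  # always two roots -> two branches, one per open-loop pole
```

Sweeping K over a fine grid and plotting the roots in the complex plane reproduces the root-locus branches exactly.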
H: Doubt in SR Flip-flop I am a beginner in digital logic. Please clarify the following: in the case of the set condition with Qn=0, I am getting a wrong answer - Q_{n+1} and (Q_{n+1})' are not complements of each other. I am not able to find out where I made a mistake. Kindly have a look at the following. AI: Qn=0 has changed into Qn+1=1. Once you propagate Qn+1 to the lower gate and reevaluate its output, you'll see that there's no contradiction.
H: What is this component, labelled as "RT" on the PCB? Can someone tell me the name of the below component? It is labelled as RT on the PCB: AI: This should be a Littelfuse SMD resettable fuse. The miniSMD series provides surface mount overcurrent protection for applications where space is at a premium and resettable protection is desired. https://www.littelfuse.com/products/polyswitch-resettable-ptcs/surface-mount/minismd.aspx
H: Switching between 2 power sources using just a switch So, recently while making a project, I thought of a way to switch between 2 power sources just using a single 3 pin slide switch. Would something like the image below work, where I connect a common ground to the middle pin and one power source to each of the other pins? AI: As drawn, the common terminal of the switch will be powered by either "Power Source 1" or "Power Source 2" -- so connect the common terminal to the positive terminal of the load, not GND. Circuitlab doesn't have a slide switch symbol, I'm using SW1 (a SPDT switch) to represent your slide switch. The center terminal of your slide switch is the common terminal of SW1: simulate this circuit – Schematic created using CircuitLab The generic load is represented by resistor R1 and LED D1, and the two power sources are represented by V1 and V2. Note that both power sources share a common ground return from the load.
H: Voltage drop in a conductor My textbook, Practical Electronics for Inventors, Fourth Edition, by Scherz and Monk, says the following in section 2.3.1 The Mechanisms of Voltage: In regard to potential energies of free electrons within the conductors leading to and from the battery, we assume all electrons within the same conductor have the same potential energy. This assumes that there is no voltage difference between points in the same conductor. For example, if you take a voltmeter and place it between any two points of a single conductor, it will measure 0 V. (See Fig. 2.8.) For practical purposes, we accept this as true. However, in reality it isn't. There is a slight voltage drop through a conductor, and if we had a voltmeter that was extremely accurate we might measure a voltage drop of 0.00001 V or so, depending on the length of the conductor, current flow, and conductor material type. This is attributed to internal resistance within conductors - a topic we'll cover in a moment. I can't help but wonder if the authors are understating the potential for voltage drop through a conductor. Their estimation of a 0.00001 V drop seems reasonable for a relatively small electrical circuit, but what about for large-scale power systems, where the conductor may span many miles (say, an undersea cable, an electrical grid (although, I assume that electrical grids have hardware in place to "boost" the voltage between power plants and delivery destinations), a solar farm in the desert, etc.)? In my, perhaps naive, mind, I wonder if the voltage drop would not just be minor in such situations, but significant enough to cause practical problems? I would appreciate clarification on this. Thank you. AI: What's your conductor? Copper? Aluminum? The conductivity of these materials is well known. Copper is \$5.96\times 10^7 {\rm \frac{S}{m}}\$. It varies a little depending on temperature and how the metal was worked. 
The resistance of a round wire is easy to calculate from the conductivity of the material, \$R=\frac{l}{A\sigma}\$, where \$A\$ is the cross-sectional area of the wire, \$l\$ is the length of the wire, and \$\sigma\$ is the conductivity of the material. So for any given length and diameter of round copper wire, you can calculate the resistance. Let's say 1 km of 10 mm diameter. Then you have $$R=\frac{(1000\ {\rm m})}{\pi (0.005 {\rm m})^2(5.96\times 10^7\ {\rm\frac{S}{m}})}=0.21\ \Omega$$ And the voltage drop along the wire is given by Ohm's law: \$V=IR\$. So if you have 1 A flowing through this wire, 0.21 V drop. If you have 100 A flowing through this wire, 21 V drop. Whether that's significant or not depends on the voltage being used. 21 V drop on a 50 kV transmission line is pretty small potatoes. There's a lot more to it than this, like aluminum is a worse conductor on a volume basis, but generally better on a dollar basis; and skin effect increases the effective resistance if you're using high frequency AC; and in an AC power network you can use transformers to trade off the operating voltage with the current while still delivering the same power to the load; etc. Put this all together and the answer is, yes, it can be an issue. The guys who design power transmission lines have to choose the wire diameter (in relation to all the other system parameters and requirements) to avoid it being a problem.
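The calculation above is easy to reproduce; a minimal sketch using the same copper conductivity, length and diameter:

```python
import math

def wire_resistance(length_m, diameter_m, conductivity=5.96e7):
    """Resistance of a round wire: R = l / (A * sigma); copper by default."""
    area = math.pi * (diameter_m / 2) ** 2
    return length_m / (area * conductivity)

r = wire_resistance(1000, 0.010)           # 1 km of 10 mm diameter copper
print(round(r, 3))                          # ≈ 0.214 Ω, matching the ~0.21 Ω above
print(round(1 * r, 2), round(100 * r, 1))   # V = I·R: ≈ 0.21 V at 1 A, ≈ 21.4 V at 100 A
```

Scaling the length, diameter, or current immediately shows when the drop stops being negligible for a given system voltage.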
H: PCB Design - How much length can I2C lines differ by? I am designing a PCB that uses I2C communication and was wondering how much length the SCL and SDA lines can differ by. EDIT: Clock frequency can be anything from 95kHz to 400kHz. Is there a standard? EDIT 2: Pull ups are 10kΩ. Is it an issue if the pull up is high? and what if there are multiple pull -ups on the SCL and SDA lines? AI: For any distance over which I2C is a viable means of communication, and certainly within a single PCB, there is no need for any trace length matching constraint between SCL and SDA. It won't have any noticeable effect on the signal integrity or timing margins.
H: Confusion in understanding characteristic impedance and input impedance of an open circuited transmission line The characteristic impedance Zo can be derived by considering LC sections throughout the line with an open circuit at the end, and this value is independent of the length of the transmission line. But when we calculate the input impedance for a line of finite length (l) with an open circuit at the end, we get Zo * coth(gamma x l). Since the characteristic impedance is independent of length, we get the same value everywhere, but when we calculate the input impedance for the open circuit, we get a different value. I am unable to understand why that is happening. Since I believe both are impedances and can be calculated in a similar fashion, the value in both cases should be the same. AI: If the transmission line is either infinitely long or terminated with the characteristic impedance, there are no reflections back to the input port, so the input impedance equals the characteristic impedance. If you have reflections, as in the case of leaving the transmission line end open, the signal reflects back from the end to the input. So the signal being fed to the input port not only drives the transmission line but has to drive against the reflected wave too. The phase of the reflected wave depends on the time travelled on the line and thus the length of the line, which is why the input port appears to have a different impedance based on transmission line length.
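The length dependence is easy to see numerically for a lossless line, where gamma = j·beta and Zo·coth(gamma·l) reduces to -j·Zo·cot(beta·l); a sketch with illustrative values (50 Ω line, 1 m wavelength, not from the question):

```python
import cmath

def input_impedance_open(z0, beta, length):
    """Input impedance of a lossless open-circuited line: Z0 * coth(j*beta*l)."""
    gamma_l = 1j * beta * length
    return z0 / cmath.tanh(gamma_l)   # coth = 1/tanh

z0 = 50.0
beta = 2 * cmath.pi / 1.0   # phase constant for a 1 m wavelength (illustrative)
for l in (0.10, 0.125, 0.25, 0.40):
    print(l, input_impedance_open(z0, beta, l))
```

The characteristic impedance stays 50 Ω throughout, while the printed input impedance swings from capacitive through (nearly) zero to inductive as the length changes, which is exactly the distinction the question is about.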
H: MAX4080 REF pins I bought the MAX4080 module and, as usual, read the datasheet after purchasing it. In the datasheet pins 6 and 7 are NC, but on the module those two pins are connected as F1B and F1A: What are the F1B and F1A pins for? Does MAX4080SASA mean it has a gain of 60? AI: These pins are only used for the MAX4081. They used the same dev board for the MAX4080/4081. If you work with the MAX4080, you can ignore these two pins. Yes, the MAX4080SASA has a gain of 60 in an SO8 package.
H: Cannot get required gain with Gilbert cell in LTspice I designed a very basic Gilbert cell using BJTs. The function is a DSB modulator where one input is a 2.5 mVpp sine wave, and the other a 5 Vpp square wave. The current source at the bottom is a simple current source with a biasing emitter resistor. I tried changing the current from 10 to 200 mA and also the two resistors at the top, but could not make the modulated signal higher than 25 µV. What is the problem? The gain should be merely 10. AI: You "blindly" connected square wave sources (V3, V4) to the inputs. The problem is that you do not define a proper common-mode level. I would try to make the common-mode voltage of V3 something around +10 V and for V4 use +5 V. ALWAYS first get the DC solution in order (check the biasing of each transistor) before applying signals to it. Applying proper biasing is essential and a step that you missed! Here's how I make a differential signal with a properly defined common-mode level: simulate this circuit – Schematic created using CircuitLab Vcmm sets the common-mode voltage Vin is the input voltage, note that if I want a 1 V peak-peak differential signal, I need to give Vin an amplitude of 0.5 V (not 1 V)
H: What does \$x^*(t)=x(t)\$ mean in signals and systems? I am often seeing this notation \$x^*(t)=x(t)\$ or similar but I cannot remember when I saw it the first time and I cannot find anywhere that explains the meaning of that notation. What does it mean? AI: \$x^*(t)\$ is the complex conjugate of \$x(t)\$ So, when \$x(t)\$ is defined as $$x(t)=a+bj$$ then $$x^*(t)=a-bj$$. When $$x^*(t)=x(t)$$ it means $$a+bj=a-bj$$ This is only true when \$b=0\$, so \$x(t)\$ has to be a real number.
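The condition can be illustrated for a single sample value; a minimal sketch:

```python
z_real = complex(3.0, 0.0)
z_complex = complex(3.0, 4.0)

# x*(t) == x(t) holds exactly when the imaginary part is zero
print(z_real.conjugate() == z_real)        # True: 3+0j equals its own conjugate
print(z_complex.conjugate() == z_complex)  # False: 3-4j != 3+4j
```

For a whole signal, \$x^*(t)=x(t)\$ simply means this holds at every t, i.e. the signal is real-valued.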
H: Two different inputs in a buck stage? I am designing a power stage using the LMZM23601V3. In normal conditions there is no 5V USB supply, only +15V; during programming there is no +15V, just 5V from USB. I want to protect the only component using 5V from the +15V, and that's why I am using a diode. Is this a good approach, or are there other solutions? Thanks AI: Your selection of the SS26FL looks like a good choice for isolating the 5VUSB from the other +15V input power rail. This is a 1A Schottky diode that looks well suited to the application. This method of isolation is pretty standard and very simple to implement. The use of a Schottky diode is useful because its reduced voltage drop versus some other regular silicon diode allows a higher voltage to the voltage regulator in USB mode. This reduces power loss in the diode and will lend a small performance increase to the switching converter chip. I also looked at the data sheet for your LMZM23601V3 chip and see that it is also rated for 1A output, so the above diode seems to be a good match as well since the average input current to the regulator will be less than the output current in this step down application. If you have not yet used TI's online switching voltage regulator design tool you should take a look at that. It can give you strong recommendations on proper capacitor types and values for your input and output caps.
H: Buck inductor current ratings We are designing a buck converter using the MIC23350 to generate 1.8 V @ 2 A. Input: 2.5 to 5.5 V; typical is 3.3 V or 5 V. Switching frequency: ~1100 kHz. Nominal current is ~0.5 A; maximum current can be 30% higher. Per the inductance calculation with a 30% ripple factor, the minimum inductance is ~1.8 µH (Vin = 5 V, Vout = 1.8 V, Iout = 2 A, ripple = 0.6 A, frequency = 1.1 MHz). Peak currents are ~2.5 A. If I consider only the peak-to-peak inductor current and the RMS current, I get smaller inductor sizes (3.5 mm × 3.2 mm). I see app notes saying the inductor saturation current should account for the current limit of the IC. If I follow that, I should select an inductor with Isat > 6.5 A, which increases the size of the inductor (currently I selected an inductor with Isat = 8 A). Due to size concerns, if I go with an inductor having a 4 A saturation current and a current-limit condition occurs where the current reaches 6.5 A, I understand the inductance will drop, but can this damage the inductor? If I go ahead with the small-size inductor (no consideration given to the current limit of the IC), what worst-case effects can happen to the design? AI: When the top switch is ON, the inductor has a constant voltage \$ E \$ across it, so its current will increase linearly with time according to \$ di/dt = E/L\$, which results in the usual triangular inductor current waveform. When the core material gets close to saturation, inductance decreases, which means \$ di/dt \$ increases. This results in a current waveform which is no longer triangular, but "spiky". Once saturation is reached, current "runs away" quickly. Saturation is visible in the scope shot from this article: This is a situation you want to avoid in normal use at your rated load current, because it will increase losses. However, as you noticed, you have a compromise to make.
Even though during normal use inductor current will peak at 2.5A, which means a small inductor saturating above this current would be suitable, current may go much higher in case of a short circuit on the output, or during startup if the load has lots of capacitance and the soft-start ramp-up is too fast. In this case, the DC-DC chip will go up to its internal current limit, but once this limit is exceeded, it does take a certain amount of time to turn off the FET. Since the saturated inductor has much lower inductance, current increases much faster than a calculation based on the original inductor value, so by the time the FET is being turned off, current may have reached unsafe levels. If the top FET fails shorted, input voltage will be connected to output so the rest of the board will get overvoltage and fry. Unless the bottom FET also fails shorted, in which case the chip will short the supply and test its current limiting. Fortunately (see paragraph 4.14 in datasheet) this particular chip has a clever protection which will sleep for 1ms after 8 cycles tripping the overcurrent detection, and if this goes on it will eventually shut down. So, you don't have to overdesign to avoid overheating in a continuous short, all you have to do is avoid destroying the FET after 8 cycles. Saturation is not an ON/OFF phenomenon, rather it is a gradual phenomenon. Some core materials have a soft "knee" in the inductance vs current curve, and others have a harder knee. So, since you want a small inductor, I'd go with one which begins to saturate a bit below the overcurrent limit (say, 4A) but still keeps at least some inductance up to 7A. Maybe a soft-saturation although I've never used these. Another option is to lower the current limit, or pick a chip which has an adjustable current limit.
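As a sanity check on the numbers quoted in the question, here is a quick sketch of the standard buck ripple formula ΔI = Vout·(1 − Vout/Vin)/(L·f), rearranged for the minimum inductance (the operating point is the one stated in the question; the helper name is my own):

```python
# Sketch of the ripple / minimum-inductance numbers quoted above
# (assumed operating point: Vin = 5 V, Vout = 1.8 V, Iout = 2 A, f = 1.1 MHz).

def buck_min_inductance(vin, vout, f_sw, delta_i):
    """Inductance needed for a chosen peak-to-peak ripple current delta_i."""
    duty = vout / vin
    return vout * (1.0 - duty) / (f_sw * delta_i)

vin, vout, iout, f_sw = 5.0, 1.8, 2.0, 1.1e6
delta_i = 0.3 * iout            # 30 % ripple factor -> 0.6 A pk-pk
l_min = buck_min_inductance(vin, vout, f_sw, delta_i)
i_peak = iout + delta_i / 2.0   # inductor peak current at full load

print(f"L_min  ~ {l_min * 1e6:.2f} uH")   # in line with the ~1.8 uH above
print(f"I_peak ~ {i_peak:.2f} A")
```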
H: Why voltage regulators instead of voltage dividers for supplying power to loads? I have to supply 80 Raspberry Pis with power, so I began looking into it and found a power supply of 12 V 20 A (which might not be enough current to supply 10 Pis that typically draw 1.5 A - 2 A). This meant that I would need to drop 12 V to 5 V, which I was originally planning to do using resistors in a voltage divider. However, with a bit of brief research I found you should never use a voltage divider as a power supply for loads, due to the power that needs to be dissipated. So I discovered that a voltage regulator was the way to go. However, what I don't understand is: if a schematic of a voltage regulator component shows it is built from many resistors and transistors, wouldn't it suffer the exact same problems as a voltage divider? Wouldn't the resistors internally burn up due to the power dissipation? Or is it due to the heat sink attached to the component, which would mean that, hypothetically, if you could attach a heat sink to a resistor it would be fine to use a voltage divider to supply a load? The reason I ask is to work out the best way to efficiently and safely supply 5 V at 1-2 A to all 80 Pis. I'm aware a few computers is a smarter option; however, the project involves 80 Pis and that's the way it must remain. AI: No [it's not a duplicate of "When would I use a voltage regulator vs voltage divider?"] because I know why they are used differently and their applications; I wanted to know the physics of why the resistors inside the [linear voltage regulator] component wouldn't burn, in comparison to a voltage divider. OK, I think I see the question you are asking, and the answer is fairly simple: with a voltage divider comprising only resistor components (which is typically what people mean when they talk about voltage dividers in this situation), the current for the whole load goes through the "upper" resistor.
One of the effects of this (as well as poor regulation) is that the resistor has to be able to dissipate all the heat caused by passing that load current. In this type of circuit, the resistors have to be comparatively low values, to reduce the effect of the load current on the voltage divider's "output" voltage. However using low resistor values increases the overall current flowing through the voltage divider to ground, and so increases the power dissipation in those resistors. Using a linear voltage regulator IC, whether its feedback resistors are external or internal to the voltage regulator itself, the load current does not flow through those feedback resistors. Instead, the load current goes through what is called a "pass element" e.g. a transistor. This difference means that the feedback resistors for a linear voltage regulator (and I'm addressing just your question above, about the resistors) only dissipate a small power since they only pass a tiny current, which is not related to the current required by the load. Those feedback resistors can be comparatively much higher in value, than the resistors in a "simple resistor-only voltage divider". For example, in page 1 of this datasheet for the old Signetics 7800 series, R19 and R20 are the feedback resistors (shown as 0.25kΩ + 5kΩ) so the current through them is just under 1mA at 5V output. The point is that this small current through those resistors stays approximately constant (and so does their power dissipation), no matter what the load current is. (There is also this interesting webpage from Ken Shirriff, where he reverse-engineers a 7805 regulator. On that 7805 schematic, the feedback resistor divider is labelled R20 + R21.) The pass element (e.g. BJT or FET) in a linear voltage regulator behaves like a variable resistor, under the control of an "error amplifier" (see below) and dissipates the same amount of power as the "upper resistor" in the equivalent voltage divider scenario. 
Wouldn't the resistors [inside the linear voltage regulator] burn up due to the power dissipation? No, it's the pass element (e.g. BJT or FET) which can dissipate significant power (and is designed for this, with heatsinking added by the circuit designer where necessary) - not the feedback resistors for the linear regulator, which don't dissipate enough power to "burn up". That pass element can be internal to a linear voltage regulator IC (typical these days), or external to it, or a combination of both, depending on the regulator IC and the circuit designer's choices. In case it helps to see it, here is a block diagram of one type of linear voltage regulator. The load is connected to the VO terminals: (Image source: From "Figure 1 LDO block diagram" of Linear Low Dropout Voltage Regulators, from Analog Devices ADALM1000 Active Learning Module) The series pass element (in the diagram above, it's a P-Channel MOSFET) still dissipates a power related to the load current (P = (VI - VO)·IO approximately). The feedback resistors are termed "Sampling Resistors" in that diagram. As I explained, the load current IO does not flow through those sampling (feedback) resistors. The "Error Amplifier" (measuring the difference between the reference voltage VR and VS which is the output voltage via the divider formed by sampling / feedback resistors R1 and R2) varies the effective resistance of the pass element, as the output voltage (and therefore VS) changes (whereas the reference voltage VREF and therefore VR, would be stable in an ideal regulator). Does that explain what I think you are looking for in the question above, about why the resistors in a "pure resistor voltage divider" get hotter than the feedback resistors in a linear voltage regulator? As the question has developed after I originally posted this answer, it's clear that a good approach to the whole problem is unlikely to involve a linear voltage regulator (or pure resistor voltage divider) at all. 
Instead, it may involve a buck-mode switching regulator (e.g. 12V to 5V) - perhaps several of them (e.g. one per RPi, or per several RPi boards). There are advantages & disadvantages of using one or more 12V PSUs (and additional buck regulators down to 5V) or using one or more 5V PSUs, depending on various factors (e.g. voltage drop over the DC power cabling). This has been explained in another answer.
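To put rough numbers on why the divider/linear approach loses out here, a quick back-of-envelope comparison for one 12 V to 5 V rail (the 2 A load and the 90% buck efficiency are assumptions for illustration):

```python
# Rough loss comparison for one 12 V -> 5 V @ 2 A rail
# (illustrative numbers; the 90 % buck efficiency is an assumption).

v_in, v_out, i_load = 12.0, 5.0, 2.0

# Linear regulator pass element (or the upper resistor of a divider):
# the full load current flows through the 7 V drop.
p_linear = (v_in - v_out) * i_load

# Buck converter at an assumed 90 % efficiency.
eff = 0.90
p_out = v_out * i_load
p_buck_loss = p_out / eff - p_out

print(f"linear / divider loss: {p_linear:.1f} W per Pi")
print(f"buck loss (~90% eff):  {p_buck_loss:.2f} W per Pi")
```

Multiplied by 80 boards, the linear approach would burn over a kilowatt as heat, which is why a switching solution is the practical choice here.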
H: ESP8266-01 module unable to flash or communicate I have an ESP8266 module which is not communicating with the Arduino IDE serial monitor unless I press the reset button on the ESP setup. I tried erasing and reflashing the ESP8266 several times, but it doesn't respond to my AT commands from the serial monitor. At 74880 baud it does show something every time I reset the ESP manually. I want to know, step by step, how to properly flash the firmware and get the AT commands to work. I am using the ESP8266-01 model with a 26 MHz clock and 8 Mbit flash. I have connected Arduino Tx to ESP Tx, Arduino Rx to ESP Rx, and GPIO-0 to GND while programming. I have supplied power from the Arduino 3.3 V, and also connected CH_PD to 3.3 V. Sometimes at 115200 baud I get fatal exception(0) and some errors, but that was before I erased the flash. AI: Well, first you have to make sure that the connections are proper. Since the PC is talking to the ESP8266 via the Arduino serial terminal: Rx --> Rx, Tx --> Tx through a 3.3 V voltage divider, as the ESP8266 is a 3.3 V device while the serial voltages from the Arduino FTDI are 5 V. CH_PD, RST --> 3.3 V; GPIO0, GND --> GND. It is much better to use an external supply (other than the Arduino's 3.3 V) for the ESP8266, as it is power-hungry at times. Make sure all ground connections are correct. My ESP8266 has two LEDs; I guess yours does too. Check whether the blue LED of the ESP8266 blinked 2-3 times quickly and went OFF, and whether the red LED lighted up and stays ON. If so, your ESP8266 chip is working fine. Good for you! If the blue LED didn't blink, or if it stays ON, hang on! Your firmware may be corrupt and you will need to re-flash the ESP8266 firmware; here's one method I used in the past. If everything is okay by now, go ahead. Finding the baud rate: you can find the default baud rate of the ESP8266 by trial and error. Send the 'AT' command at different baud rates until you get an 'OK' response in the serial terminal. You can then configure the baud rate to the required value.
Then go through the AT commands.
H: I2C pull-up to multiple voltages question Quick rundown of what's going on: I am using two PIC32MX devices, both able to be I2C master. They will be remotely controlled to determine which one is the master at any given moment. For redundancy, each has its own 3.3V regulator sourced from the same 5V. In my original design, I was pulling up the I2C lines to the local 5V. Now I'm thinking I should pull up to 3.3V instead. However, I'm not entirely sure what consequences I might see if I pull up the lines to their respective 3.3V rails. They are connected to the same I2C bus. Questions: Am I right to worry, or is there actually nothing to worry about? Should I have only one set of pull-ups, because they'll act in parallel? Am I better off just pulling up to the local 5V? The PIC I2C lines are 5V tolerant, but it's a 3.3V device. AI: You only need one pair of pull-up resistors for the entire bus. Adding them at each node will create too strong a pull-up. This, consequently, answers your other concerns: you can connect the pull-ups to either 3.3V source and be OK. However, there is a caveat: most MCUs and other I2C devices have a maximum pin voltage specified relative to VCC, e.g. "VCC + 0.5V". This means that if any device on the bus loses its power for some reason, its pins will immediately go beyond the maximum allowed range. The pull-up resistors will limit the current, hopefully preventing burn-outs. But for maximum reliability I'd suggest using one big 3.3V supply to power all devices on the bus and provide the voltage for the pull-ups too.
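For completeness, a hedged sketch of how the single pull-up pair might be sized: the V_OL/I_OL limits and rise-time figure below are the I2C fast-mode specification values, while the bus capacitance is a guess that should be replaced with an estimate for the actual board:

```python
# Sizing the single pull-up pair. V_OL = 0.4 V at 3 mA sink and t_r = 300 ns
# are the I2C fast-mode spec limits; the 100 pF bus capacitance is a guess.

v_dd   = 3.3
v_ol   = 0.4      # max low-level output voltage (spec)
i_ol   = 3e-3     # sink current at which V_OL is specified
t_rise = 300e-9   # allowed rise time, fast mode
c_bus  = 100e-12  # estimated total bus capacitance

r_min = (v_dd - v_ol) / i_ol        # stronger pull-up overloads the drivers
r_max = t_rise / (0.8473 * c_bus)   # weaker pull-up misses the rise-time spec

print(f"R_min ~ {r_min:.0f} ohm, R_max ~ {r_max:.0f} ohm")
```

Anything in that window (e.g. a common value like 2.2 kΩ here) would be a reasonable starting point.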
H: KiCad: How to give nets / wires multiple names I'm designing a switching power supply using a BD9G101G step-down regulator. The datasheet is here: http://rohmfs.rohm.com/en/products/databook/datasheet/ic/power/switching_regulator/bd9g101g-e.pdf. If you look on page 18 there is a simplified board layout (fig. 47). There are two areas which need special consideration: SGND and POWERGND. Logically they are both at GND level but need some special routing. It would be very helpful if I could somehow name the nets accordingly in the KiCad schematic/pcbnew. I tried designing a kind of net-separator symbol (which works), but I can't find/make a footprint which I can place on the layout without getting into trouble. So what would be a way to do this? EDIT: Removed screenshot because it was leading too far away from the actual question. SOLUTION: Francois was first to mention net-ties (and why not to use them). But special thanks to Dennis :) AI: Alright, so the figure you are referring to is a bit confusing. Rohm isn't actually requiring 2 separate grounds on your board, but pointing out which ground shape is used for the switching-current return path (POWER GND) and which is intended to be the "low-noise" ground for the feedback node (SGND). In your design, these 2 shapes should refer to one common ground (GND) and be connected using the vias shown in the figure and the back-plane (not shown). The reason why they have annotated them differently is that the SGND and PGND shapes are actually two separate shapes on the layer shown (top), and PGND currents shouldn't try to return through SGND as there is no path out. In most applications, it is very highly recommended to use one unique return path (= GND) for all currents; otherwise you end up creating unnecessary ground return loops which could cause a lot of headaches for other circuits (you do not want to go down this path).
The only times you may consider splitting the ground return paths could be, for instance, in very specific application where you have a highly sensitive/low-noise circuit (example: analog, audio or RF circuit) and a high-risk of exposure to ground noise/bounce. In this case, you can create a return path for your general and higher-current ground return to avoid the low-noise circuit and tie the 2 grounds closer to the source (eg. power connector in most case) using a net tie or jumper resistor. Doing this, you are basically forcing the return currents to follow a defined path, however it takes much more effort to implement as you have to consider all potential signals shared between the low-noise circuit and the rest of the board and be able to keep references when jumping from one ground to the other (which could potentially void the ground separation if not done with ultra-care), this is even more important for high-speed signals (example: SPI / I2S). It's really a big mess to separate ground return paths, only do it if you really have to and know what you are doing. To come back to your initial question, I am only aware of 2 options to tie 2 nets together which are jumper resistors and net ties. However, do not use them here :)
H: Voltage on a Transmission Line with Ideal Conductors Let's consider a transmission line with perfect electric conductors. We know that if an external AC source is applied, we get a voltage waveform between the conductors which is a function of position (and also of time, but focus on the first dependence). But we know that in a perfect electric conductor the electric field is orthogonal to its surface, and this means that its surface is equipotential. This property is true in any situation (steady state or not), because the tangential electric field is always 0 on a perfect electric conductor, so E is orthogonal to the conductors' surface. But this seems to be in contrast with the fact that the voltage depends on position. What is the resolution? AI: Remember that when we defined the electrostatic potential difference (aka "voltage"), $$V=-\int \vec{E}\cdot d\vec\ell,$$ we called it the electrostatic potential difference because it is only strictly valid in electrostatics. When we use this concept in AC circuits, we're using it as an approximation only (usually described as the lumped circuit approximation). In particular, in the presence of time-varying magnetic fields, we can't count on this \$V\$ being independent of the path over which we take the integral. In transmission lines, we are definitely dealing with time-varying magnetic fields, so we can't expect the electrostatic potential difference to be well defined. We define an approximate potential at a point along the transmission line as the negative integral of the electric field from one conductor to the other at that point.
But we can't expect to get the same result (because this isn't an electrostatics situation) if we take an integral from some point on the first conductor, lengthwise along the conductor (with contribution 0, because the material is p.e.c.), across the gap at another location, and then back along the second conductor (again contributing 0) to the point opposite where we started. Meaning, if we take the integral to calculate the "voltage" across the transmission line at position \$z=0\$, we can't expect this integral to be the same at \$z=z_1\$ just because there's no electric field within the conductors.
H: Why does the forward voltage drop in a diode vary slightly when there is a change in the diode current? I was going through Basic Electronics and Circuits by David A. Bell, and it mentions that the forward voltage drop in a diode varies slightly with a change in forward current, but no reason is given as to why this happens. AI: Diodes conduct a current at any voltage across them. It's a continuous curve. However, it's not a straight line as it would be for a resistor. Here are some voltage/current measurements I made a while back. Because we're usually interested in 'sensible' values of current, like 0.1 mA to 1 mA, we often model a diode as a fixed voltage drop. As you can see, over that range it doesn't change much, so it's a good engineering approximation. Notes: how lousy a 3 V zener is as a constant voltage reference, compared to all the other non-references; a 1N400x leaks less current at low voltage than a 1N4148, say for protecting your ±200 mV meter input with shunt diodes. Unfortunately, why is a question that, if you're not careful, can go down the rabbit hole of why, explanation, so why that explanation, deeper explanation, and so on. Ultimately, all explanations that don't ground in your intuition are what, not why. For instance, why don't we fall through the floor? If your intuition is that atoms are hard billiard balls that stick together, then the why is that stuff is made of atoms. If your intuition is that atoms are 99.999% empty space, then there's more explaining to do. Unfortunately, it's a bit tricky to intuit quantum mechanics, so if you're persistent, all whys are going to end up as 'well, QM says what', and why QM? It may be thought this answer doesn't address 'why?'. It does, because the answer is 'it does'. When we do the experiment, that's what happens. Then we construct theories to understand what the experiment is telling us about nature.
If we model the device as having bandgaps, then we gain some predictive power, and say that model is useful, or even (extrapolating wildly) right or true.
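The curve described above can be sketched with the Shockley diode equation; the saturation current and ideality factor below are made-up but typical values, so treat the numbers as illustrative only:

```python
import math

# Shockley model sketch: Vf = n*Vt*ln(If/Is + 1). Is and n are made-up but
# typical small-signal-diode values; room temperature (Vt ~ 25.85 mV) assumed.

n, vt, i_s = 1.0, 0.02585, 1e-12   # ideality, thermal voltage (V), saturation current (A)

def vf(i_f):
    return n * vt * math.log(i_f / i_s + 1.0)

for i_f in (0.1e-3, 1e-3, 10e-3):
    print(f"I = {i_f * 1e3:5.1f} mA -> Vf = {vf(i_f) * 1e3:.0f} mV")

# The current changes 100x across the table, yet Vf moves only slightly:
print(f"delta Vf per decade of current ~ {n * vt * math.log(10) * 1e3:.0f} mV")
```

This is exactly the behaviour the book describes: the exponential curve means the forward drop changes only a few tens of millivolts per decade of current, which is why the fixed-drop approximation works so well over a narrow current range.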
H: Why is the input impedance of an emitter follower defined as \$\Delta V_{B}/\Delta I_{B}\$, as opposed to \$V_{B}/I_{B}\$? In The Art of Electronics, chapter \$2\$: Transistors, page \$65\$, the author defined the input impedance \$r_{in}\$ of the emitter follower below as \$r_{in}=\Delta V_{B}/\Delta I_{B}\$. Why did he use variations of \$V_{B}\$ and \$I_{B}\$? I thought the input impedance was defined as the input voltage \$V_{in}\$ over the input current \$I_{in}\$; what am I missing? AI: A more basic answer is that the resistance of the transistor is non-linear, and can vary significantly. For resistors, in general the resistance doesn't change with voltage. More specifically, the slope of the V-I curve doesn't change, so V/I equals dV/dI. A transistor/diode, by contrast, has a dynamic resistance with a highly variable slope. A diode with a forward voltage of 100 mV might have a resistance of 1 MΩ, while at 1 V it might have a resistance of 10 Ω (random numbers chosen, but the general magnitudes should be relatively correct). The problem now is that the resistance is very non-linear, which we don't like for many reasons. Thus, a small-signal model is used, where the input voltage is changed a small amount, and the current is measured at 2 points. The ratio of this small change in voltage to the resulting change in current (dV/dI) is approximately the resistance at that biasing condition. In essence, resistors have a constant slope of voltage versus current, and voltage over current (the slope) is equal to resistance. Transistors have a changing slope, so the resistance must be approximated at a given point with dV/dI.
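The dV/dI idea can be illustrated numerically with a diode model; the parameter values below are assumptions, but the point is that the numeric slope at a bias point matches the familiar n·Vt/I small-signal formula:

```python
import math

# Small-signal (dynamic) resistance of a junction, r = dV/dI, estimated two
# ways: numerically from the Shockley curve and from the usual n*Vt/I formula.
# (Is and n are illustrative values; room temperature assumed.)

n, vt, i_s = 1.0, 0.02585, 1e-12

def v_of_i(i):
    return n * vt * math.log(i / i_s + 1.0)

i_bias = 1e-3
di = i_bias * 1e-4                  # small perturbation around the bias point
r_numeric = (v_of_i(i_bias + di) - v_of_i(i_bias - di)) / (2 * di)
r_analytic = n * vt / i_bias        # the familiar ~26 ohm at 1 mA

print(f"dV/dI numeric   ~ {r_numeric:.2f} ohm")
print(f"n*Vt/I analytic ~ {r_analytic:.2f} ohm")
```

Note that V/I at the same point would give about 536 mV / 1 mA ≈ 536 Ω, a very different number from the ~26 Ω small-signal value, which is exactly why the author uses the delta definition.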
H: Why does power remain the same in a transformer? If we have a step-up transformer, the voltage on the secondary side will be higher than the voltage on the primary side. Since POWER = VOLTAGE * CURRENT, and the secondary voltage is now higher than the primary voltage, wouldn't that make the power on the secondary side greater than the power on the primary side, according to P = VI? Then why is it said that power remains the same on both sides of the transformer when the voltage on both sides is not the same? AI: $$ P_{OUT} = P_{IN} - losses $$ Ignoring the losses: $$ P_{OUT} = P_{IN}$$ $$ V_{OUT}I_{OUT} = V_{IN}I_{IN}$$ Rearranging: $$ I_{OUT} = \frac {V_{IN}}{V_{OUT}}I_{IN}$$ So, for a step-up transformer where VOUT > VIN, the output current is less than the input current by the voltage step-up ratio. From the comments: "So if the voltage increases on the secondary side, shouldn't the current also increase on the secondary side? Doesn't the mathematical expression violate Ohm's law in this case?" simulate this circuit – Schematic created using CircuitLab Figure 1. Two scenarios. In (a) the current will be \$ \frac {V}{R} = \frac {120}{120} = 1 \ \text A \$. In (a) the power in R1 will be \$ P = VI = 120 \times 1 = 120 \ \text W \$. In (a) the primary current will be 1 A. If we use the same value resistor for R2 with double the voltage then we have situation (b). In (b) the current will be \$ \frac {V}{R} = \frac {240}{120} = 2 \ \text A \$. In (b) the power in R2 will be \$ P = VI = 240 \times 2 = 480 \ \text W \$. In (b) the primary power will have to be at least 480 W. In (b) the primary current will have to be \$ I = \frac {P}{V} = \frac {480}{120} = 4 \ \text A \$. The main thing you need to realise from the answers to your question is that Ohm's law applies to the resistors R1 and R2, and not to the transformers! In (b) the primary current will have to be 4 A ?
How will the primary side manage to make the primary current that high? Transformers transform voltages and currents, but this has the effect of transforming impedances too. impedance /ɪmˈpiːd(ə)ns/ noun, the effective resistance of an electric circuit or component to alternating current, arising from the combined effects of ohmic resistance and reactance. So you can think of an impedance as resistance to AC. The impedance seen looking into the primary of your step-up transformer will be \$ \frac {1}{n^2}Z_L \$, where n is the voltage step-up ratio. So with R2 = 120 Ω the mains supply will see a load of \$ \frac {1}{2^2}120 = 30 \ \Omega \$. Therefore the current will be 4 A. Note that if R2 is removed and the transformer is open-circuit, no current will be drawn by the primary. (In real transformers there will be a little magnetizing current.)
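The scenario-(b) arithmetic above can be reproduced in a few lines (ideal, lossless transformer assumed):

```python
# Scenario (b) spelled out: 120 V mains, 1:2 step-up, 120 ohm load,
# assuming an ideal lossless transformer.

v_pri, n = 120.0, 2.0          # primary voltage, step-up ratio
r_load   = 120.0

v_sec = v_pri * n              # secondary voltage: 240 V
i_sec = v_sec / r_load         # Ohm's law applies to the resistor: 2 A
p     = v_sec * i_sec          # 480 W in the load
i_pri = p / v_pri              # power is conserved, so the primary draws 4 A
z_in  = r_load / n**2          # impedance seen by the mains: 30 ohm

print(i_sec, p, i_pri, z_in)
```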
H: 1000 µF capacitor to protect a 5 V 30 A circuit I wish to build an LED cube but have no experience with shift registers, so I am building one with addressable LEDs that can draw 60 mA each. I want to make an 8×8×8 cube, so that would be 512 LEDs and a possible 30.72 A of current, or above 150 W of power. These WS2811 "NeoPixels" are very sensitive to voltage changes and require a 1000 µF capacitor to protect them, according to the NeoPixel guide. Is it safe to put a regular capacitor across the power rails to even out voltage spikes on such a high-current circuit? AI: No, it's not necessarily safe, because capacitors appear as a very low impedance when uncharged and first powered up. That initial current surge can be too high for your wiring or power supply to handle. Things can overheat or fail, or your power supply can get stuck in a startup loop where the protective fail-safe continually kicks in. The simplest way to reduce this current surge is to place an NTC in series with the capacitor and load. It has high resistance when cool and lower resistance when warmer, so it is basically a resistor that somewhat removes itself from the circuit as you run it. That said, I don't see how these are any more sensitive to overvoltage than any other LED. How are you planning on powering them? More effort should be put into building a better regulator for them than slapping a massive decoupling capacitor across the rails.
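To see why the uncharged capacitor is a concern, a rough inrush estimate; every resistance value here is an assumption chosen only to show the order of magnitude and the effect of the NTC:

```python
# Rough inrush estimate for a large uncharged capacitor (all numbers assumed):
# at switch-on the cap looks like a short, so the peak surge is set only by
# the series resistance in the path (wiring + ESR, or the same plus a cold NTC).

v_supply      = 5.0
r_series_bare = 0.05   # guessed wiring resistance + capacitor ESR, ohms
r_ntc_cold    = 2.0    # guessed cold resistance of an inrush-limiting NTC

i_peak_bare = v_supply / r_series_bare
i_peak_ntc  = v_supply / (r_series_bare + r_ntc_cold)

print(f"without NTC: ~{i_peak_bare:.0f} A peak")
print(f"with NTC:    ~{i_peak_ntc:.1f} A peak")
```

Even with generous guesses, the bare surge is far beyond what a 20 A supply expects, while the cold NTC tames it to a couple of amps.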
H: Will this setup trip RCD or cause any issue? I want to mix a function gen’s output with a 50Hz mains signal which is stepped down by a 24:1 transformer as follows: Is this setup safe? A scope will monitor the composite signal. AI: Live and neutral are completely isolated from the measurement circuit. What comes in on the live will return on the neutral so there will be no RCD current imbalance and it will not trip.
H: Orientation of loop antenna (receiving mode) I have done some measurements through a loop antenna similar to that shown in the following picture (the information included does not refer to my antenna; the picture is used only to show its geometry): My professor told me that this antenna is sensitive to the electric field orthogonal to the surface of the loop. I have the following questions: Is this true for any loop antenna, or does it depend on the specific model I use? How can I work out which direction of the electric field is "caught" by an antenna? I thought of using the radiation pattern, but then realized that it refers to how the antenna captures power, whereas I want information about the electric field. AI: Since EM waves are transverse to the line of sight and never longitudinal, it is the effective aperture geometry of the antenna, as viewed relative to \$\lambda /2\$, that permits matching the impedance and transferring RF power to a load. Thus when the antenna is rotated by 90 degrees so that its aperture lies in line with the line of sight, you can expect a null in the aperture and in the signal, and you will only get fringe signals from reflections, or nothing. This also applies to a dipole. Staggered dipoles in parallel form a Yagi antenna. The geometric ratio of the conductor l and the gap g of the return current determines the cable impedance (e.g. 200 vs 75 vs 50 Ω), as does the Dk of the material between them. If signals arrive broadside to the antenna, the radial EM fields can induce a longitudinal current.
H: Why do large power stations' generators rotate at ≥ 1800 RPM? Some time ago, over some beers with a fellow EE who's much more experienced than me, we were discussing the unique challenges present in utility-scale power generators, like the ones in nuclear power plants. A single generator can be in excess of 1000MWe, and has all the more exotic stuff like hydrogen cooling, etc. In passing, my friend told me At least they don't have to spin at 3000 RPM, they have many poles on the stator to reduce the need for rotor speed... Which I thought was a good solution to an engineering problem. It seems, though, that they don't actually use it often. When reading about the Chernobyl disaster, I found that the RBMK reactors, and indeed even the modern VVER varieties, use generators spun at 3000 RPM. Excerpt from Wikipedia: The turbine and the generator rotors are mounted on the same shaft; the combined weight of the rotors is almost 200 t (220 short tons) and their nominal rotational speed is 3000 rpm. Also, from the Chernobyl NPP page: The Kharkiv turbine plant later developed a new version of the turbine, K-500-65/3000-2, in an attempt to reduce use of valuable metal. The Chernobyl plant was equipped with both types of turbines; block 4 had the newer ones. The newer turbines, however, turned out to be more sensitive to their operating parameters, and their bearings had frequent problems with vibrations. Indeed, on the night of the Chernobyl disaster there actually were two tests being done about the same time: the famous turbine-rundown test we all know about, and a quite obscure vibration measurement test (it's not widely described, but search for "vibration" here if interested). Another data point: the only time I went on an inside tour of a working NPP (a VVER design) I noticed that one of their instruments in the control room was sometimes dipping into the redline area, and when I asked, we were told it's the vibrations indicator. 
So all in all it seems that vibrations can be quite the hassle, and one has to wonder why they don't utilize the half- or one-third-speed approach, which would allow 1500 or 1000 RPM on 50 Hz grids. It's evidently done on 60 Hz grids; for example, the Bruce NPP's generator runs at 1800 RPM, and GE offers both half-speed and full-speed generators (e.g. see here, pg. 22 and 23). Question So the technology to use lower generator speeds is there, but why is it only used for halving the speed, why not divide further? What are the downsides of lower speeds? The question may not be appropriate for this StackExchange, as the reason may be a purely materials/mechanical issue (lower RPM needs higher torque, ...). I'm interested in whether there are EE reasons that make the higher RPM desirable. AI: Steam turbines are more efficient at higher speeds. If we are talking about hydro turbines, this approach works, and at a hydropower plant you can find a generator with 48 pole pairs running at 62.5 RPM.
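The pole-count arithmetic behind these speeds is just the synchronous-speed formula N = 60·f/p (with p counted in pole pairs), which reproduces every figure mentioned in the question and answer:

```python
# Synchronous speed vs pole-pair count, N = 60*f/p in RPM.

def sync_rpm(freq_hz, pole_pairs):
    return 60.0 * freq_hz / pole_pairs

print(sync_rpm(50, 1))    # 2-pole turbo-generator on a 50 Hz grid: 3000 RPM
print(sync_rpm(50, 2))    # half-speed machine on 50 Hz: 1500 RPM
print(sync_rpm(60, 2))    # half-speed machine on 60 Hz, e.g. Bruce NPP: 1800 RPM
print(sync_rpm(50, 48))   # slow hydro unit with 48 pole pairs: 62.5 RPM
```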
H: What is name of semi-cut headers in KiCAD? I'm looking for a "semi-cut headers" footprint in KiCAD. Saying "semi-cut header" I mean a footprint which you can use either for soldering traditional pins or solder your board on top of other board. Like on this image: The questions are: What is the proper name for them? Thanks to '@alex.forencich' and '@Tony Stewart Sunnyskyguy EE75' these parts are called castellated edges, castellation, castellation terminal,castellated mounting holes, castellated vias or plated half-holes. I also found good instructions on the dimensions requirements of those here. Are there any footprints in KiCad for that (and where to find them)? No, they don't exist as ready-to-go solutions! I did find only general design suggestions here Thanks! AI: These are "castellated edges" https://github.com/coddingtonbear/kicad-castellated-breakouts https://docs.oshpark.com/tips+tricks/castellation/ There may be better resources.
H: What would be the output voltage of the full-wave rectifier when the center tap is removed? This is the circuit I have been analyzing. My intention is to calculate the output voltage at the resistance RL when the center tap is removed. With the center-tap connection, the output voltage VRL(DC) is calculated as given below. When the center tap is removed, my understanding is that the secondary output voltage will be a sine wave with peak voltage = 1.414 * 12 V = 16.968 V. During the positive half-cycle diode D3 will conduct, and during the negative half-cycle D2 will conduct, so each output would be the equivalent of a half-wave rectifier. The output (VRL(DC)) voltage (DVM reading) would be 16.968 V - 0.7 V = 16.268 V. (Note: the reason I take the peak voltage as the output voltage is the large output capacitance (470 µF); that is why I am not using the average-voltage equation Vp/π.) Can you please review the output voltage calculation when the center tap is removed? AI: The centre tap is a low-impedance source coupled to the low-impedance primary. The load resistors are fairly high impedance. When the caps and load resistors are equal, there is no difference. To keep the voltages balanced with a mismatched load, the source impedance must be low. We call this mismatch from loads "load regulation error", where the error is due to the voltage-divider percentage Rs/(Rs+Rl), provided one knows the equivalent source and load resistances for the transformer and diodes.
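The questioner's no-centre-tap arithmetic checks out under the stated assumptions (capacitor charges to the peak at light load, one 0.7 V diode drop per conducting path):

```python
import math

# The no-centre-tap calculation from the question: peak of a 12 V RMS sine
# minus one diode drop, assuming the 470 uF capacitor holds the peak.

v_rms, v_diode = 12.0, 0.7
v_peak = math.sqrt(2) * v_rms
v_out  = v_peak - v_diode

print(f"V_peak ~ {v_peak:.3f} V")   # ~16.97 V, as in the question
print(f"V_RL   ~ {v_out:.3f} V")    # ~16.27 V
```

Under heavier load the capacitor will droop between the (now half-wave) charging pulses, so the DVM would read somewhat less than this light-load figure.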