H: Adaptability of DALI Click Board for DALI Power Supply 18V DC, 250mA I am developing a DALI Control Device using an STM32 board. I have already programmed the board and got some basic results; now I want to test my device with industry-standard DALI Control Gear. For that, I need to connect my board to the DALI Bus Power Supply as shown in the figure below. A while back I bought a couple of DALI Click boards for prototyping purposes. The DALI Click board has the TPL181 opto-coupler, which has an operating range of 5-48 V supply voltage, 16 mA forward current, 1 mA collector current and 3.75 kV isolation voltage. Now I am unsure whether I can use this DALI Click board as the DALI PHY board. Initially, I will be testing and re-programming my device with very few DALI slaves (maybe up to 10), and according to the standard, each DALI slave can draw just 2 mA. So I am guessing that even if I use 10 DALI devices I will be drawing 20 mA from the DALI Bus Power Supply, which is the maximum forward current of the TPL181, and my STM32 board or DALI Click board will be in no danger. The DALI Click Board has the following design. My questions are: can I use this DALI Click Board as a DALI PHY without causing any damage? And will each DALI Control Gear (LED drivers, lights etc.) draw just 2 mA from the DALI Power Supply; is my guess above correct? EDIT: The figure shows that the DALI Power Supply is 18VDC, 200mA but it is actually 18V DC, 250mA! It was a mistake! AI: There shouldn't be any reason for damage to occur if this is connected correctly. The DALI PSU is current limited, so it will not be damaged even if the DALI line is short-circuited. The DALI Click board is bipolar thanks to the bridge rectifier, so polarity doesn't matter. The DALI TX and RX signals are pulled to Vcc on the DALI Click board, so you need to connect the Vcc of that board to your STM32 board's Vcc. The opto-coupler current maximum shouldn't come into it.
The receiver side opto LED is in a current limiting circuit that has been designed for the DALI supply voltage range. The transmitter side opto transistor won't see the DALI PSU current because it will flow through the FET when it is on, not through the opto transistor.
H: BJT Voltage Divider Bias Circuit problem Let's say we have the following circuit: The known values are: \$\beta=100\$, \$Vcc=2.5V\$, \$V_A=\infty\$ (meaning the Early effect is not taken into account), \$V_{BE}=0.7V\$, \$I_S=8\cdot10^{-16}A\$ (which is the reverse saturation current of the base–emitter junction), \$Rc=1k\Omega\$, \$R_E=400\Omega\$, \$R_1=13k\Omega\$, \$R_2=12k\Omega\$. If we assume that the transistor is operating in active mode, and the Early effect is not taken into account, we can calculate the collector current by: $$I_C=I_S\cdot e^\frac{V_{BE}}{V_T}$$ where \$V_T\$ is the thermal voltage of approximately \$26mV\$. All the values are known and we get: $$I_C\approx0.394mA$$ However, let's say we won't use that formula and we won't assume the transistor is in active mode. We can find the Thevenin equivalent of the base voltage as: $$Et=\frac{R_2}{R_1+R_2}Vcc=1.2V$$ $$Rt=R_1||R_2=6.24k\Omega$$ Now the equivalent circuit looks like this: If we apply Kirchhoff's voltage law to the left contour, we get: $$Et-Rt\cdot I_B-V_{BE}-Re\cdot I_E=0$$ Since \$I_B=\frac{I_C}{\beta}\$ and \$I_E=\frac{\beta+1}{\beta}I_C\$, we get: $$Et-Rt\frac{I_C}{\beta}-V_{BE}-Re \frac{\beta+1}{\beta}I_C=0$$ $$I_C=\beta\frac{Et-V_{BE}}{Rt+Re(1+\beta)}\approx 1.07mA$$ As we can see, we get different results. Where did I make a mistake in my calculations? AI: Consider the equation that uses the reverse saturation current $$I_C=I_S\cdot e^\frac{V_{BE}}{V_T}$$ And then consider what happens for a slight difference in Vbe:

- If Vbe = 0.70 volts then Ic = 0.394 mA
- If Vbe = 0.68 volts then Ic = 0.183 mA
- If Vbe = 0.72 volts then Ic = 0.851 mA

So choose Vbe carefully or you'll be a mile out. If you chose Vbe to be 0.72597 volts, Ic would equal 1.07 mA and match your 2nd derivation. It's easy to get fixated on what you believe are correct numbers. As for the 2nd derivation using Kirchhoff's laws, there is an omission that is relevant and will slightly lower the collector current.
The omission I refer to is called \$r_E\$ and is the internal emitter resistance. In simple terms it equals \$\dfrac{V_T}{I_C}\$ Or about 26 ohms for an \$I_C\$ of about 1 mA and is fairly significant when you consider that \$R_E\$ is only 400 ohms. So, it's best not to get too fixated on some formulae. I'd be interested to know what a sim produces.
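To see how touchy the exponential model is, the two derivations can be evaluated side by side; a quick Python sketch using the values from the question:

```python
import math

IS, VT, BETA = 8e-16, 0.026, 100        # values from the question
Et, Rt, RE = 1.2, 6.24e3, 400           # Thevenin equivalent, emitter resistor

def ic_shockley(vbe):
    """Collector current from the exponential (Shockley-type) model."""
    return IS * math.exp(vbe / VT)

def ic_kvl(vbe):
    """Collector current from KVL around the base loop."""
    return BETA * (Et - vbe) / (Rt + RE * (1 + BETA))

for vbe in (0.68, 0.70, 0.72):
    print(f"Vbe={vbe:.2f} V -> Ic={ic_shockley(vbe)*1e3:.3f} mA")
# A +/-20 mV change in Vbe roughly halves or doubles Ic in the
# exponential model, while the KVL estimate barely moves:
print(f"KVL estimate: Ic={ic_kvl(0.70)*1e3:.2f} mA")
```

This reproduces the 0.183/0.394/0.851 mA spread quoted in the answer and the 1.07 mA KVL result.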
H: Can I use a threaded rod on a Wimshurst machine? On my Wimshurst machine, which is built similarly to (but not the same as) http://steampunkworkshop.com/how-build-wimshurst-influence-machine-part-1, I'm using a threaded rod coming out of the Leyden jar, which my collecting comb and spark-gap rod connect to. I'm worried that a threaded rod will leak away too much charge, which is why you have to put a ball over all points in the first place. I'm not sure if a threaded rod will act in the same way or if I should be fine. If I can use the threaded rod listed below, let me know! AI: Every sharp edge will leak. A threaded rod consists entirely of sharp edges. It will leak horribly, and you will have a hard time building up a charge in your Leyden jars. I have built a Wimshurst machine. You don't want any leakage. You will have enough cranking to do even when everything is optimal. Also, don't be surprised if it takes a while before you get your first spark. You have to crank it long enough to charge the Leyden jars up to the needed voltage before a spark will jump the gap. Wood is NOT an insulator. Don't depend on it as an insulator for anything that is supposed to be at high voltage. Wood looks rather more like a short circuit given the high voltages and low currents that a Wimshurst machine generates.
H: How arrays work in assembly language I am not able to understand how arrays work in assembly language. In Irvine's book on Assembly Language it's written:

A string can be divided between multiple lines without having to supply a label for each line:

greeting1 BYTE "Welcome to the Encryption Demo program "
          BYTE "created by Kip Irvine.",0dh,0ah
          BYTE "If you wish to modify this program, please "
          BYTE "send me a copy.",0dh,0ah,0

The hexadecimal codes 0Dh and 0Ah are alternately called CR/LF (carriage-return line-feed) or end-of-line characters. When written to standard output, they move the cursor to the left column of the line following the current line.

Does it create two arrays (I assume the first one is greeting1 and ends at "created by Kip Irvine"), and if yes, what's the name of the variable that references the second array? Or is there something that I don't understand here? AI: Your asm code fills a series of bytes with the written texts and other characters. The result has no other structure; it's up to your program whether this memory area is treated as an array. The beginning of the memory area has a symbolic address, the name greeting1. That is handy when you write actual code which reads or writes this memory area, and it is why assembly language is used. If you have a decent program development environment, you can in theory use the address greeting1 also in a higher-level programming language, without needing to know its numeric value.
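To see that the four BYTE lines assemble into one contiguous block with a single label, the same layout can be mimicked in Python (just an illustration; the assembler does this with raw bytes):

```python
# The assembler simply concatenates the data from consecutive BYTE
# directives into one contiguous block; only greeting1 names its start.
greeting1 = (b"Welcome to the Encryption Demo program "
             b"created by Kip Irvine.\r\n"
             b"If you wish to modify this program, please "
             b"send me a copy.\r\n\x00")

# There is no second array: the CR/LF bytes (0Dh, 0Ah) are ordinary
# data inside the single block, and the trailing 0 terminates it.
print(len(greeting1))
print(b"\r\n" in greeting1)
```

The second string has no name of its own; it is simply the data at greeting1 plus some offset.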
H: Question about subthreshold voltage and driving capability Recently, I read some papers, and some of them said: to reduce the power consumption of VLSI, the supply voltage has been decreased to the subthreshold voltage; subthreshold circuits consume very little power, but suffer from speed degradation due to the low current-driving capability of the transistors, and a low-threshold-voltage transistor and a large-size transistor can improve the driving capability. My questions are: 1. What is subthreshold voltage? I know the threshold voltage; are they the same? 2. What is a subthreshold circuit? Can anyone give me an example? 3. Why can a low-threshold-voltage transistor and a large-size transistor improve the driving capability? Especially the large size; can it be seen from some formula? AI: 1) Have you read this Wikipedia article? The subthreshold region means using the MOSFET in weak inversion (as opposed to strong inversion, as you'd normally use) by using a \$V_{gs}\$ slightly less than the value of the MOSFET's threshold voltage \$V_t\$. Normally we say that a MOSFET will not conduct when \$V_{gs} < V_t\$, but that's not entirely true: it does conduct, but only very small currents (compared to the currents when \$V_{gs} > V_t\$) can flow. So yes, subthreshold is related to the MOSFET's threshold voltage. 2) The circuits are often the same as we're used to, but running at much less current. So for example an NMOS of W/L = 10um/1um with an Ids = 100 nA would be close to working in subthreshold or weak inversion. You could also use a more "normal" current like 10 uA flowing through a very wide MOSFET (like W = 1000um), which would then also be working in subthreshold or weak inversion mode. It is the current density (drain current divided by the width of the MOSFET: Id / W) that actually matters. A low current density results in a low \$V_{gs}\$ value. 3) Yes, a larger transistor (more precisely: a wider transistor) can of course carry more current.
But making the transistor wider also increases the gate capacitance, so it will require more current to drive it. Regarding the formulas for the subthreshold region: consult almost any book about CMOS circuit design. In general, subthreshold circuits are only needed when you need extremely low power consumption (a few uA or less) and you do not need a fast circuit; 1 MHz might already be quite fast for a subthreshold circuit. In practice, in 25 years of designing CMOS circuits I have never deliberately designed a circuit to work in subthreshold. I did design very low power circuits, and they might actually have transistors in them working in subthreshold or close to it. But when designing circuits I just make the currents what I want them to be, and I do not bother with whether transistors are in subthreshold or not.
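Regarding question 3, the standard weak-inversion current expression makes the width dependence explicit: \$I_D \propto \frac{W}{L}\,e^{(V_{gs}-V_{th})/(n V_T)}\$. A rough Python sketch (all parameter values here are assumed, purely for illustration):

```python
import math

VT_THERMAL = 0.026   # thermal voltage kT/q at room temperature, in V
N = 1.5              # subthreshold slope factor (assumed)
I0 = 1e-7            # process-dependent prefactor in A (assumed)
VTH = 0.5            # threshold voltage in V (assumed)

def id_subthreshold(w_over_l, vgs):
    """Weak-inversion drain current: proportional to W/L and
    exponential in (Vgs - Vth)."""
    return I0 * w_over_l * math.exp((vgs - VTH) / (N * VT_THERMAL))

# A 10x wider device carries 10x the current (better drive strength)...
print(id_subthreshold(10, 0.45) / id_subthreshold(1, 0.45))
# ...and lowering Vth acts like raising Vgs, again increasing the current:
print(id_subthreshold(1, 0.45) / id_subthreshold(1, 0.40))
```

The linear W dependence is why a large (wide) transistor drives more current, and the exponential term is why a lower-Vth device does too.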
H: Pump Automatic Pressure Control Electronic Switch What is the name of this small part? AI: It looks like a Reed Switch - a pair of contacts that open or close in the presence of a magnetic field. You'll need to select one that is capable of handling the current requirements for your circuit, which is impossible to determine without knowing more about its function. You'll also need to determine whether the contacts should be normally open or closed, i.e. whether the magnetic field opens or closes the switch.
H: Why am I getting my result as a string of Zs in Quartus? I am new to Quartus, and have been trying to test my 32-bit ALU on Quartus 13.1. When I try the functional simulation, I get a string of Zs. The results for the individual components, like the fullAdder, display fine. What can I do to view the result (inout) in a hex representation? Edit: There was an issue with the Quartus 13.1 edition. It works perfectly fine on the 16.1 edition. AI: Without posting your HDL code, it's difficult to determine what exactly the problem is. However, something outputting Zs in simulation likely indicates that the output hasn't been assigned. For instance, when simulating this module:

module simple(input a, b, output o);
  wire o_internal;
  assign o_internal = a ^ b;
endmodule // simple

o has the value Z. So you probably want to go over your ALU and make sure that you are actually assigning to your output signal.
H: Calculate transfer function H(w) of Schmitt trigger with RC at inverting input I have the following Schmitt trigger circuit already implemented on a breadboard. As seen, the input signal at the non-inverting and inverting terminals of the op amp is the same, but with the addition of an RC circuit at the inverting terminal. In this way, I am comparing the input signal with a delayed version of the same signal, such that I can identify peaks. I got the idea from here. simulate this circuit – Schematic created using CircuitLab I wanted to obtain its transfer function so as to analyse more carefully the influence of the RC network (and ultimately, the circuit itself). I obtained an expression for \$H(w)\$, created some code in Matlab to plot the transfer characteristic (magnitude and phase) of the circuit, and also implemented the circuit in the TI-TINA simulator. Using TINA's AC analysis, I also plotted the transfer characteristic... and my Matlab results differ from TINA's results. Here are the results from Matlab: And here are the results from TI-TINA: For this reason I wanted to ask whether the following set of equations to obtain \$H(w)\$ is correct. The voltage \$V_{+}\$ is: \begin{equation} V_{+} = V_{in}\dfrac{R_{f}}{R_{1} + R_{f}} + V_{o}\dfrac{R_{1}}{R_{1} + R_{f}} \end{equation} Considering admittances, \$V_{-}\$ is \$V_{-} = V_{in}\dfrac{G_{2}}{G_{2} + sC_{1}}\$. Putting \$s = jw\$ and \$R_{2} = 1/G_{2}\$, we have: \begin{equation} V_{-} = V_{in}\dfrac{1}{1 + jwC_{1}R_{2}} \end{equation} As \$V_{+}=V_{-}\$, the following expression can be obtained: \begin{equation} V_{o} = V_{in}\dfrac{1-jwC_{1}R_{2}R_{f}/R_{1}}{1 + jwC_{1}R_{2}} \end{equation} Therefore, \begin{equation} H(w) = \dfrac{V_{o}}{V_{in}}=\dfrac{1-jwC_{1}R_{2}R_{f}/R_{1}}{1 + jwC_{1}R_{2}} \end{equation} Any insights? I would appreciate your help. Many thanks.
If useful, the code I've used in Matlab is:

%--- We fix Rf and R2 and vary C1
R1 = 10; Rf = 10e6; R2 = 10e3;
C1 = [1 10 100 1000] .* 1e-9;

opts = bodeoptions;
opts.Title.String = '';
opts.Title.FontSize = 12;
opts.Xlabel.FontSize = 12;
opts.Ylabel.FontSize = 12;
opts.TickLabel.FontSize = 10;
opts.FreqUnits = 'Hz';
opts.Grid = 'on';

H = [];
for i = 1:length(C1)
    H = [H tf([-C1(i)*R2*Rf/R1, 1], [C1(i)*R2, 1])];
end

figure
hold on
for i = 1:length(C1)
    bodeplot(H(i), opts);
end

legend_t = cell(length(C1), 1);
for i = 1:length(C1)
    legend_t{i} = sprintf('C=%snF', num2str(C1(i) * 1e9));
end
legend(legend_t);
title({'Transfer function of Inverting Schmitt Trigger', ...
    sprintf('Rf=%dMOhm, R1=%dOhm, R2=%dkOhm', Rf/1e6, R1, R2/1e3)});

AI: As already said by others, it's not useful to apply linear frequency analysis to this heavily non-linear circuit. Do time-domain simulations with different input signals to see the behaviour. A qualitative analysis of one probably interesting case: if the time constant R2 * C1 is so long that it virtually kills all variation of Vin, you have only the average of Vin at the inverting input of the opamp. This can make the circuit useful as a level detector with a self-adjusting threshold. If Vin has long periods of "no change" and the idle Vin is far from the average Vin of the active periods, you get errors when the changes start, because there's the wrong average stored in C1. BTW, your Rf and R1 give only about 3 uV of hysteresis. I bet it's useless for anything practical. You should have a bigger R1 or a smaller Rf, or both. The hysteresis = (R1/Rf) * output peak-to-peak swing.
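As a sanity check of the derived \$H(w)\$ itself (setting aside, as noted, that the comparator never operates linearly), its limiting values can be evaluated numerically; a small Python sketch with the Matlab script's component values (C1 = 100 nF case):

```python
R1, Rf, R2, C1 = 10.0, 10e6, 10e3, 100e-9

def H_mag(w):
    """Magnitude of the derived transfer function
    H(w) = (1 - jw*C1*R2*Rf/R1) / (1 + jw*C1*R2)."""
    num = complex(1, -w * C1 * R2 * Rf / R1)
    den = complex(1, w * C1 * R2)
    return abs(num / den)

print(H_mag(1e-6))   # ~1: unity gain well below the numerator corner
print(H_mag(1e9))    # ~Rf/R1 = 1e6 far above 1/(R2*C1)

# Hysteresis from the positive-feedback divider, assuming roughly a
# 6 V peak-to-peak output swing (assumed value):
print(R1 / Rf * 6.0)   # a few microvolts, as the answer points out
```

The huge high-frequency gain (Rf/R1 = 1e6) and the microvolt-level hysteresis both fall straight out of the component values.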
H: Optical audio transmitter can't handle speech This is a question relating to my project on an optical audio transmitter. I wired up a very simple device in which I use an audio signal transformer (EI14) to modulate the intensity of a laser beam (a cheap 650nm, 5mW diode) according to the audio output of my phone. The laser is then received by a photoresistor (G5528 A205) wired up to the microphone socket of my laptop. With this setup I can transmit audio between the two devices. I then upgraded the setup, replacing the photoresistor with a phototransistor in order to improve the response time to changes in the intensity of the light (12us rather than 20ms). The quality has improved significantly; it's still not great, and I never expected it to be, as there's much to improve. But there is one particularly baffling property of this setup. It transmits audio just fine, but it can't handle speech. Not that it transmits it badly; it's more that it doesn't transmit it at all. In this recording, the audio is captured, but the vocal that should be in full swing after around 50s is just not there. If you listen really carefully and know what to look out for, there is the slightest remnant of the original vocal. I thought that it was drowned out by the music and tried recording a voice message and transmitting that instead. Same result: the receiver sees nothing; the received amplitude sketch won't even twitch. Same with an audio book recording. Any ideas as to how such a discrepancy between music and speech could arise? AI: It seems to me that you are probably using a music source that is stereo and, instead of it delivering a mono signal as you expected (L plus R), the connection of the transformer to the L and R channels results in L minus R.
Given that vocals and bass are usually equally mixed left and right in a stereo mix, the transformer connection you have used will only pass signals that are either in the left channel or the right channel hence, bass and vocal are largely ignored.
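The L-minus-R cancellation of centre-mixed content is easy to demonstrate numerically; a small Python sketch (the frequencies and panning are made up for illustration):

```python
import math

# A "vocal" mixed equally into both channels, plus a tone panned hard
# left; one millisecond of audio sampled at 48 kHz (values illustrative).
N = 48
t = [i / 48000.0 for i in range(N)]
vocal = [math.sin(2 * math.pi * 1000 * x) for x in t]
panned = [0.5 * math.sin(2 * math.pi * 3000 * x) for x in t]

left = [v + p for v, p in zip(vocal, panned)]
right = vocal[:]          # the vocal appears identically in both channels

# What a transformer wired across L and R (instead of L+R mono) passes:
transmitted = [l - r for l, r in zip(left, right)]
# The centre-mixed vocal cancels; only the panned tone gets through.
print(max(abs(tx - p) for tx, p in zip(transmitted, panned)))
```

Anything mixed dead-centre (vocals, bass) subtracts to essentially zero, which matches the "vocal simply isn't there" symptom.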
H: Opamp inverter with bias voltage on non-inverting input I was reading the datasheet of the TDC1000, an ultrasound front-end IC manufactured by Texas Instruments, and had a doubt regarding the biasing scheme (to VCOM) they use for the amplifiers in the Rx path. The power supply for this IC is +5V (VDD) referred to 0V (GND). It can be seen that the non-inverting terminal of the LNA is connected to VCOM, which is simply a DC voltage equal to VDD/2. What is the purpose of biasing it to VCOM? I derived the expression for the output voltage Vo of the LNA as Vo = Vcom*(1 + Rf/Rin) - Vi*(Rf/Rin), where Vi is the input source voltage. If Vdd = 5V and thus Vcom = Vdd/2 = 2.5V, and Rf/Rin = 9, Vi and Vo vary as shown below:

Vi(V)   Vo(V)
2.2     5.2
2.3     4.3
2.4     3.4
2.5     2.5
2.6     1.6
2.7     0.7
2.8    -0.2

Thus the output varies within the range of its power supply voltage (0 to +5V) when the input varies from 2.2 to 2.8V (centered at 2.5V with a 0.3V swing on either side). If Vi is outside this small range, Vo will saturate. Since the input signal to this IC is an ultrasound crystal which generates only a few hundred mV (-200 to +200mV), how does this configuration work? What am I missing? AI: IMO, your equation is wrong, because it assumes that Rf and Rin are connected to Vref as well. But the opamp is depicted as an example, not a real circuit. It can be a different style/method of op amp that just adds some bias to the amplified signal. The gain is 10, and the amplified signal is added on top of the VCOM potential. At least that's what I think they want to depict in that block diagram.
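For reference, the question's table does follow from the question's own (disputed) equation; a quick numeric check in Python:

```python
VDD = 5.0
VCOM = VDD / 2
GAIN = 9            # Rf/Rin, from the question

def vo(vi):
    """Output of the questioned inverting-stage model:
    Vo = Vcom*(1 + Rf/Rin) - Vi*(Rf/Rin)."""
    return VCOM * (1 + GAIN) - vi * GAIN

for vi in (2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8):
    print(f"Vi={vi:.1f} V -> Vo={vo(vi):+.1f} V")
# Vi = 2.5 V sits at Vo = 2.5 V, and a +/-0.3 V input swing already
# reaches the 0-5 V rails: exactly the asker's observation that only a
# small input range centred on VCOM fits through this model.
```

A few-hundred-mV transducer signal riding on VCOM (rather than on ground) stays well inside that window, which is the point of the VCOM bias.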
H: How to wire a dual output power converter I am trying to wire an XP Power IZ series dual output power converter. What is the meaning of the pins? Is -V Input the ground from the power source? What about Common? AI: Is -V Input the ground from the power source? What about Common? -V is the negative side of the power source. It would be connected to ground in most designs. (But the -V doesn't need to be ground, strictly speaking. If the +V input were connected to ground, and the -V input were connected to a negative voltage, the converter would work just as well.) Common is the point against which the outputs are referenced. Typically, it would be connected to ground on the other side of the isolation barrier. Notice that the ground symbols are different on the input and output sides. -V and Common don't necessarily have to be connected together, because this is an isolated DC-DC converter.
H: Powering 12 V device from 5/9/12 V powerbank I recently bought the Litionite Falcon powerbank, which has one of its outputs designed for 5/9/12 V. The 12 V is at 1.5 A, which I thought would be enough for my 12 V/0.8 A camera. However, connecting the camera to the charger with a USB to 0.7 mm jack doesn't work. Indeed, I can measure only some 5.3 V coming through the USB cable, which explains why the camera doesn't recognise the power. How do I get the 12 V from the powerbank to charge my camera? Do I need a special USB cable? Do I need one with a step-up? Or is it enough to wire the USB differently? AI: Your Litionite Falcon power bank uses the proprietary Qualcomm Quick Charge 3.0 technology. To enable 9 or 12 V, your device has to provide a certain sequence of voltage levels on the D+ and D- data lines, aka a "handshake". Since your camera doesn't do that, you need to make a device, between your camera and the powerbank, that generates the D+/D- signals. The details of the QC protocols are not publicly disclosed, and only occasional information is available on how to conduct the handshake. The most comprehensive details were eventually published in US patent application US2014122909. If you can read the awkward patent language, you can start there. Alternatively, there are certain ICs that support the protocol, namely the AP4370 by Diodes, the NCP4371 by ONSemi, and the CHY103 by Power Integrations. So some bits of information about the actual protocol have leaked. For example, the Texas Instruments PMP9773 Reference Guide describes the protocol as follows. According to the description in the CHY100 datasheet, the processes to enter QC2.0 are:

- Apply a voltage between 0.325 V and 2 V to D+ for at least 1.25 seconds
- Discharge the D- voltage below 0.325 V for at least 1 ms while keeping the D+ voltage above 0.325 V
- Apply the voltage levels in Table 3 to set the output voltage (and keep the D+ voltage above 0.325 V)

The table of DC voltages that you need to set on the D+/D- wires looks like this: To get an idea how the QC protocol has evolved to version 3.0, the following presentation can help. Version 3.0 introduces a pulsing protocol; each pulse can decrement or increment VBUS by 200 mV. So it is up to you which way to experiment. Given your 12V@0.8A camera requirement, you would probably be better off with a 5-to-12V booster from eBay.
H: WS2812B data resistor I use an ESP32 with a 3.3 V to 5 V level shifter to send the data to the data line of a 150-LED strip (WS2812B). It is recommended to use a 300-500 ohm resistor between the controller's output pin and the strip. But the problem is, if I add it, the LEDs don't turn on, and if I remove it, the LEDs turn on as expected. Can I safely remove the resistor? I'm not sure if the level shifter is enough or is adding extra resistance, although I don't see an additional voltage drop. AI: It would be better to leave it in; it can prevent damage to the driving circuit if the ground bounces below zero. A lower resistance is better than nothing, but don't cut it too close. If you have long wires running to the LEDs, put the resistor at the LED end.
H: Charging Li-Ion 4S with TP4056 (DC isolation) I want to power a "suitcase boombox" amp using 4 Li-Ion 18650's. To charge these batteries I'm thinking of the TP4056, because they're small, cheap and have nice charging features. From what I have read so far, even though at first glance having multiple TP4056's with their own DC-DC isolation seems weird, it's allegedly not a bad way to mitigate some of the battery-balancing downsides. Two questions: Are there any pitfalls of this approach, apart from the DC-DC isolation (see next question)? As discussed on the internet and here on EE, to charge the batteries using multiple TP4056's, one needs isolation. Commonly mentioned is the isolated 0505S DC-DC converter. The big downside is that it's only rated for 1 W. To get the most out of the charging process you'd want more like 5 W. I'm not finding any cheap/small/simple DC-DC buck converters, but I do see some AC/DC 220VAC>5VDC converters that claim to have isolation, such as this one: https://www.aliexpress.com/item/AC-DC-5V-700mA-3-5W-Power-Supply-Buck-Converter-Step-Down-Module-for-Arduino/32451069599.html (or will this burn down the house?) Do these AC-DC converters provide the required isolation (and how to tell)? AI: From the pictures of the circuit board, that looks like it has "functional isolation", but the parts on the underside seem a little close together for me to say that it could have "safety isolation" too. Also, that green film capacitor looks like the wrong type for its location. So, the batteries probably won't explode, but it might electrocute you.
H: Decoupling capacitors for 7400 type chips I am currently designing a circuit involving several 7400 TTL series ICs, including SSI and MSI chips, so I may need to include several decoupling capacitors in the circuit design. Documentation on decoupling capacitors for a group of 7400 series chips is difficult to find, so I don't have a good idea of how many chips should share a 0.1uF ceramic capacitor. I need to know how many SSI, MSI and LSI chips of the (THT) 7400 series should share a single capacitor. How should I place the capacitors for them to be most effective? Would the CMOS 4000 series share the same properties? AI: You can just group the 7400 series with the 74LS00 and 74S00 series. They are all TTL circuits, with the 74S00 being the fastest, but also having heavy current consumption. Those ICs were upgraded to the 74F00 series (F means FAST) in the late 1980s, which could be clocked at 120 MHz (74F190 counters). They were still TTL as far as transitions from high to low on inputs and outputs. They were all 5 volt logic, and the maximum motherboard speed was about 50 MHz for several years. It was common practice to install a .1 uF capacitor as close to the ground and Vcc pins as possible, as these ICs made a lot of noise due to their totem-pole outputs. As the 1980s came along, the CMOS CD4000 series came to market with a working voltage of 3 to 12 volts. In some cases they could take 15 volts on Vcc. They were created with battery-powered devices in mind, consuming only 3 uA at 3 volts with no clock, or a stopped clock. They still exist today in the most commonly used versions, such as the CD4013 flip-flop and CD4066 quad analog switch. In every case engineers played it safe and installed .01 uF capacitors for the CD4000 series, plus a short hop to a larger 10 uF capacitor, as the CD4000 series did not cause large current spikes when changing states.
Along comes the late 1990s, and suddenly we have 3.3 volt logic that is CMOS, yet the manufacturers suggest using .1 uF capacitors at the IC body. Then came major changes, all related to faster speeds. The 1.35 volt common return and PECL logic made from CMOS suddenly upped speeds 100 times. Now we had GHz CPUs, and the need for .1 uF capacitors remains even today. For analog circuits, an additional 4.7 uF and sometimes a 10 ohm to 33 ohm resistor on the Vcc pin gives extra quiet performance. Today's motherboards may have hundreds of .1 uF capacitors, especially to AC couple the 1.35 volt return bus for GHz logic and the CPUs. 7400 series or not, don't skimp on such crucial and cheap parts that help all but guarantee a working and quiet board. Use the .1 uF on ALL Vcc and Vee pins, at the IC body if possible.
H: Push-pull amp in class A: what does it mean? Take the following sample schematic. This is only the output stage of a real power amplifier. It's clear that when it works in class B, each BJT amplifies only one half of the input signal. But what does it mean if it works in class A? I know that current always flows through a BJT if it works in class A. And if the signal is small, but enough to open the BJTs, both the negative and positive signal swings will be amplified, until the signal level reaches some value above which only part of the signal is amplified by the "N" or "P" stage accordingly. OK, if both halves of a small signal are amplified by both parts of the power amplifier, how are they added at the load resistance? They must have opposite directions and must differ from each other, and 0 must appear at the load. Here is the trick that I can't understand. Any suggestions? AI: simulate this circuit – Schematic created using CircuitLab A class A amplifier also biases the transistor, just like class AB. When the transistor is biased, it resembles the load resistor. This is similar to the resistor string to the right. At the bias point, with no input, Q1 is adjusted to match R1. A class A amplifier will invert the signal, but because of the biasing, any signal, even a small one, will operate the amplifier.
H: DC voltage in collector after active load I am trying to analyse the DC Q bias point of the following circuits with a current source/mirror involved. I do not know how to calculate by hand the Q2 and Q5/Q4 collector voltages. All the currents and the remaining voltages are easy to get. Just some thoughts: Are Q5 and Q2 saturated? In that case, why? Why cannot Q4 be saturated? Could cascading another stage could put them in active state? Circuits: I used a LTSpice simulation in order to get some intuition, but still I am confused, as I am not even able to find any relation between the differential pair and the single transistor circuit. Note: I assume, when doing calculations by hand, VBE = 0.7, VCE=0.2 when saturated, beta = 100 and all transistors are matched. AI: A few things to get out of the way, first. Spice programs treat identical parts, identically. If you put two BJTs of the same part number in a circuit that depends upon them behaving exactly the same, the circuit will work perfectly in Spice. But when you built it with real, discrete parts which are never really the same, you'll find out just how badly the circuit actually behaves. A current mirror is a classic example of where spice will provide simulations that work a lot better in the simulation than they do in practice. Spice runs the program using a fixed temperature for all of the parts. If a part would, in reality, heat up a lot and change its behavior substantially, you won't see that effect in the Spice simulation. It also won't simulate exploded parts. Or pretty much anything else where thermal effects are taking place. You can do end-runs around this at the expense of greatly complicating your schematic with a bunch of behavioral stuff you add. But it's a pain. You can, however, ask Spice to perform multiple simulations where everything changes in temperature at once. Whether or not that's helpful is another question. Garbage in equals garbage out. 
If you stick in a circuit that cannot really work in practice, sometimes Spice will seem to provide results that to the ignorant will look good. That doesn't mean it will ever work in practice. But Spice will happily process all those differential equations for you and solve simultaneous linear equations pretty much as good as any other program. So you will get numbers. But numbers don't mean reasonable numbers. Or that the circuit explores a useful idea. I just want to make sure that you understand a few of the limitations of using Spice on your circuit. The circuit on the left asks a completely worthless question. It says, If I use two absolutely perfectly matched BJTs in a differential pair topology, and supply them perfectly equal voltages at their bases, and use a current sink also made from perfectly matched BJTs now at their shared emitter, and provide no decent means for the circuit to function correctly, then what will happen? The answer is perfectly useless results. But we don't need to bother thinking hard on this one. To start, the differential pair should split the current sink's current exactly in half (since these are perfectly matched BJTs.) The bases of the differential pair BJTs should require perfectly matched and equal base currents and so the collectors should have perfectly matched and exactly equal collector currents, too. The current mirror BJTs also each require base currents. But all of that is being stolen from \$Q_1\$'s collector. So this means the remaining current for the collector of \$Q_3\$ will technically be reduced as compared to what is being asked of the collector of the perfectly matched \$Q_4\$ in the mirror. This means that \$Q_3\$'s \$V_{BE}\$ will be too small to drive \$Q_4\$'s collector current to a value that is sufficient to accept the collector current of \$Q_2\$ (which hasn't been reduced by the mirror's base current requirements.) So collector current of \$Q_2\$ is more than can be handled by \$Q_4\$. 
And that is a problem. So? The solution is to saturate poor \$Q_2\$. If you can push it into saturation enough, the base current increases for \$Q_2\$ and this subtracts a little from what remains for the collector. And that means that \$Q_4\$ can handle the collector current from \$Q_2\$, now. So \$Q_2\$ goes into saturation. Oh, well. That's life. The whole design is wrong-headed. For this to work, you need to provide an escape mechanism for the difference current and your schematic doesn't show such a wire going anywhere. So what do you discover? Well, with perfectly matched BJTs put in a situation where the only solution is to saturate one of the BJTs... then one of the BJTs winds up in saturation when Spice simulates it. Surprise? No. Spice took an impossible and unrealistic problem you fed it and found the only possible solution available to it: saturate \$Q_2\$. The right schematic also isn't hard. The base current is \$\frac{0\:\text{V}-V_{BE}-\left(-10\:\text{V}\right)}{1\:\text{k}\Omega}\approx 9.3\:\text{mA}\$. The collector current is \$2\:\text{mA}\$. The effective \$\beta=\frac{2\:\text{mA}}{9.3\:\text{mA}}\approx 0.215\$. Um... \$Q_5\$ is very saturated! Some answers. Yes. Both \$Q_2\$ and \$Q_5\$ are saturated. See above. Because in order to make \$Q_2\$ saturated in order to satisfy the requirements you imposed in your circuit, its collector has to be driven down close to its emitter. Which means, since the collectors of both \$Q_2\$ and \$Q_4\$ are wired together, that the collector of \$Q_4\$ is pulled down hard towards the emitter of \$Q_2\$... which happens to be far away from the emitter of \$Q_4\$ in this case. So \$Q_4\$ can't saturate here because \$Q_2\$ needs to be saturated, instead. And in this circuit, only one of them can be saturated. So guess what? It could. Depending on what circuit you hooked up and where you hooked it up.
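The saturation check for \$Q_5\$ in the right-hand schematic can be reproduced with a few lines of arithmetic. This is a sketch using the assumptions stated in the question (\$V_{BE} = 0.7\:\text{V}\$, a \$-10\:\text{V}\$ rail, a \$1\:\text{k}\Omega\$ base resistor, and the \$2\:\text{mA}\$ collector current):

```python
# Hand calculation from the answer, using the assumed values above.
V_BE = 0.7      # volts, assumed forward base-emitter drop
V_RAIL = -10.0  # volts, negative supply
R_B = 1e3       # ohms, base resistor
I_C = 2e-3      # amps, collector current set by the current source

I_B = (0.0 - V_BE - V_RAIL) / R_B   # base current, ~9.3 mA
beta_eff = I_C / I_B                # effective beta, ~0.215

# With beta = 100 assumed for the device, an effective beta of ~0.2 means
# the base is driven vastly harder than the collector needs:
# the transistor is deep in saturation.
print(f"I_B = {I_B*1e3:.1f} mA, effective beta = {beta_eff:.3f}")
```

An effective beta orders of magnitude below the device beta is a quick numerical test for saturation.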
H: Can we form any circuit using half adders only? (assuming we have as many half adders as we want) Using various combinations of half adders I get these outputs: {0,1,1,0},{0,0,0,1},{0,0,0,0},{0,1,1,1} I also know that we can form any gate using NAND gates, or using AND and NOT gates, but I am not getting any output of the form NOT, so should I assume that we can't form universal gates using half adders? AI: Sure you can, but why would you want to? A NAND gate is a universal gate, as you have agreed. Here's how to form a NAND gate using a half adder: simulate this circuit – Schematic created using CircuitLab That said, a NAND gate is the simplest two-input gate you can form, at least in NMOS/CMOS, so it seems like a rather backwards step...
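One way to check the claim by brute force is to model a half adder (sum = XOR, carry = AND) and build NAND from two of them. The wiring below is a standard construction and may differ in detail from the CircuitLab schematic in the answer, which isn't reproduced here:

```python
def half_adder(a, b):
    """Half adder on single bits: returns (sum, carry)."""
    return a ^ b, a & b

def nand(a, b):
    # The first half adder's carry output computes a AND b.
    _, c = half_adder(a, b)
    # A second half adder with one input tied to logic 1 inverts:
    # sum = c XOR 1 = NOT c, giving NOT(a AND b) = NAND.
    s, _ = half_adder(c, 1)
    return s

# Verify the full NAND truth table.
table = [((a, b), nand(a, b)) for a in (0, 1) for b in (0, 1)]
print(table)  # [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
```

Since NAND is universal, any Boolean function follows from there.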
H: Project development: control of a DC motor Is it feasible to control a DC motor as described in the following picture? What would be a possible application of this circuit? It is kind of tricky because the operational amplifier's current is feeding this machine, so I will either need a high-power op-amp or a low-power DC motor, but I can't see a real-life application of the latter. AI: The circuit is an analog regulator used to regulate DC motor speed. The amplifier is essentially a power amplifier like a power operational amplifier. It would usually be implemented using a controlled rectifier or a pulse-width modulation scheme. A DC motor is basically controlled by controlling the armature terminal voltage. However, regulating armature voltage does not exactly regulate the speed. Speed is proportional to the motor's counter EMF. Counter EMF is less than the terminal voltage by the amount of voltage drop across the armature resistance (IR drop). That is proportional to the armature current, which is proportional to torque. To prevent the motor speed from dropping when the torque increases, it is necessary to provide additional feedback proportional to the IR drop. That is called IR compensation. The Rs in the diagram is a shunt resistor that provides IR compensation feedback. The amplifier should have some kind of current-limiting circuit provided internally to limit the maximum current supplied during acceleration or mechanical overload.
H: Is the Diode RMS Reverse Voltage spec important in a DC circuit? Consider a 12Vdc automotive power supply design where +12V power connects first to a Vc=49.9V bidirectional TVS diode (e.g., TPSMB36CA) and subsequently connects to an SB370 Schottky diode. The SB370 is used for reverse polarity protection and was chosen over a 1N4004 for its lower forward voltage drop. A load dump spike could be 100V to 125V in some cars, but the TVS would clamp that spike to Vc=49.9V. The SB370 datasheet cites Vr=70V (DC blocking voltage) and Vr-rms (RMS Reverse Voltage) = 49V. The Vr=70V of the SB370 is higher than the Vc=49.9V of the TVS and therefore would appear to be safe, but what of the Vr-rms=49V spec (0.9V lower than Vc)? Does the Vr-rms spec not matter in this automotive DC circuit? Thank you. AI: When the data sheet talks about the SB370 reverse RMS voltage, a sine wave is assumed; hence the peak will be \$\sqrt2\$ higher than 49 volts, i.e. 69.3 volts, and this pretty much ties in with the peak repetitive reverse voltage statement of 70 volts. They mean one and the same thing for this diode. As regards the TPSMB36CA limiting at 49.9 volts, this is only true if you operate the device with some form of current limiting (to prevent an excessive current flow) to stop the device becoming damaged. You must respect figure 2 in the data sheet because it tells you that for a peak power of 609 watts (49.9 volts x 12.2 amps), the duration of that pulse cannot be much greater than 1 ms, and this sounds too low for a load dump scenario. You should also calculate what temperature rise will occur for over-voltage situations and note the temperature coefficient of breakdown voltage. It is 0.096 %/degC. In other words, if the device warmed by 100 degC during a transient, the final breakdown voltage could be up to nearly 10% higher.
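The two numerical checks in the answer are easy to reproduce. A sketch using the datasheet figures quoted above (the 100 degC rise is the answer's illustrative assumption):

```python
import math

# RMS-to-peak for the SB370 reverse voltage rating (sine wave assumed).
V_rms = 49.0
V_peak = V_rms * math.sqrt(2)   # ~69.3 V, matching the 70 V repetitive peak rating

# Breakdown-voltage drift of the TPSMB36CA with temperature.
tempco = 0.096 / 100            # 0.096 %/degC expressed as a fraction
delta_T = 100.0                 # assumed temperature rise during a transient
V_clamp_hot = 49.9 * (1 + tempco * delta_T)   # ~54.7 V, nearly 10 % higher

print(f"peak = {V_peak:.1f} V, hot clamp = {V_clamp_hot:.1f} V")
```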
H: MOSFET as a switch for DC motor The below image shows the circuit schematic: simulate this circuit – Schematic created using CircuitLab I wanted to check the turn-off time for the MOSFET switch while varying the pulldown resistance: resistance = 2 ohm, then resistance = 100 kohm. But in both cases there wasn't any noticeable difference in the turn-off time. Theoretically, increasing the resistance should increase the time constant (RC) and therefore increase the turn-off time. What do you guys think is happening here? MOSFET: IRF1405PbF (sorry, I couldn't upload the image showing the electrical characteristics of the MOSFET). AI: This is a pretty simple answer. Judging from the question posed by Dean Franks: Are you measuring things accurately enough to measure the difference between 1 ms and 11 ns for the 2 ohm resistor? And the answer by OP: No measuring instruments just the naked eye lol I think the answer is to get an oscilloscope or something similar that you can take measurements with. Of course there is no noticeable difference with the naked eye between 1 ms and 11 ns.
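The two time constants quoted in the comments can be estimated from the gate capacitance. The sketch below assumes an input capacitance of roughly 5.5 nF for the IRF1405; the effective gate capacitance varies with drain voltage, so treat this as an order-of-magnitude estimate:

```python
C_iss = 5.5e-9   # farads, assumed input capacitance of the IRF1405

for R in (2.0, 100e3):
    tau = R * C_iss   # gate discharge time constant
    print(f"R = {R:g} ohm -> tau = {tau:.3g} s")

# ~11 ns with 2 ohm and ~0.55 ms with 100 kohm: both are far too fast to
# see on a motor by eye, which is why an oscilloscope is needed.
```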
H: High Frequency Transformer Configuration for an Electronic Circuit Voltage Supply simulate this circuit – Schematic created using CircuitLab Dear Members, I could not find this question on this forum and want to have a few opinions in this regard. I want to choose a transformer for boosting around 10 V to 30 V in a flyback converter configuration and later form +15 V/-15 V. This transformer is used to generate voltage which is used to drive electronic components like op-amps, MOSFETs etc. The power rating will be around 3 W. My question is: what will be the difference if I take Transformer Configuration A with only one primary winding and one secondary winding, or if I take Transformer Configuration B with the connections shown, having one primary winding and two secondary windings joined together as one? Will there be any difference between both transformers regarding EMI performance if the same primary-to-secondary turns ratios and power ratings are taken? Which configuration has better performance regarding EMI and voltage stability over the other? Thanks a lot for sharing your experiences. AI: Below is an example of the use of a transformer with one primary winding and two secondary windings connected in parallel: - Yes, you can connect secondary windings like this providing they have the same number of turns and are wound on the same common magnetic core. I want to choose a transformer for boosting around 10V to 30V in a Flyback converter In your pictures, you have used dot notation on windings, so please note that flyback transformers are usually drawn with opposite dot notations as per my diagram above. It's not wrong what you have done (because you haven't shown output diodes) but it does help (it's less confusing) if you stick to conventions. Will there be any difference between both Transformers regarding EMI performance if the same turns ratios and power ratings are taken?
I don't see this as a problem - you parallel secondaries to improve their combined current delivery to the load. You could use a single secondary made from thicker wire of course, and this would perform nearly identically. Which configuration has better performance regarding EMI and voltage stability over the other? If both can deliver the load current and both have equal volt-drop performance (due to leakage inductance and resistance) then you won't see much difference. Of course, if you wanted to delve into skin-effect problems you might find that a single-wire secondary might need a tad more cross-sectional area of copper than two bifilar-wound coils, but I'm not sure your design (3 watts) warrants this depth of enquiry. I also came across this design (which uses parallel primary windings) that doesn't seem a million miles from what you are ultimately trying to achieve: -
H: Experimental mounting for high power LED I have bought this UV LED for experimental purposes. I do not want to go to the expense of creating a miniature PCB to mount it, but I obviously need to heatsink it. It is tiny - 3.3mm x 3.3mm - and can dissipate up to 3W (when properly mounted and cooled, I assume). Are there any shortcuts people can suggest? It is not (yet) for a production item and it only needs to work for a few hours' evaluation. AI: Why don't you use a small heatsink (less than 40x40mm): solder two solid-core wires to the two pins on the LED, make sure they don't short on the heatsink (with some insulation tape or something similar), and bend the wires down and around the fins of the heatsink. This might sound kind of... unprofessional, but it's something I've done when working with small one-watt LEDs. And you also said that it's only for a few hours of evaluation. The solid-core wires hold the LED in place really well. You could also add a bit of thermal paste to the LED to increase thermal conductivity. And also a small fan to keep temps low, if that is required.
H: Why does M4 turn off and Vout equal zero? From this picture, when \$V_{in} = V_{DD}\$, I know \$M_1\$ and \$M_3\$ will turn on and \$M_2\$ will turn off, because their \$V_{gs}\$ are \$V_{DD}\$ and 0. But why will \$M_4\$ turn off, with \$V_B=V_{DD}\$ and \$V_{out}=0\$? If I knew \$V_{out}=0\$ and \$V_B=V_{DD}\$ first, then of course I would know \$M_4\$ turns off. But if I don't yet know the values of \$V_B\$ and \$V_{out}\$, and neither does \$M_4\$, how do I know that \$M_4\$ turns off and that \$V_B=V_{DD}\$, \$V_{out}=0\$? AI: As the gate of M4 (which is a PMOS) is at \$V_{DD}\$, the only way to make M4 turn on is to take the bulk, drain or source connection of M4 at least \$V_t\$ above \$V_{DD}\$. So we would need \$V_{DD} + V_t\$ to be present somewhere in the circuit. This could only happen if capacitor \$C_B\$ were first charged (with \$V_A\$ = GND and \$V_B\$ = \$V_{DD}\$) and then, by making \$V_A = V_{DD}\$, there would for a short time be \$2V_{DD}\$ present at \$V_B\$. But \$C_B\$ discharges through M5, which is on (see Andy's answer), so the \$V_{GS}\$ of M4 must be zero, hence it is off.
H: Are Gerber files of inner layers interchangeable? Regarding the Gerber files of a 4-layer PCB with 2 inner layers: Top layer: Analog signals Inner layer 1: Digital signals (SPI and I2C) Inner layer 2: Power planes (+3.3V and +12V) Bottom layer: GND plane For better isolation between analog and digital signals, and to prevent digital noise from coupling back into analog signals, I would like to swap the two inner layers, so that there is a power layer between analog and digital. Unfortunately, my EDA software (KiCad) does not permit reordering the stack. This is why I'm wondering if it would be possible to simply rename the two Gerber files before sending them to the producer, so that the layers get swapped? My guess is that the layers are only connected through vias and PTHs for THT components. Am I missing something here that might break the PCB if I manually rename the files? AI: The Gerber files do not specify the order of layers. As long as you don't use blind or buried vias, the layers can be stacked in any order. The file names for the individual Gerber files may vary between different CAD systems, and may or may not imply the desired stack-up order. I always included a "readme" file with my PCB order specifying the desired stack-up order.
H: Is a hole in a conductor the same thing as a positive ion? The flow of current in a (semi-)conductor is often described as holes and/or electrons moving between atoms. In this context: Is an atom with a hole the same thing as a positive ion (cation), and is an atom with an electron the same thing as a negative ion (anion)? I already understand what cations and anions are. And I am not asking about current flow. I just want to know if the two questions in the above list are true or false. However you answer, kindly explain your reasoning (why is it yes, or why is it no)? I have already searched related questions here, but none of them seem to answer this. EDIT: I screwed up when I asked this! I just added the text in italics "an atom with" above, which is what I had in mind from the start. Sorry. AI: Is an atom with a hole the same thing as a positive ion (cation)? Yes, both are ways of referring to an atom with a net positive charge. But a conductor or semiconductor has many holes and is not usually thought of as having ions, since the holes (or excess electrons) are moving all the time and therefore no atom continues to be an ion for long; so if we are talking about holes, e.g. in semiconductor physics, I would be surprised to hear discussion of ions. (Contrast ionic compounds, which have distinct cations and anions, and are most commonly, but not always, non-conductive.) Is an atom with an electron the same thing as a negative ion (anion)? Well, most atoms always have some electrons; I assume you meant "with an excess electron", in which case my answer is the same as above — yes, that is the same thing, but it depends on context which term you should use.
H: Why do most FPV 5.8 GHz video transmitters use PAL or NTSC? I am not much into the details of video transmission in general. However, I wonder, since most displays and electronics are digital, why have suppliers of FPV equipment chosen an analog transmission system, which moreover is limited to 25/30 fps and low resolution? Would it be possible to enhance the resolution by simply choosing a different (digital) protocol without changing the channel number? AI: As others have said, PAL and NTSC are analog and have virtually no latency. This is extremely important when flying FPV drones at high speeds. Having latency will tell you where you've been, not where you actually are. I have a large video/photography drone (Yuneec Q500 4K). It transmits the video digitally on 5.8 GHz using the 802.11a protocol. (Just like your home Wi-Fi network.) I have measured the latency of the video feed at 282 milliseconds. This is because the video has to be converted from analog to digital, and then back again at the receiver. That may not sound like a lot, but it is when you are traveling at high speed and trying to avoid obstacles. Here is a quick chart that shows how 282 ms of latency affects where you think you are, compared to where you actually are.

Speed   Error    Speed   Error
[MPH]   [Feet]   [km/h]  [Meters]
----------------------------------
 10      4.15     16      1.26
 20      8.24     32      2.51
 30     12.39     48      3.78
 40     16.54     64      5.04
 50     20.63     80      6.29

Considering that some FPV drones can travel at 60+ MPH, you can see how much of an impact that video latency would have. As far as FPS goes, 30 for NTSC and 25 for PAL are standard frame rates and more than enough for a smooth picture. Higher resolution would also require larger/heavier hardware. When FPV racers are trying to shave fractions of a gram off of their drones, they will sacrifice resolution for a reduction in weight. I hope my explanation helps!
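The chart can be reproduced directly from the 282 ms figure. A sketch (the chart's values round slightly differently from the raw arithmetic):

```python
LATENCY = 0.282  # seconds of measured video latency

for mph in (10, 20, 30, 40, 50):
    feet_per_second = mph * 5280 / 3600        # convert MPH to ft/s
    error_ft = feet_per_second * LATENCY       # distance flown during the latency
    kmh = round(mph * 1.609344)                # nearest km/h, as in the chart
    error_m = kmh / 3.6 * LATENCY
    print(f"{mph} MPH -> {error_ft:.2f} ft   {kmh} km/h -> {error_m:.2f} m")
```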
H: Switch between 3V battery and 5V USB as power source I know there are several other threads with similar questions, but my requirements are different. So far I couldn't find a solution even though I read through several of these topics. My circuit is normally powered by a 3 V coin cell battery. I would like the circuit to be powered by USB as soon as a 5 V USB power source is connected to my circuit - to save power from the coin cell. Obviously I was looking at the well-known diode-OR setup, but in my case I cannot use diodes, because even Schottky diodes would have a voltage drop of around 0.3 V. That's too much when running on the coin cell. I cannot accept any voltage drop on the coin cell rail and max. 0.1 V drop on the 5 V USB rail. Then I tried to experiment with a P-channel MOSFET, but I'm not sure if my solution works out in reality. I'm just a maker, not an electronics guy :-( Attached is the schematic. Besides using a power mux IC (which would cause quiescent current), is there any other discrete solution with high energy efficiency I could use? Thanks! AI: One way of doing it, in case you have an op-amp, N-MOSFET (or NPN BJT) and P-MOSFET lying around, is something like this: Here's the link if you want to interact with it. The scaling is to get the values to safe voltages; not all op-amps can work outside of their voltage supplies, so using a voltage divider will help scale the input to safe intervals. If all the resistors in the voltage dividers had been 100 kΩ, then the op-amp would have to compare 1.5 V and 1.49 V; this means that any small noise or mismatch of the resistors would incorrectly shut off the 3 V supply. However, using an 80 kΩ resistor instead of a 100 kΩ gives some margin for error: now the op-amp has to compare 1.5 V and 1.33 V. And 1.5 V and 2.22 V when the 5 V is connected.
All in all it's just an op-amp acting as a comparator turning off the 3 V supply: if you are feeding it with anything above 3.4 V (because of the 80 kΩ resistor), then the 3 V supply gets shut off; otherwise the 3 V will supply the load. This means that if you connect, say, ground or any other voltage that is lower than 3 V to where the 5 V should go, then you've got yourself a transistor that will make some magic smoke. In case you've not noticed it yet, the P-MOSFET is upside down, and that is on purpose. If you do choose to make this circuit, make sure you connect the drain and source correctly. The diode in parallel with the P-MOSFET is not an actual component; it is the body diode, which is part of the P-MOSFET. The load, which is a resistor in this case, can contain the op-amp in the circuit above.
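The divider arithmetic in the answer can be checked in a few lines (resistor values as described: 100 kΩ dividers, with one bottom leg changed to 80 kΩ for margin):

```python
def divider(v_in, r_top, r_bot):
    """Output of an unloaded resistive divider."""
    return v_in * r_bot / (r_top + r_bot)

ref      = divider(3.0, 100e3, 100e3)  # 1.50 V reference from the coin cell
sense_3v = divider(3.0, 100e3, 80e3)   # ~1.33 V with only the battery present
sense_5v = divider(5.0, 100e3, 80e3)   # ~2.22 V with USB connected

# Supply voltage at which the comparator trips (sense crosses the reference):
threshold = ref * (100e3 + 80e3) / 80e3   # ~3.375 V, the "above 3.4 V" figure
print(ref, round(sense_3v, 2), round(sense_5v, 2), round(threshold, 3))
```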
H: Why is a common mode bias required for this differential amplifier? TI makes the ADS1118, which is an ADC with a programmable gain amplifier (PGA). In one of their application examples (page 4), they indicate that small differential input voltages (1-2 mV) must have a common mode bias. This seems unnecessary, since common mode voltage will be rejected by the differential amplifier. Why is a common-mode bias required? Will this work if the common-mode bias is removed? AI: The inputs need to be biased at the 'midpoint', between the supply voltage (Vdd, 3.3 V) and GND. The midpoint would be 1.65 volts. This places the signal into the middle of the output range of the diff amp. Otherwise, if the inputs to the DA were left floating, the output of the DA could ride along the top or bottom rail. The 1~2 mV signal is what is being generated by the thermocouples and has nothing to do with biasing. (Refer to the picture attached.) Biasing the inputs of the DA allows the signal to reside at the midpoint, which allows it to swing either positive or negative. (Center waveform.) If the inputs were left floating, the output signal would ride along the top or bottom rail and would not be able to swing high or low properly. (Top and bottom waveforms.) I hope that answers your question. Please ask and I will try to explain it in more detail if needed.
H: Understanding 1N5400 series datasheet I am using a 1N5408 diode in a rectifier circuit. The datasheet can be found here. I find myself in a very bad state trying to understand symbols and meanings while reading datasheets. Can anyone have a look and explain to me what the following terms signify, that is, what is meant by them? VRRM IF(AV) IFSM VF IR Also, can anyone confirm that the rectifiers can be used for converting 9V AC to 9V DC? AI: VRRM - Maximum repetitive peak reverse voltage. Can be applied repeatedly. This is often taken as the usual value of the maximum reverse voltage you'd expect the diode to be able to block, but as it's in the maximums section, it would be prudent to derate somewhat. IF(AV) - The average forward current. This governs the heating of the diode. That's why they specify a lead length and a temperature. IFSM - The short-term surge current. This is the sort of current you get into an uncharged capacitor when first switched on. It tends to be limited by the transformer impedance. It's very much larger than the average current; this is one robust diode. VF - Forward voltage. We normally say 0.7 V for silicon diodes, but this is specified at 3 A, so it includes the effects of residual resistance. IR - Reverse current, usually measured at maximum reverse voltage and maximum temperature, where leakage is worst. For some diodes, this can also be a significant source of diode heating.
H: How to find the required capacitor rating for a circuit? I am planning a 220V AC to 9V DC rectifier circuit. In order to provide a stable DC output, I've decided to put in a capacitor. But how do I find the rating of the capacitor I should use? In my circuit, the current flowing will be a maximum of 5 mA, and the power will not be greater than 50 mW. Keeping these in mind, can anyone suggest what capacitor I should use? AI: The choice of the capacitor value needs to fulfil a number of requirements. In the first case the value must be chosen so that its time constant is very much longer than the time interval between successive peaks of the rectified waveform: \$ R_{load} C >> \frac{1}{f}\$ where: \$R_{load}\$ = the overall resistance of the load for the supply \$C\$ = the value of the capacitor in farads \$f\$ = the line (supply) frequency - note that the ripple frequency will be twice the line frequency if a full wave rectifier is used. As there will always be some ripple on the output of a rectifier using a smoothing capacitor circuit, it is necessary to be able to estimate the approximate value. Over-specifying a capacitor too much will add extra cost, size and weight - under-specifying it will lead to poor performance. For cases where the ripple is small compared to the supply voltage - which is almost always the case - it is possible to calculate the ripple from a knowledge of the circuit conditions: Full wave rectifier: \$V_{ripple} = \frac{I_{load}}{2fC}\$ Half wave rectifier: \$V_{ripple} = \frac{I_{load}}{fC}\$ These equations provide more than sufficient accuracy. Although the capacitor discharge for a purely resistive load is exponential, the inaccuracy introduced by the linear approximation is very small for low values of ripple. It is also worth remembering that the input to a voltage regulator is not a purely resistive load but a constant current load.
Finally, the tolerances of electrolytic capacitors used for rectifier smoothing circuits are large - ±20% at the very best, and this will mask any inaccuracies introduced by the assumptions in the equations. More: http://www.radio-electronics.com/info/circuits/diode-rectifier/rectifier-filtering-smoothing-capacitor-circuits.php
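For the 5 mA load in the question, the full-wave formula gives the capacitor value directly. A sketch assuming 50 Hz mains and an arbitrarily chosen 0.5 V peak-to-peak ripple target:

```python
I_load = 5e-3     # amps, maximum load current from the question
f_line = 50.0     # hertz, assumed mains frequency (use 60 where applicable)
V_ripple = 0.5    # volts peak-to-peak, assumed design target

# Full-wave rectifier: V_ripple = I_load / (2 * f * C), solved for C.
C = I_load / (2 * f_line * V_ripple)
print(f"C = {C * 1e6:.0f} uF")  # -> 100 uF; round up to the next standard value
```

Given the ±20% (or worse) tolerance of electrolytics, round the result up generously.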
H: Measure duration of a non-periodic message on FPGA with VHDL I'm trying to measure the duration of a non-periodic signal (message) on an FPGA, as below: I want to measure the duration of the signal by counting the number of reference clock periods. I set the first rising edge as the start trigger, but how should I set the stop trigger? I have no idea how to do this; can you give me some advice? I want to implement this on an FPGA with VHDL. AI: Assumptions: A "message" is a collection of transitions, with some sizable gap between messages that contains no transitions. The clock period is smaller than the shortest gap between signal transitions. Here's a block diagram; I'll leave it to you to convert it to your favorite HDL. simulate this circuit – Schematic created using CircuitLab U1 is a two (or more) stage synchronizer that makes sure the input signal is synchronous to the clock. U2 is a synchronous edge detector that outputs a 1-clock pulse for every edge at the output of U1. U3 is the "timeout" counter. It gets reset on every edge, and the message is over when it overflows. U4 is a simple set-reset FF that keeps track of whether we're "in a message" (output is low) or "between messages" (output is high). It gets set when U3 overflows, and cleared whenever U2 detects an edge. U5 is the "duration" counter. It needs to have enough bits to count the duration of the longest message. It starts counting on the first edge detected, and keeps counting until U3 overflows. U6 is a simple N-bit register that captures the value of U5 for each detected edge. When U3 overflows, it contains the count for the last edge that occurred in the message. Each of these blocks is just a couple of lines of code in either VHDL or Verilog.
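Before writing the HDL, the block diagram's behavior can be prototyped in ordinary software. The sketch below is a clock-tick model of the same idea (edge detector, timeout counter, duration counter and capture register), not VHDL:

```python
def measure_duration(samples, timeout_ticks):
    """samples: signal level (0/1) at each clock tick.
    Returns the tick count between the first and last edge of the
    message, or None if no edge was ever seen."""
    prev = samples[0]
    counting = False      # U4: "in a message" flag
    duration = 0          # U5: duration counter
    latched = None        # U6: capture register
    idle = 0              # U3: timeout counter
    for s in samples[1:]:
        edge = (s != prev)        # U2: edge detector
        prev = s
        if edge:
            idle = 0
            if not counting:
                counting = True
                duration = 0
            latched = duration    # capture on every edge
        elif counting:
            idle += 1
            if idle >= timeout_ticks:
                return latched    # U3 overflow: message is over
        if counting:
            duration += 1
    return latched

# Edges at ticks 1, 4, 5 and 6; the message therefore spans 5 ticks.
print(measure_duration([0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0], 4))  # -> 5
```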
H: Finding the equivalent impedance of a circuit I've been given the above circuit and asked to find the equivalent impedance, however after drawing it half a dozen different ways I cannot find a way to simplify it. I can simulate it, and I get (what I think is the right answer) of -0.934 - j1.467 ohms, but I'd like to know if there's a way to do it manually, and if there's some useful trick that I'm missing out on here. Or maybe it's something simple! AI: As John D points out, you can do a delta-to-Y conversion. Or Y-to-delta. That is where you take a set of three resistors (or impedances) and convert them to three other values. His link points to the Wikipedia article where they use all resistors. But instead of R you can use a Z and it will still work. simulate this circuit – Schematic created using CircuitLab In the schematic above you e.g. take Z2, Z3 and Z5 (that is a Y-formation or T-formation) and make them into a delta Za, Zb, Zc. simulate this circuit I assume you can take it from there.
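The delta-to-Y transform itself works unchanged with complex impedances. A small sketch (the arm labels here are illustrative and not tied to the Z-numbering in the schematic):

```python
def delta_to_y(Z_a, Z_b, Z_c):
    """Convert delta branches (Z_a, Z_b, Z_c) to Y arms.
    Each Y arm is the product of the two adjacent delta branches
    divided by the sum of all three. Accepts complex values."""
    s = Z_a + Z_b + Z_c
    return Z_b * Z_c / s, Z_a * Z_c / s, Z_a * Z_b / s

# Sanity check: a symmetric delta of 3 ohm branches becomes a Y of 1 ohm arms.
print(delta_to_y(3, 3, 3))    # (1.0, 1.0, 1.0)
# Works the same with reactances:
print(delta_to_y(3j, 3j, 3j)) # each arm is 1j
```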
H: How should I go about protecting a supercapacitor from reverse polarity? I have several supercapacitors in series, and they're quite old and have probably drifted from their original spec by quite a bit. Completely draining them in series will draw at least one of them below 0 V. How should I protect against this? I thought to use a Schottky diode, but the best one of those would still allow the caps to be drawn to -0.3 V. Also, many ideal-diode circuits need some power, but the goal here is to protect fully discharged caps. The ideal behavior I'm looking for is something that acts like an open circuit above 0 V, and looks like a dead short below (or just shy of) 0 V. AI: A -0.3 volts will not hurt any capacitor, even one rated for 5 VDC. Also, Schottky diodes are available in very high current modules if needed. A negative 3 volts may hurt them by confusing the chemicals in the dielectric, much the same as reverse-charging a battery. But there the similarity ends. Capacitors can be drained to zero volts for a long time, yet are ready to use again if needed. Supercaps have a higher leakage current when power is first applied, but with time the leakage current drops way down. There is no 'perfect' circuit to do what you want without an MPU to monitor the voltages and MOSFETs to act as a bypass diode with near-zero voltage drop. If you can keep the reverse voltage under -1.0 volts the capacitors should be fine. EDIT: Also something to consider is load balancing by placing a 1 megohm resistor across each capacitor. This ensures that they charge up equally and maintain an equal voltage across each one. The penalty is a leakage current of 1 uA per volt of charge on each capacitor. Addendum: Please see the following link for details of many types of super-capacitor. The small PCB-mount type generally have gold foil or carbon conductors, so do not worry about them going bad due to lack of use. Their only real enemy is over-voltage or a high reverse voltage.
https://en.wikipedia.org/wiki/Supercapacitor A statement from that article says: As of 2015, a CDC supercapacitor offered a specific energy of 10.1 Wh/kg, 3,500 F capacitance and over one million charge-discharge cycles. I do not think your supercaps are in danger from just sitting around, but watch out for lead corrosion due to moisture. Also, the PCB-mounted type can charge quickly but may supply only 10 mA to a short circuit. This is due to how electrons are 'trapped' inside it and 'migrate' along a long winding path to the pins. A cheap 1N5822 Schottky diode is plenty good enough, unless you want one with a lower voltage drop. For better charge balancing you can use 10 kΩ or even 1 kΩ resistors. Add a 4PDT relay powered by 3.3 or 5 VDC to engage the 1 kΩ resistors when charging. When not charging, the relay changes contacts to the 1 megohm resistors for maximum storage time.
H: How to wire a 5V 6-pin relay switching 110V AC @ ~35 W Update: Thanks for the help -- working circuit below! I am trying to turn on a 110V AC array of LEDs with a small 5V relay, with some explosive results!! :-) I am not doing something right, as I have successfully blown up two of my relays... Relay: HK4100F-DC5V-SHG Spec Sheet: https://img.ozdisan.com/ETicaret_Dosya/445413_4369639.pdf There are 6 pins; the middle two pins are the coil. When I apply 5V DC I can hear it clicking, which I think is expected operation. I attach the load (array of LEDs) on the left of the photo. I attach 110V AC from a wall socket on the right side. Case 1: a) No voltage to coil. b) Load and power source attached. c) Result is the LEDs are quite dim. When I applied DC to the coil the LEDs went out. I'm not exactly sure why my relays exploded on subsequent attempts. Could it be that I reversed wires on the load, or power source? Should this even make a difference with AC? Hoping this makes sense! Update: Working circuit below: AI: As I read the datasheet, the two terminals on the left are connected together, and are the common terminal (moving contact) of a single-pole, double-throw switch. The top right is the Normally Open (NO) contact - with the relay not energized, it is not connected to anything. The bottom right terminal is the Normally Closed (NC) contact. With the relay not energized it is connected to the two left terminals. When you energize the relay, the top right terminal will be connected to the two left terminals, and the bottom right terminal will have no connection. Given that description, your drawing doesn't make sense - you seem to be applying 110 VAC between the NO and NC terminals.
H: Do I have a true RMS meter? I bought an Excel XL830L multimeter with the understanding that it was an RMS voltmeter. I also have a UEI ElectroMate DM200. When I checked the output of my APC unit with both meters, the readings were about the same. Is there a way to tell if a meter is an RMS type? AI: Please see this link: http://www.testequipmentdepot.com/uei/digital-multimeters/dm200.htm The meter is a discontinued model also available from eBay. It is NOT a true RMS meter, so non-sine waves will give an ambiguous reading at best. Usually a non-RMS meter will see only the peak voltage present and not the true average. If you want an excellent true RMS DVM, stay with a known good manufacturer like Fluke. The Fluke 87 III series is $400 USD, but it is top dog for accuracy.
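A quick numeric illustration of why the distinction matters: many non-true-RMS DMMs are calibrated for sine waves (typically by scaling the rectified average by the sine form factor), so a square wave reads about 11% high. The sketch below assumes an average-responding meter; actual meter behavior varies by design:

```python
import math

FORM_FACTOR = math.pi / (2 * math.sqrt(2))   # ~1.111, RMS/mean ratio for a sine

A = 10.0                          # square-wave amplitude in volts
true_rms = A                      # RMS of a +/-A square wave is exactly A
rectified_mean = A                # mean of |v| for that square wave is also A
meter_reading = FORM_FACTOR * rectified_mean  # what a sine-calibrated meter shows

error_pct = (meter_reading / true_rms - 1) * 100
print(f"reads {meter_reading:.2f} V instead of {true_rms:.2f} V (+{error_pct:.1f} %)")
```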
H: How dangerous is a magnetron? I took apart a microwave, and when I saw the magnetron, I conveniently remembered that I had heard that magnetrons were dangerous. I decided to research this a bit further (I know, great timing) and I found that some magnetrons contain beryllium oxide, which is fatal if you breathe it in. I also read that it is dangerous in this way only if it's crushed, then inhaled. (It is also lethal if you ingest it, but I'm not planning on doing that). Since we stopped using that microwave, I haven't dropped it on the floor or anything like that, so does that mean that it is safe to handle? How could the magnetron become dangerous? What precautions should I take to make sure that I'm safe? AI: Some magnetrons use beryllium oxide as the "ceramic" looking insulators inside of the ring magnets on both the "Stem" and the "Antenna" ends. Reference the image below; the beryllium oxide parts are the pink items in the middle. They are totally inert if undisturbed. Image Source: Toshiba Hokuto Electronics Corporation - Magnetrons for Microwave Oven Not all magnetrons use that for the insulators, but it's virtually impossible to tell if they did, so you must assume they do. It has to get airborne to become dangerous. So just don't go crushing and snorting the ceramic dust and you will be fine. If you do happen to break one, don't use a vacuum cleaner; clean up with a damp rag and get ALL of the dust, then dispose of the rag while still wet by putting it in a plastic zip-lock bag. I take apart magnetrons from old microwaves that I get for free and harvest the magnets; they are cool and powerful. I then put that center assembly into a thick plastic zip-lock bag before disposing of it.
H: Power loss in an equally distributed parallel solenoid circuit I have 12 solenoids (12v rated) connected in parallel to two Darlington arrays (ULN2803A). Each solenoid has 88-ohm impedance. The power supply can provide a stable 12V with up to 2A current draw. (See below) The solenoids are controlled with 20ms impulses. When all solenoids are activated, the current draw does not exceed the maximum (2A) and voltage stays stable (slightly above 12V). All solenoids are connected with 0.5mm wires with up to 2-meter length. The problem: When each of the solenoids is activated, they perform as intended. However, when 6 are enabled at the same time, one of them does not work. As the solenoid works when activated on its own, there should not be mechanical issues and it is rather a lack of power to activate it. Why would it lack power if the operational voltage is supplied and the power supply maximum current draw is not reached? EDIT: The inputs to the Darlington IC are driven by 3.3V signals from an MCU. It is the same solenoid that fails. If 12 are driven at the same time, then multiple solenoids fail. All of them are relatively the same distance from GND. (When 6 are driven, #2 fails) EDIT2: After re-measuring voltages across the solenoids: When 12 are active - voltage across each is 10.8V When 6 are active - voltage across each is 11.1V When 1 is active - voltage across it is 11.4V The voltage supplied by the power supply is 12.3V simulate this circuit – Schematic created using CircuitLab AI: \$ I = \frac {V}{R} = \frac {12}{88} < 140 \ \mathrm {mA} \$ Six solenoids will draw \$ 6 \times 140 = 840 \ \mathrm {mA} \$. Figure 1. According to the ULN2003 datasheet you are operating within specifications, so that's not your problem. If we read on we find a possible clue here. Figure 2. As the total emitter current rises so does the input on-voltage.
I suspect that what is happening is that the internal chip common GND line voltage is rising with increasing current due to its resistance. As the GND voltage rises, so does the input voltage required to turn on the output transistor. Your 3.3 V control logic is just on the edge of working. Figure 3. The ULN2003's internals. I suspect that there is some kind of race going on with your circuit and that perhaps you are switching them on sequentially, even if very quickly. This could be due to your controller code or due to the internals of the ULN2003, and that's why I wondered if the same solenoid was affected. The ones furthest from the GND pin on the chip die would have the highest emitter voltage. simulate this circuit – Schematic created using CircuitLab Figure 4. Using an additional ULN2003 as an inverter. The option of Figure 4 uses the first ULN as an inverter with pull-up resistors to ensure a good drive voltage to the second driver. Since the first one is switching relatively low currents the input on-voltage will be less than 2.4 V (Figure 2) and it will switch reliably. The logic will be inverted, of course. One thing to look out for: since you are having this trouble with one input you may find that the others are barely switching on and that the Darlington transistors are not "hard-on". According to Table 6.6 you should see a maximum of 1 to 1.3 V on the outputs. If this gets higher than that I suspect that the chip will warm up and could get hot to touch.
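As a quick sanity check on the numbers above, the per-solenoid and total currents follow directly from Ohm's law with the values given in the question (12 V supply, 88-ohm coils):

```python
V_SUPPLY = 12.0  # V, nominal supply
R_COIL = 88.0    # ohm, resistance of each solenoid

def solenoid_current():
    # Ohm's law for one coil: I = V / R
    return V_SUPPLY / R_COIL

def total_draw(n_active):
    # Coils in parallel simply sum their currents
    return n_active * solenoid_current()

# one coil ~136 mA; six ~0.82 A; even all twelve ~1.64 A, inside the 2 A supply limit
```

So the supply really is not the limit here; the marginal 3.3 V input drive at higher total emitter currents is the more likely culprit.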
H: Why does one-hot state encoding have an advantage in decoding compared to binary state encoding? Before asking this question, I googled the advantages of one-hot state encoding compared to others such as binary and Gray state encoding. I could understand one-hot's advantages and disadvantages over other encoding schemes, such as constant Hamming distance (two), fast but requiring N flops, etc. Many articles mentioned that one-hot is fast and suits high-speed design, and the reason given is that it has a very small decoding time compared to Gray coding and binary coding. However, I couldn't understand why one-hot decoding is faster than the others. For example, when we decode the one-hot encoded state machine, it could be implemented by a case statement like below

parameter STATE_ONE   = 3'b001,
          STATE_TWO   = 3'b010,
          STATE_THREE = 3'b100;
parameter STATE_ONE_ID = 'd0,
          STATE_TWO_ID = 'd1,
          STATE_THR_ID = 'd2;
reg [2:0] states;

case (1'b1)
    states[STATE_ONE_ID]: begin end
    states[STATE_TWO_ID]: begin end
    ...
endcase

Also for the binary-encoded states we could use the code below for decoding

parameter STATE_ONE   = 2'b00,
          STATE_TWO   = 2'b01,
          STATE_THREE = 2'b10;

case (current_state)
    STATE_ONE:   begin end
    STATE_TWO:   begin end
    STATE_THREE: begin end
    ...
endcase

When we look at the RTL implementing the decoder for one-hot and binary, the two do not look very different. I would like to know why one-hot decoding is much faster than the binary case. Is the difference introduced when they are synthesized? AI: The thing to remember about Verilog behavioural code is that what may seem like "we could simply do" doesn't actually represent how simple or complex the resulting hardware is. Your case statement is describing the behaviour that you want, but the synthesis tools have to convert it into logic gates. There is no simple "case" logic gate. Both infer chains of multiplexers for each signal controlled within.
In the one-hot example, the logic checking the state can simply use the one-hot bits for the multiplexing. For the binary-encoded state, the logic has to include a load of decoder logic to convert the binary value to a one-hot value before it can do the multiplexing. TL;DR: The difference basically comes down to where the decoding logic is placed. In one-hot, the logic is placed before the state machine registers, which breaks up combinational paths (good). In binary, the logic is placed after the state machine registers, forming longer combinational paths (bad).
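A toy software model (Python here, purely illustrative; the real cost is in gates, not code) of why the branch-select logic differs: a one-hot branch test is a single bit test, while a binary branch test is an equality compare across every state bit, i.e. extra decode gates in front of the multiplexers:

```python
def onehot_branch_active(state, bit_index):
    # One-hot: each case item is just one register bit, a single wire into the mux select
    return bool(state & (1 << bit_index))

def binary_branch_active(state, code, width):
    # Binary: each case item needs an equality comparator over all `width` state bits
    # (that comparator is the decode logic the synthesizer must insert)
    return all(((state >> b) & 1) == ((code >> b) & 1) for b in range(width))
```

In hardware the one-hot test costs nothing extra, while each binary compare becomes an AND of per-bit XNORs sitting on the combinational path after the state registers.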
H: Why does an automatic Bluetooth connection fail after some time? I am not sure if this is the right place to ask my question. If not, please let me know a better place to ask it. The question: I have a phone with Bluetooth that is connected to a car, so you can use the phone hands-free. When entering the car with the phone and Bluetooth activated, the connection is made automatically. But after some time, this connection is no longer made automatically. You have to pair them again. As far as I know there has neither been a system upgrade on the phone nor a new app installed. I am not asking for a solution, but for the possible reasons why you have to re-pair the car with the phone after some time (it has already happened a couple of times, after some months or so). What could be the reason for that to happen? AI: This can be due to many things; without more information it is impossible to pinpoint where the problem is coming from. Some common problems might be: Charge up the device you're trying to pair. Some devices have smart power management that may turn off Bluetooth if the battery level is too low. This may cause the connection to terminate unexpectedly. Clear the Bluetooth cache (Android only). Sometimes apps will interfere with Bluetooth operation and clearing the cache can solve the problem. Go to Settings > Backup and restart > Reset network settings. Sometimes pairing of another device with your car's Bluetooth may lead to your original device being unpaired. Make sure that no one else is pairing their device with your car. Source: Here
H: Reducing power consumption of an Arduino module (MOSFET?) I'm trying to reduce power consumption on an MP3 module controlled by an Arduino. The MP3 module specs are: 3.2-5V (typical 4.2V) <1A peak current, 20mA on standby (which is too much on batteries) In my prototype, I use a simple diode to lower the 5V regulated from the Arduino by around 1V. I also have 1k resistors to adapt the logic level on serial communication. Basically my idea is to use a MOSFET as a switch, so that the Arduino can enable/disable the power supply to the MP3 module, either on the high side: Or on the low side: My questions are: Are the circuits above correct? What are some good options for widely available MOSFETs? IRL540? How do I pick the gate and pulldown resistor values? In low-side configuration, do I still need the TX/RX resistors? Should I add a ceramic capacitor to eliminate any noise from speakers/sudden changes in current, and what sort of value? AI: The first circuit will not work; you need a P-channel MOSFET to switch the top rail, with a pull-up. Note the logic inversion here: you pull the line low to turn on the power. simulate this circuit – Schematic created using CircuitLab You need to add some bulk capacitance to compensate for the diode, which will introduce some interesting noise on the power line. The gate resistor needs to be small; the pulldown/pull-up needs to be large, as shown in the schematic above. In low-side configuration, do I still need the TX/RX resistors? Yes. The resistors prevent the signals from trying to power the device. 1K may actually be on the small side. When you park the player you should also "park" those data lines in whichever state doesn't pass current to it. What are some good options for widely available MOSFETs? IRL540? Advising on devices is beyond the scope here, but you need a device with a gate threshold under 2.5V here and an on-resistance in the tens-of-milliohms range. If the player is drawing 1A, that resistance, and the power lost in the MOSFET, add up.
Should I add a ceramic capacitor to eliminate any noise from speakers/sudden change in current, and what sort of value? Now that is a whole other can of worms. It depends on how the module itself handles power on. You may find it internally sequences power up properly, or you may discover the speaker "pops" every time you turn the module on and off. If the latter is the case you may need to add and control another device to disconnect the speaker while you toggle the power to the player.
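On the on-resistance point above: the conduction loss in the pass MOSFET is simply I^2 * Rds(on), which is why tens of milliohms matter at a 1 A peak. A quick calculation:

```python
def mosfet_conduction_loss(i_load, rds_on):
    # Power dissipated in the MOSFET channel while fully on: P = I^2 * R
    return i_load ** 2 * rds_on

def switch_voltage_drop(i_load, rds_on):
    # Voltage lost across the switch, subtracted from the module's supply
    return i_load * rds_on

# At the 1 A peak: a 50 mOhm part loses 50 mW (a 50 mV drop),
# while a 0.5 Ohm part would lose 0.5 W and drop half a volt.
```

That drop also stacks on top of the ~1 V diode drop already in the circuit, so a low-Rds(on) part keeps the module inside its 3.2-5 V window.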
H: Is it possible to get a high efficiency buck-boost converter by switching between the two? Many buck-boost converters work by boosting to an intermediate voltage, then bucking to the desired voltage. What if instead, it had a separate buck and boost section in parallel, and switched between the two based on input voltage and desired output voltage. For example, if the converter is set to 15 volts, and it's being supplied with 12 volts, the buck side would be disabled and the boost side would be boosting the 12 volts to 15 volts. Likewise, if you set it for 5 volts, the boost side would be disabled, and the buck side would drop the voltage to 5 volts. Could this lead to high efficiency buck-boost converters? AI: Consider the most efficient type of buck controller; a synchronous type: - Then, in order to turn it off when the parallel boost section becomes active you need a MOSFET on the output like this: - Then consider that you need a couple of MOSFETs (or a MOSFET and diode) for the boost section with its own inductor and it should become clear that there is a simpler solution, namely the buck-boost bridge controller. Take the drawing above and add another MOSFET (shown in red): - Now compare that drawing with this: - And this one to understand how it works: - Explanation here and below is a real chip that does this: - So, in conclusion, if you begin the process of designing a parallel buck-boost, then if you are shrewd enough you'd notice that the best option is to make an H-bridge controller because it saves on a MOSFET (or diode) and an inductor and is at least as efficient as what you propose.
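For the ideal (lossless, continuous-conduction) case, the duty cycles and the mode selection the H-bridge controller performs can be sketched as follows (a simplification; real controllers also handle a transition region around Vin close to Vout):

```python
def buck_duty(v_in, v_out):
    # Ideal synchronous buck: Vout = D * Vin
    return v_out / v_in

def boost_duty(v_in, v_out):
    # Ideal boost: Vout = Vin / (1 - D)
    return 1.0 - v_in / v_out

def bridge_mode(v_in, v_out):
    # The four-switch buck-boost runs only the stage it needs
    return "buck" if v_out <= v_in else "boost"

# 12 V in, 15 V out -> boost at D = 0.2; 12 V in, 5 V out -> buck at D ~ 0.417
```

This is exactly the behaviour the question proposes, which is why the H-bridge topology achieves it without duplicating the inductor.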
H: Ideal operational amplifier resistor question My question is: since the input current is 0, the same current flows through R1 and R2, so can we treat them as resistors in series and add them together? If yes, what would be the expressions in terms of Ohm's law? Example of the situation: AI: Yes - you can (you must) consider both resistors in series if you are applying the superposition rule for calculating the closed-loop gain. Based on the assumption that the inverting opamp input has a potential Vn = 0, we have a simple voltage divider consisting of the series R1 and R2: Vn1 = Vin[R2/(R1+R2)], setting Vout = 0; Vn2 = Vout[R1/(R1+R2)], setting Vin = 0. Vn = Vn1 + Vn2 = 0, which gives Vout/Vin = -R2/R1. Comment: Note that the voltage divider rule was derived/formulated using Ohm's law and the assumption that the current through R1 and R2 is exactly the same.
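The superposition argument above can be checked numerically (ideal op-amp assumed; the component values are arbitrary examples):

```python
def inverting_node_voltage(v_in, v_out, r1, r2):
    # Superposition: take each source in turn with the other grounded
    vn1 = v_in * r2 / (r1 + r2)   # contribution of Vin with Vout = 0
    vn2 = v_out * r1 / (r1 + r2)  # contribution of Vout with Vin = 0
    return vn1 + vn2

def closed_loop_vout(v_in, r1, r2):
    # Setting Vn = 0 and solving gives the familiar inverting gain -R2/R1
    return -v_in * r2 / r1

# with R1 = 1k, R2 = 10k and Vin = 0.5 V: Vout = -5 V, and Vn comes out to 0
```

Plugging the computed Vout back into the node equation returning 0 confirms that the two divider terms cancel exactly, which is the whole point of the virtual-ground assumption.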
H: Rise of TRIAC A1-Gate potential under load after triggering During experiments with a TRIAC I noticed that apart from the triggering signal the A1-Gate voltage remains 0V under no-load conditions. Under load conditions the A1-Gate voltage remains elevated after triggering, up to the moment the TRIAC stops conducting. In the information found so far I see only details on the latching current, nothing on the above-mentioned effect. Why does this happen, or am I making an error in the measuring procedure? AI: Yes, the gate of a thyristor acts as a voltage source when the thyristor is conducting. The voltage is generally a bit higher than the gate trigger voltage and will be of the same polarity as MT2 for a triac. That means that the gate voltage can swing from (say) +1.3V for triggering to (say) -1.4V when it turns on in quadrant IV (and the reverse in quadrant II). Or it can barely change in quadrants I and III. In some cases, impedances connected to the gate of a thyristor can have noticeable effects on the commutation or holding characteristics. Thyristors designed to be reliably commutated directly from the gate were called GTO (Gate Turn Off) devices.
H: Reference ON/OFF voltage from microcontroller Raspberry Pi I am looking for the simplest way to produce reference voltages ON=1.0V and OFF=0.0V from the Raspberry Pi. I have learned that GPIO outputs are not very precise, and according to Exploring the 3.3V Power Rail, the 5V power rail is also not very stable. However the document suggests that the 3.3V power rail is fairly stable, so I constructed this simple circuit: Of course I would put a voltage divider at the end to get the required 1V. The required conditions are at most 1% voltage drift; output current can be small (to be supplied to another operational amplifier). Does this circuit make sense? Is the 3.3V power rail stable enough for this purpose? Can you propose another, not too complicated circuit, which would be even better? AI: You could do something like this: simulate this circuit – Schematic created using CircuitLab The ADR510 has an output voltage of 1.0V +/- 0.35% at room temperature. The left circuit will supply up to about 0.9mA with source resistance < 0.3\$\Omega\$ when on, but when it is off it will only be approximately zero (depending on the GPIO low output voltage) and will have a source resistance of 2.2K, so if any current flows into or out of it, the voltage can be far from zero. The right hand circuit actively clamps the output to ground when GPIO is low. The source of M1 should be connected to analog ground near U2. Choose the MOSFETs to operate reliably from 3.3V Vgs and for M1 to have low enough Rds(on) with 3.3V applied to meet your needs. It's not hard to do better than the 0.3 ohms. With dual P-N MOSFET array: simulate this circuit
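On sizing the series resistor that feeds a shunt reference like the ADR510 from the 3.3 V rail: it must carry the reference's minimum bias current plus whatever the load draws. A hedged sketch (the 1 mA bias target below is illustrative only; check the datasheet's minimum operating current):

```python
def shunt_reference_resistor(v_rail, v_ref, i_bias, i_load=0.0):
    # Series resistor from the rail to a shunt reference:
    # it carries the reference bias current plus the load current
    return (v_rail - v_ref) / (i_bias + i_load)

# e.g. 3.3 V rail, 1.0 V reference, 1 mA bias -> 2.3 kOhm
```

If the load current varies, the resistor must be sized for the worst case so the reference never starves below its minimum bias.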
H: Design of Arithmetic Section While going through the topic of ALU design, I came across the point, i.e., the Design of the Arithmetic Section. But what confused me is its purpose in the ALU. I'm thinking of it in terms of performing basic arithmetic operations. I have a picture here. I do understand what it is doing logically but failed to understand its internals. Illustration of the application. AI: The diagrams below are from a textbook called Computer Organization and Design: The Hardware Software Interface (ARM Edition) and this can be found in Appendix A. While it's not the best textbook on the market, there are some pretty good diagrams. These diagrams below will show you exactly what's inside an ALU. An ALU is typically designed with these logical and mathematical operations: AND, OR, ADD, SUBTRACT, SLT (which is a comparison operation, "Set if Less Than"), and NOR. This is your typical 1-bit ALU. Very simple. The oval shapes with the 0 and 1 represent a multiplexer that controls whether or not bits A or B are negated (by using the 2's complement). The oval shape with the 0, 1, 2, and 3 is another MUX that controls which mathematical or logical operation is performed and, lastly, that box with the "+" symbol indicates a full adder. There are other components like the "overflow detector" that does exactly what it sounds like. You also have the "set" flag from the SLT operation. This is just a simple 1-bit ALU. Now let's look at a 64-bit ALU. I know what you might be thinking: "Why do the numbers on a63 and b63 look different from the other numbers?" Well, it's because I had to photoshop some typos that were found in the textbook (told you that it's not the most ideal textbook on the market). But I digress... Now, as you can see, there are more operations! Now there's a zero detector with that giant OR gate at the end with an inverter. If you're basically adding 0+0, the OR gate at the end will send out a zero, then the inverter will make that signal a 1 to send out the zero flag.
SO TO CONCLUDE WITH YOUR QUESTION: You're right to assume that the ALU performs arithmetic operations and that's how computers can add, subtract, compare, etc. the particular data that is given by the user. You can also create your own operations like multiplication or division, but multiplication or division in binary is essentially recursive addition or subtraction. Hence, when you look at the assembly language, the multiplication and division operations are pseudo-instructions, meaning that they really use addition and subtraction instructions to perform the work, and thus typically take slightly longer to execute.
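The 1-bit cell described above chains directly into a multi-bit ALU. Here is a small Python model (my own sketch of an AND/OR/ADD/SUB datapath in the spirit of the textbook's diagrams, not the book's code) showing the full adder, the B-inverting step for subtraction, and the ripple-carry chain:

```python
def full_adder(a, b, cin):
    # One bit slice: sum and carry-out from two operand bits plus carry-in
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def alu(op, a, b, width=8):
    # Tiny ALU model: op 0 = AND, 1 = OR, 2 = ADD, 3 = SUB (two's complement)
    if op == 0:
        return a & b
    if op == 1:
        return a | b
    mask = (1 << width) - 1
    b_in = b if op == 2 else (~b & mask)  # the "Binvert" MUX for subtraction
    carry = 0 if op == 2 else 1           # carry-in of 1 completes two's complement
    result = 0
    for i in range(width):                # ripple-carry chain of 1-bit cells
        s, carry = full_adder((a >> i) & 1, (b_in >> i) & 1, carry)
        result |= s << i
    return result
```

For example, `alu(3, 9, 4)` subtracts by inverting B and injecting a carry of 1, exactly the trick the 0/1 MUX in the diagram performs.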
H: 10 W LED COB on 12 V So say I have a 10 W LED; most have a forward voltage of 9-11 V and a maximum current of 700 mA. If I wanted to connect it up to 13 V (a fully charged SLA battery), couldn't I simply do 13-11=2 (the LED drops 11 V and the rest is across the resistor), then 2/0.7 ≈ 2.86, meaning I would need a roughly 2.9 ohm resistor? And 2 * 0.7 = 1.4, so that means I would need to use a 1.5 W resistor (or greater). Is this all correct? I know it wouldn't be the most efficient but I'm checking to see if I'm understanding this correctly. A more efficient way would be using a current regulated/limited supply, right? AI: Correct. It may be more convenient to build it out of 0.5 Ohm 0.5 W resistors - multiple resistors in series will spread the load. Another reason to do this is that you can measure the actual current and insert another 0.5R to bring it down. Don't forget to heatsink the LED.
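The sizing math as a small calculation, including the worst case the question's own numbers imply: if this particular COB sits at the 9 V end of the quoted forward-voltage range, the same resistor passes far more than 700 mA.

```python
def series_resistor(v_supply, v_led, i_led):
    # The resistor drops whatever the LED doesn't: R = (Vs - Vf) / I
    return (v_supply - v_led) / i_led

def led_current(v_supply, v_led, r):
    # Actual current once R is fixed and Vf turns out different
    return (v_supply - v_led) / r

def resistor_power(v_supply, v_led, r):
    # P = V^2 / R across the resistor
    v_r = v_supply - v_led
    return v_r * v_r / r

r = series_resistor(13.0, 11.0, 0.7)   # ~2.86 ohm, dissipating ~1.4 W
i_low_vf = led_current(13.0, 9.0, r)   # if Vf is really 9 V: ~1.4 A, double the rating
```

That sensitivity to Vf is exactly why the current-regulated supply mentioned at the end of the question is the safer option.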
H: How to add external GPIO interrupts on STM32? I'm trying to configure STM32CubeMX for external GPIO interrupts; however, in the NVIC screen I don't see an EXTI... interrupt to be set. What should I do to be able to e.g. check for interrupts on GPIO pins PB14, PB13 and PB12? AI: Click on the pin you want to configure, then select GPIO_EXTI# in the dropdown menu; that should enable the EXTI line in the NVIC menu. On the other hand, CubeMX isn't perfect, and the interrupt on the pins you want might not be well implemented.
H: Transistors on a LED strip PCB I replicated the schematic of a cheap LED strip that I bought. Can someone explain to me what the transistors do in that circuit? It has an input voltage of 12 volts. AI: Here's the schematic drawn in a fashion that may be a little more understandable: simulate this circuit – Schematic created using CircuitLab The LEDs are placed into a series chain at the collector of \$Q_1\$. This guarantees that the currents in all three LEDs are identical since it must flow through each of them in series. A BJT collector works a lot like a "current source/sink", so the collector of \$Q_1\$ will adjust its voltage to whatever is required in order to make some specific current flow. The rest of the circuit is about setting up this \$I_{LED}\$ current. To achieve that, \$Q_1\$ needs a source of base current. \$R_2\$ supplies this. \$R_2\$ should be arranged to provide more than is needed, though. Because if it was less, the whole thing wouldn't work. And if it was just right then it would depend on knowing all of the exact and precise details about all the parts. That would mean testing each and every single one of them and calibrating them. And then hoping they didn't drift over time or temperature. So the current in \$R_2\$ should always be arranged to be a lot more, so that an added circuit can provide some control and do so regardless of part variations and temperature. That's the purpose of \$Q_2\$. By placing a small-valued resistor in the emitter leg of \$Q_1\$, all of the LED current now has to also flow through \$R_1\$. This current creates a voltage drop across \$R_1\$. By placing the \$V_{BE}\$ of \$Q_2\$ across this as well, \$Q_2\$ will now provide some control action here. Suppose the LED current is too big. This means the current in \$R_1\$ now causes a voltage drop that greatly exceeds the \$V_{BE}\$ of \$Q_2\$, too. Which means \$Q_2\$ immediately and rapidly moves towards saturation by pulling down hard on its collector voltage.
And this also means pulling downward on the base of \$Q_1\$. Doing that, of course, causes the emitter voltage of \$Q_1\$ to also move downward. And this lowers the current in \$R_1\$ back to a sane value, bringing \$Q_2\$ back into a more comfortable place and stopping further attempts to pull its collector down. In short, this sets up \$I_{SET}=\frac{V_{BE}}{R_1}\approx 150 \:\text{mA}\$. That current has to come from \$Q_1\$'s collector. So \$Q_1\$ adjusts its collector voltage as needed to achieve that. And the LEDs experience that current as well, now. Meanwhile, \$Q_2\$ is sinking some current. Since the voltage value at the base of \$Q_1\$ is \$V=2\cdot V_{BE}\approx 1.5-1.6\:\text{V}\$, we can expect the current in \$R_2\$ to be \$I_{R_2}=\frac{12\:\text{V}-1.6\:\text{V}}{4.7\:\text{k}\Omega}\approx 2.2\:\text{mA}\$. If the \$\beta\$ of \$Q_1\$ can be counted on being at least \$\beta=100\$ then this means about \$1.5\:\text{mA}\$ goes into the base of \$Q_1\$, leaving about \$700\:\mu\text{A}\$ for the collector of \$Q_2\$. As I see this right now, I feel this might be considered to be a little shy for \$Q_2\$'s collector current. But perhaps they were considering dissipation and power here (see added note below, too.) So keeping this just a little tighter (closer to the needs of \$Q_1\$) might have made some sense in this context. So long as there is sufficient extra here so that \$Q_2\$ will always work well regardless of the specific BJTs they apply. There is still the risk that \$\beta \ge 100\$ will not hold for \$Q_1\$ over all the parts they buy. In a circuit like this, I'd probably want to see the analysis across all reasonable variations of \$\beta\$ and \$I_{SAT}\$ for the BJTs and then do simulations over a wide range of ambient and operating temperatures. The LEDs themselves also experience variations regarding their own voltage drop, with temperature changes, just like the BJTs do.
And this may mean an increasing \$V_{CE}\$ for \$Q_1\$, leading to more dissipation with \$Q_1\$, shifting it in effect from the LEDs to \$Q_1\$. All boundary cases should have been examined. The BC847, in particular, is probably not so good a choice. If you look at the \$\beta\$ curves for it, it pretty much starts tapping out above some tens of milliamps. By the time you get to \$150\:\text{mA}\$, the typical curves are showing perhaps \$\beta=50\$ or a little less (over temp.) Part variation will mean you probably cannot count on more than \$\beta=35\$ or so, at these currents. And that's a problem. Because then \$R_2\$ will be the limitation and the LED currents will probably be limited to about \$80\:\text{mA}\$ in some cases. Also, then \$Q_2\$ isn't doing anything. So there's no control anymore. So this makes me think that \$Q_2\$ is there more as an over-current protection in cases where they get "good" BC847's with too much \$\beta\$ and that they don't otherwise care. Kind of a safety thing, rather than a control thing. Oh, well. Other than the above question as to the design, temperature is probably a main concern for a circuit like this. The \$V_{BE}\$ of a BJT will vary, losing about \$2\:\text{mV}\$ for every degree Celsius increase in temperature. With these currents, you can be pretty sure of plenty of dissipation here and therefore temperature increases. As the \$V_{BE}\$ declines, the LED current also declines with this circuit. So temperature increases will have the effect of decreasing the current and therefore also the dissipation, too. In this application, where a precision current isn't the goal, this particular behavior is actually a good idea because it means the circuit will over time find a balance and settle down. Good thing.
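The operating-point numbers from the answer, collected in one place. Note that R1's value is not given in the schematic text, so the 4.7 ohms below is an assumed value chosen to reproduce the ~150 mA quoted, and V_BE of 0.7 V is the usual rough approximation:

```python
V_BE = 0.7    # V, nominal base-emitter drop (approximation)
R1 = 4.7      # ohm -- assumed sense resistor value, picked to match ~150 mA
R2 = 4700.0   # ohm, base bias resistor from the 12 V rail

def led_current():
    # Q2 steals Q1's base drive whenever R1 drops more than one V_BE,
    # so the loop regulates I = V_BE / R1
    return V_BE / R1

def r2_current(v_supply=12.0, v_base=1.6):
    # Current through R2 with ~2*V_BE at Q1's base;
    # it feeds Q1's base and Q2's collector
    return (v_supply - v_base) / R2
```

These evaluate to roughly 149 mA and 2.2 mA, matching the figures in the answer, and they also show the thermal point: since V_BE falls about 2 mV per degree C, the regulated current drifts down as the strip warms up.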
H: Is it possible to make a coil-less simple Shortwave Receiver I am really a newbie here in Electronics and I'm into it as a hobby/pastime. I was going through this simple coil-less Shortwave Transmitter circuit on a blog. Q1 – BC548, X1 – 10.7 MHz ceramic resonator, 3 terminal, C1 – 10 nF, C2 – 100 nF, C3 – 100 pF, R1 – 10 kOhm, R2 – 150 kOhm, R3 – 1 kOhm My questions are: Do circuits like this work? I want to know how this circuit oscillates at a shortwave frequency without LC tanks. Is it entirely done by the ceramic resonator? Can I create a similar circuit to receive a shortwave audio signal, which won't use an old-days coil and ferrite rod? If someone could help me with a simple circuit diagram for a shortwave radio receiver which uses such a ceramic resonator I will be grateful. I couldn't find it on the web. AI: Ceramic and crystal resonators are very high-Q resonators with equivalent circuits the same as an LC parallel tank circuit here (Q = 5k to 10k). They work by passing only a narrow band in which the phase shift, together with C3, is 180 degrees, so that with the inverting transistor the result is gain with positive feedback, forming an oscillator. Which won't use an old-days coil and ferrite rod? You would need multiple stages of ceramic and crystal bandpass filters. Coils are good for the initial-stage LC resonant band filter with high passive gain. Coils are very useful and should not be ignored. But here are some ways of finding more info by adding keywords
H: Basic circuit analysis Note: This is not a homework problem; my question is different. I am currently confused about some basic behavior in a circuit. Question: Why is A not 10 volts and B 0 volts? What stops point A from having a voltage of 10 V? AI: I think you have a problem understanding the nature of electrical potential. Electrical potential is always measured relatively. That means, if we say a battery is 10V, it is measured with respect to its negative terminal. When it is connected to a circuit the potentials can be different because current is flowing. In your question, you are asked to find the voltage with respect to ground, not the negative terminal of the circuit. So you need to apply V=IR to the 2-ohm resistor and find the voltage drop across it to calculate the potential at B relative to ground. Then 10V is added to the potential of B to find the potential at A. Hope this helps :-)
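The recipe in the answer, written out with a purely illustrative loop current (the 1 A used below is not from the problem; the real value comes from solving the full circuit):

```python
R_GROUND_LEG = 2.0  # ohm, the resistor between node B and ground
V_BATTERY = 10.0    # V

def potential_b(i_loop):
    # V = I*R across the 2-ohm resistor fixes B relative to ground
    return i_loop * R_GROUND_LEG

def potential_a(i_loop):
    # A sits one battery voltage above B, not one battery voltage above ground
    return potential_b(i_loop) + V_BATTERY

# with an assumed 1 A flowing: B = 2 V and A = 12 V relative to ground
```

Whatever the actual current, A minus B always equals the battery voltage; only the reference to ground shifts, which is the point the answer is making.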
H: Declaring a vector variable in Verilog not starting with the MSB (e.g., reg var [0:20]) I am familiar with the syntax described here

net_type [msb:lsb] list_of_net_identifiers;
reg [msb:lsb] list_of_register_identifiers;

For example, to declare a 32-bit memory address, I could use syntax like

reg [31:0] address;

Also, to declare a memory comprised of 16 elements which consist of 32 bits each,

reg [31:0] mem [15:0];

However, I frequently face syntax like the below to declare multidimensional arrays.

reg [31:0] mem [0:15];

I've googled it and found one related article here. It seems that whether the larger number comes first or not, the first number before the colon (:) is the MSB and the LSB follows. It seems like the difference comes down to coding preference. However, I think mixing the [31:0] and [0:15] styles is a little bit counter-intuitive and confusing even though it depends on preference. I could see a lot of mixing, especially in the declaration of multi-dimensional arrays. Why is it allowed to be used? Is there any advantage that I couldn't notice? AI: In Verilog, the numeric index ordering of a memory is only a matter of preference except when loading the memory from a text file. The first line in the file represents the left index and, like most other languages, starts at 0, so [0:15] is the more common way to declare a memory range. Otherwise there is no advantage one way or the other in how you declare a memory range since you can only reference one memory element at a time. The same can be said about a vector if you never plan to access individual bits of the vector. But once you do, how you address each bit becomes significant to its position within an integral value. In SystemVerilog, the ordering of both the vector width and memory dimensions becomes significant because you can stream memories as a whole as well as select parts of them, just like you do with a vector.
H: ReqForHelp: Understanding Vf relationship to supplied V and A Context: I am a beginner tinkerer at electronics. Conceptually a lot of this is hard for me. But, I plod on. I very much like to use free salvaged components for all sorts of projects, and continue that by salvaging parts for electronics learning/projects. I have what was a solar-charging sensor-activated LED light. It is not intact, the housing is destroyed, but it works. It has in it an LED panel. I want to re-use this panel in other stuff, but have hit a conceptual wall about how to use the Vf I measured. The solar light uses a 3.7 V rechargeable LiPo as its storage. It has three settings via a click button. The first of these settings will send 53.9 mA at 2.57 V to the LED panel (measured on the wires between the board and the LED panel). The second sends 54.5 mA at 2.57 V. The third sends 11.75 mA at 2.28 V. The first two are "bright" and the last is "dim". I can't see any brightness difference between the two 2.57 V settings even though the current is different. The LED panel has 28 white LEDs wired in parallel. They are the sort that is flat, square, and has no wire leads. Soldered right to a circuit board with only the tiniest bit of metal showing on two opposing sides to get probes onto. Luckily I can see the traces through the paint to know it is parallel connected. In the highest current setting (54.5 mA at 2.57 V from the board to the entire LED panel) I measured the Vf across multiple individual LEDs, getting a measurement average of .985 V (not mV) with a variance of +/- .015 either side of that value. It was starting to get pretty damn hot by the end, since I had to tape over the LEDs to be able to see anything at all as I worked. But now I am confused, unsure, whatever about whether a Vf measurement at 54.5 mA and 2.57 V is useful if I want to have a 5 V supply. I tried using a couple of online calculators for parallel LEDs but they will not accept Vf numbers below 1.
Request: Could someone help me understand if the Vf I measured is a 'static' thing? I mean, is Vf just Vf no matter what or is Vf dependent on what voltage is supplied to the LED or what current is being supplied? Did I measure a Vf number that is useless to me if I want to use a 3.3 V supply or a 5 V supply? Apologies ahead of time for my thickheadedness, but I just can't make sense of all this. AI: Could someone help me understand if the Vf I measured is a 'static' thing? I mean, is Vf just Vf no matter what or is Vf dependent on what voltage is supplied to the LED or what current is being supplied? Figure 1. Small LED current versus voltage curves. Source: I-V curves. LEDs are non-linear devices. They don't have a linear relationship between current and voltage (I-V) as in the case of resistors. Rather, the current increases exponentially with voltage. If an I-V curve is available you can predict what current will flow at a given voltage. Figure 2. Variation in \$ V_f \$ (forward voltage) for the Cree LTST-C170TBKT white LED. Source: Variations in \$ V_f \$ and binning. Because there can be a wide variation in \$ V_f \$ it is generally not a good idea to directly parallel LEDs as the currents through them will then vary significantly. For low voltage applications a series resistor with each LED will help to balance the currents. For higher voltage applications series connection of the LEDs ensures the same current through each. The second sends 54.5 mA at 2.57 V. The third sends 11.75 mA at 2.28 V. The first two are "bright" and the last is "dim". I can't see any brightness difference between the two 2.57 V settings even though the A is different. The currents through the first and second are too close to perceive a difference. In the highest current setting (54.5 mA at 2.57 v from the board to the entire LED panel) I measured the Vf across one LED, getting a measurement of .986 V. There's something wrong there. 
You are unlikely to be able to see any light at that low voltage. Try measuring again. Did I measure a Vf number that is useless to me if I want to use a 3.3 V supply or a 5 V supply? Apologies ahead of time for my thickheadedness, but I just can't make sense of all this. 3.3 V is low for white LEDs. (See the binning article for more on that.) The linked articles are mine and may help your understanding.
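To make the non-linear I-V relationship concrete, here is a sketch based on the Shockley diode equation. The saturation current and ideality factor below are assumed, illustrative values (real LED parameters vary widely and are rarely published), so treat the numbers as qualitative only:

```python
import math

def vf(i_f, i_s=1e-20, n=2.5, v_t=0.02585):
    """Forward voltage from the Shockley equation, solved for V.
    i_s (saturation current) and n (ideality factor) are ASSUMED,
    illustrative values -- real LED parameters vary widely."""
    return n * v_t * math.log(i_f / i_s + 1)

v_low = vf(1e-3)    # Vf at ~1 mA
v_high = vf(20e-3)  # Vf at ~20 mA
```

Note that a 20x increase in current raises Vf here by only n·Vt·ln(20) ≈ 0.19 V: the current is exponential in voltage, so Vf is only logarithmic in current. That is why datasheets always quote Vf at a specific test current, and why a Vf measured at one operating point only shifts modestly at another.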
H: Why two separately doped semiconductors cannot be joined to form a junction? My textbook says that when a slab of P-type semiconductor and another slab of N-type semiconductor are joined, they cannot form a junction because no matter how smooth the surface is, there will always be some irregularities present in it and hence continuous contact will not be possible. Why is this continuous contact necessary? Maybe it is necessary for electrons to jump from one slab to another. But in the case of two metallic slabs, we can make electricity flow. How is this happening? In other words, how are electrons trapped at the interface of a semiconductor but not at the interface of a metal? AI: At the interface of semiconductors, the silicon structure ends abruptly, causing unwanted effects. Silicon atoms will not be able to make the usual 4 covalent bonds, causing electrons to be "trapped" more easily. These trapped states are annoying and will have several undesired effects. For all practical purposes, that destroys the diode characteristic. The whole idea of a PN junction depends on the flow of holes and electrons and how they balance out. However, in metals, there's only one game in town: electrons... Free electrons, and lots of them! Having a few irregularities at a metal contact is not that big of a deal if you have so many free charge carriers floating around. If you sandwich metal between the junctions, however, you remove any "semiconductor interaction" as you force holes and electrons indiscriminately through that sea of electrons. Holes are immediately recombined, electrons get lost in the sea. That being said, some rectifying behavior can still happen for a metal-semiconductor junction (as is the case for a Schottky diode). But it will not be the same curve as a PN junction.
H: Why does low frequency audio take up more power When I say more power, I know that fundamentally it takes more energy to move the speaker cone to produce lower frequency audio. However, in my experience, higher frequency content takes something like 10x less current than its lower frequency counterparts, and this is using similar-impedance tweeters and woofers from the same amplifier output. I know that the impedance value of the woofer doesn't drop off that rapidly, so what's with the way higher power consumption? Does a higher frequency audio signal have a smaller amplitude or something of that nature? I've been trying to figure it out for some time but can't seem to wrap my head around it. Edit 1: To clarify this, the impedance of subwoofers or woofers doesn't change dramatically for frequencies before or after the resonant spike. So if you're driving, let's say, a 10 W speaker and a 2 W tweeter with the same amp, i.e. the same voltage, how does the tweeter not get burnt out? Is there just more spectral content below a certain frequency in the overwhelming majority of songs? AI: I know that fundamentally it takes more energy to move the speaker cone to produce lower frequency audio The reason is that loudspeaker efficiency (how much sound pressure you get per Watt of electrical input power) is much lower below the resonance frequency of a loudspeaker box. Here's an example curve taken from this article: The Y-axis is SPL and is a dB (power) scale, so a 10 dB difference means a factor of 10 less power. How steeply the curve drops below that resonance frequency depends on the type of loudspeaker box: open or closed, and whether there's a bass reflex port or not. So simply more power is needed to achieve the same sound pressure level at low frequencies compared to high frequencies. What also plays a role is that our ears do not have a flat response. Our ears are most sensitive around 3 kHz. Our ears are quite insensitive to low frequencies so more sound pressure is needed anyway.
Music's frequency content reflects that: the low frequency signals are much stronger than the high frequency ones. Edit in response to Edit 1: So if you're driving, let's say, a 10 W speaker and a 2 W tweeter with the same amp, i.e. the same voltage, how does the tweeter not get burnt out? Well, the tweeter would get damaged if you fed it 10 kHz at 10 W. Most tweeters cannot handle that; they can only handle a few Watts. And that's all that is needed: a couple of Watts at 3 kHz is all it takes to make you cover your ears. At low frequencies it is much different; a couple of Watts at a low frequency isn't that loud. There are two reasons for that: 1) your ears are less sensitive at low frequencies, 2) speaker efficiency is a lot less at low frequencies. In music the tweeter does receive a lot less power than a woofer. I would say that 95% or more of the power an audio amplifier has to deliver is in the low frequencies, so up to 200 Hz or so.
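The decibel arithmetic behind "10 dB difference means a factor of 10 less power" reduces to one line; a minimal sketch:

```python
def power_ratio(db):
    """Convert a level difference in dB (power scale) to a linear power ratio."""
    return 10 ** (db / 10)

# 10 dB lower efficiency below box resonance means 10x the amplifier
# power for the same SPL:
bass_penalty = power_ratio(10)
```

The same function shows why efficiency curves are plotted in dB: a 3 dB drop already means roughly half the acoustic output per Watt.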
H: Net Tie - Design Rule Error in Altium Designer I am designing a battery charging system. The IC's datasheet says "Connection of switching ground to system ground, or any internal ground plane should be single-point." in the PCB layout considerations. For this reason I decided to use a "Net Tie" between GND and PGND. I created a net tie schematic rectangle (pin1 and pin2) and a net tie footprint. You can see below. When I use the Net Tie on my PCB I see these design rule problems: Clearance Constraint (Collision < 0.15mm) Between Pad on TopLayer and Track on TopLayer. Short-Circuit Constraint Between Pad on TopLayer and Track on TopLayer. Picture 1: Picture 2: https://s32.postimg.org/u3ttijdut/footprint.png Note: Track width is 0.255mm (okay for the design rule) Picture 3: How can I solve the problems? Do you have any advice for this problem? By the way I checked this pdf: -Net Ties and How to Use Them- https://www.smtnet.com/library/files/upload/NetTies-and-How-to-Use-Them.pdf AI: I solved this problem. This solution may be useful for those who have the same problem in the future. 1. First, you have to be careful in the schematic section. 2. Open the Schematic Library. 3. Click your component name (net tie) and you will see the Library Component Properties. 4. Change the "Type" from "Standard" to "Net Tie". 5. Save. 6. Update your footprints. 7. Use it; it will be done. By the way, don't forget to review this pdf: -Net Ties and How to Use Them- https://www.smtnet.com/library/files/upload/NetTies-and-How-to-Use-Them.pdf
H: Ribbon cable PCB connector repair I've been asked to help repair a keyboard. On examining the keyboard I found the issues (circled) in the below image: A tear in the middle of the ribbon cable A tear across the end of the short side of the cable where it entered the PCB connector My original idea to repair this was to desolder the left side, buy a replacement ribbon cable, and then resolder and connect it. Unfortunately I've been unable to locate a replacement part. My only other idea is to remove the insulation over the smaller tear and solder on small wires to bridge the tear. Then to remove the insulation on one side at the end and attempt to reinsert it into the PCB connector. Given the size of this cable I doubt my ability to solder the cables to rejoin them. Does anyone else have any suggestions for an alternate method of repair? AI: Removing the insulation in a reliable way might be very difficult. It may be impossible to have all FFC contacts work at the same time. Also, the cable will be shorter after this operation. Will the length be sufficient after mounting? I'd try to get an FFC cable with the pitch of the connector on the right, and long enough to make all connections on the left. You have different options then. One is to cut the flat cable along its length on the left side, so that each trace has a few cm of its own. You can then fold, bend and trim the traces to meet the soldering points on the left PCB with minimized deviation and stress. Removing the insulation is necessary too, but you will see whether it's sufficient when you try to solder it. The other is to extend a complete FFC cable by soldering braided insulated wires to the left side. These are easy to connect to the left PCB.
H: Differences between notch filter designs - use of op-amps I have a question about notch filter designs using op-amps and I hope that you can help. Following this tutorial https://www.electronics-tutorials.ws/filter/band-stop-filter.html, they give examples of different notch filter designs. I'm interested in the designs using 1 and 2 op-amps: Circuit 1: Circuit 2: Besides the fact that values for the resistors and capacitors have been introduced in circuit 2, what is the difference between having one and two op-amps connected in this fashion? What does A2 add to the circuit? It has something to do with the feedback, but I'm not sure how it's affecting the circuit. As a note, I'd like to know more about these circuit elements and be able to figure these things out on my own. So if you have any literature suggestions, please feel free to share :). AI: This configuration is called a "Bootstrapped Twin-T Filter". This filter is an active notch filter. The A1 op-amp is a simple voltage follower which reduces the output impedance. A2 is another voltage follower, used for the feedback. The point of the voltage follower in the feedback path is that it won't draw much current from the voltage-divider feedback circuit and simply supplies a feedback voltage to the filter. This bootstrapping sharpens the response around the target frequency (it raises the Q of the notch). Texas Instruments has a very good explanation of this design here. If you are going to build a circuit, I strongly suggest using this design. I have used this circuit to build an EEG circuit and had fabulous results! Hope this helps :-)
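For reference, the notch frequency of the symmetric twin-T network at the heart of both circuits is f0 = 1/(2πRC). The component values in this sketch are assumed for illustration only; the tutorial's actual values may differ:

```python
import math

def twin_t_notch(r, c):
    """Notch frequency of a symmetric twin-T network
    (series arm R, R with 2C to ground; shunt arm C, C with R/2 to ground)."""
    return 1 / (2 * math.pi * r * c)

f0 = twin_t_notch(10e3, 16e-9)  # assumed 10 kOhm / 16 nF -> ~995 Hz
```

The bootstrapping via A2 does not move f0; it only narrows the notch, which is why the formula has no feedback term.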
H: WeMos self-turn-off with N-channel MOSFET I'm trying to build an internet-connected button. When the button is pressed, the WeMos D1 turns on, makes a network call and then cuts off the power by pulling the IRLB8721 gate LOW. Video: https://photos.app.goo.gl/QEpIgPa3gxHatTyG3 Here's the program I currently use for testing: void setup() { pinMode(2, OUTPUT); digitalWrite(2, LOW); // Turn on built-in LED pinMode(5, OUTPUT); digitalWrite(5, HIGH); // Keep ourselves powered after button is released delay(5000); // Do the work, e.g. make a network call digitalWrite(5, LOW); // Cut off the power } void loop() {} The problem is that after the power is turned off, there's still ~20mA of current leaking, which is a deal-breaker for a battery-powered button. Disconnecting pin 5 (D1) gets rid of that for some reason. The resistor between gate and source is 676kOhm, and between D1 and the gate - 10kOhm. Thanks for your help! AI: I think, because you are using the WeMos to turn itself off, what is happening is that as the MOSFET turns off, the voltage on the pin driving it goes up. Or to put it another way, the pin driver is standing on the ground it is trying to pull up. This is what you are trying to do.. simulate this circuit – Schematic created using CircuitLab Without studying the specs on the device, that internal pull-up may or may not be present and controllable. At some point it will turn into some form of linear regulator or oscillate. You may have more success using pinMode(5, INPUT); instead of digitalWrite(5, LOW); That will allow the pull-down to do its job, assuming that pull-up does not exist or can be turned off. If it exists and can not be turned off, you need to significantly reduce the pull-down value and gate resistor and add another one to the switch line and eat the current loss through the divider. simulate this circuit However, either way, cutting its own throat may be problematic since the device may reset itself during the power-down or brown-out.
H: Analysis of simple RC circuit For the above circuit, the natural response will be A×exp(-t/RC) where A is any constant. For the forced response, the capacitor is open-circuited; consequently, the 10K resistor is also removed and thus the forced response is 10cos(2t). Plugging into the equation for V(t) and solving at t=0 I get A=-5. But still I am not getting the answer printed in the image. HELP NEEDED! THANK YOU AI: From your comments I'm assuming you already found the differential equation to solve, so I'll start from there: \$v_C + \frac{1}{2}\frac{dv_C}{dt} = 10 \cos(2t)\$ Finding the homogeneous solutions \$v_C + \frac{1}{2}\frac{dv_C}{dt} = 0\$ We can find from \$1 + \frac{1}{2}\lambda = 0\$ that \$\lambda = -2\$. Our homogeneous solution will therefore be: \$v_{C,h} = A\cdot e^{-2t}\$ Finding the particular solution \$v_C + \frac{1}{2}\frac{dv_C}{dt} = f(t)\$ where \$f(t) = 10 \cos(2t)\$ We therefore try a particular solution of \$v_{C,p} = B \cos(kt) + C \sin(kt) + D\$ \$\frac{dv_{C,p}}{dt} = -Bk \sin(kt) + Ck \cos(kt)\$ Now we need to find out \$B, C\$ and \$D\$. \$\left(B\cos(kt) + C\sin(kt) + D\right) + \frac{1}{2}\left(-Bk\sin(kt) + Ck\cos(kt)\right)=10\cos(2t)\$ We find that \$k = 2\$, \$B + C = 10\$, \$B - C = 0\$ and \$D = 0\$. This gives us simply: \$k = 2, B = C = 5, D = 0\$. Finding the total solution The total solution is the sum of the homogeneous and the particular solution. We also still need to match the initial conditions. We find that \$v_C = v_{C,h} + v_{C,p} = Ae^{-2t} + 5\cos(2t) + 5\sin(2t)\$ To find \$A\$ we can plug in our initial condition for \$v_C\$: \$v_C(t=0) = A + 5 = 0 \Rightarrow A = -5\$. So the final solution is \$v_C = -5e^{-2t} + 5\cos(2t) + 5\sin(2t)\$
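The closed-form result can be cross-checked numerically. This sketch integrates the same ODE, rearranged as dv/dt = 2(10cos(2t) − v) with v(0) = 0, using a plain forward-Euler step:

```python
import math

def v_closed(t):
    """Closed-form solution: -5e^(-2t) + 5cos(2t) + 5sin(2t)."""
    return -5 * math.exp(-2 * t) + 5 * math.cos(2 * t) + 5 * math.sin(2 * t)

def v_euler(t_end, dt=1e-5):
    """Forward-Euler integration of dv/dt = 2(10cos(2t) - v), v(0) = 0."""
    v, t = 0.0, 0.0
    for _ in range(round(t_end / dt)):
        v += dt * 2 * (10 * math.cos(2 * t) - v)
        t += dt
    return v
```

The two agree closely at any test time, which confirms both the particular-solution coefficients and the A = −5 initial-condition match.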
H: Good book on system identification for non-linear systems? Can anyone suggest a good book that describes how to experimentally identify and describe non-linear behavior in real world systems? I have had good success analyzing systems using linear methods such as fitting transfer functions to real data. However I find that I want to expand my knowledge of how to properly describe non-linear aspects of an unknown system for which a perfect transfer function may not exist. I want to know what tests to do. How to process frequency response chirp data. How to form the chirp to cover largest solution space (ie for a non-linear system the input to output describing function may depend on amplitude of the input signal. How to identify this and capture it in mathematical form??). So what are some good books on this subject that dive into it in depth? AI: The critical book on the matter is Lennart Ljung, "System Identification: Theory for the User". In reality this is an active area of research and you may be better off looking at journals, but it depends on what you want from your models. As you may be aware you can model things on a scale between white-box (models derived from physics) to black-box (models that are based solely on observations of a system). Looking at chirps you may be interested in black-box and the swept-sine technique. Here are some papers that cover interesting parts: Simultaneous measurement of impulse response and distortion with a swept-sine technique, Angelo Farina Further investigations in the emulation of nonlinear systems with volterra series, Lamberto Tronchin and Vanna Lisa Coli These identify nonlinear audio devices that are modelled using Volterra series and similar black-box models.
H: May I replace ICR 18650 22F with 26H in a laptop battery? As in the title, I have a worn laptop battery, made out of 6x SAMSUNG ICR 18650 22F. Now I can't find any 22F near me, nor slightly farther, for a sane price. But SAMSUNG ICR 18650 26H are practically swamping me from all sides. I checked the basic specifications and all seems in order apart from one. My main worries are: Are 26H only far cheaper because 22F are out of production (or low-priority production) and hence out of stock, which would explain the price difference? Or is there another reason for the 50-100% price difference? Will the chargers, controller, and computer register the battery correctly if it now has 2.6Ah per cell instead of 2.2Ah? I mean, I can get 26H with tabs already on them for still cheaper than ordering 22F (even if I don't count postage in the 22F's price) AI: It's hard to tell whether that will be a problem. Protection circuits will not mind the capacity difference. Safe voltages remain the same. But since you say it's for a laptop battery, there will be a gas gauge somewhere (probably also in the battery). Depending on what type it is and how it's programmed, this gas gauge might not be able to cope with the larger capacity, i.e. report unreliable charge states.
H: How to recognize a Germanium Diode I am in a little trouble and seeking some help. I got a bunch of mixed up diodes from an old collection. I know there are few diodes which are Germanium Diodes. But they look so similar to 1N4148 and similar transparent case diodes. The problem is, the diodes are old (but working) and it's very difficult to read the numbers printed on them. How can I identify and distinguish Germanium diodes? Can I measure something with a multimeter, or create a simple circuit to identify the Germanium diodes. I am looking for identifying diodes like 1N60 and 1N34A. I would highly appreciate your help! AI: Use this schematic to test the diodes. You can easily distinguish Silicon and Germanium Diodes. Silicon diodes should read approx 0.7V and Germanium diodes should read 0.3V. A little difficult to distinguish Schottky diodes though. They should show approx 0.2V which is close to 0.3V. If you have a very stable power supply and a good meter you can distinguish this as well! Good Luck! simulate this circuit – Schematic created using CircuitLab
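The decision rule in the answer (Si ≈ 0.7 V, Ge ≈ 0.3 V, Schottky ≈ 0.2 V) can be written down directly; the cut-off thresholds below are illustrative midpoints I chose between those figures, not standardized values:

```python
def classify_diode(vf_volts):
    """Rough classification from a forward voltage measured at a small,
    stable test current. Thresholds are ASSUMED midpoints between the
    ~0.2 V (Schottky), ~0.3 V (germanium) and ~0.7 V (silicon) figures."""
    if vf_volts < 0.25:
        return "schottky (likely)"
    if vf_volts < 0.5:
        return "germanium"
    return "silicon"
```

As the answer notes, the Schottky/germanium boundary is the shaky one: only a stable supply and a good meter make that 0.2 V vs 0.3 V distinction trustworthy.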
H: If a sweep generator gets up to 20 GHz, is it possible they only mean pulsed DC rather than AC? I'm looking at a Wavetek model 8911 10 MHz-20 GHz Programmable Sweep Generator. It says it gets up to 20 gigahertz, but is it possible that it only generates pulsed DC at that frequency as an excuse to say it gets "20 GHz"? It's quite expensive and I do not want to waste my money... Also, how can I know if a device produces an AC or a pulsed DC current? AI: A fine old instrument, I've used one back in the day. If it's working, then yes, it goes to 20 GHz. Do be aware that it's a sweep generator, not a synthesiser. That means its output frequency has a tolerance; it's not an accurate multiple of a reference frequency. Its main purpose is to provide a swept frequency out of the front, and a linear ramp out of the back, that you feed into the X input of your XY 'scope. Then with a diode detector feeding the Y scope input, you can draw graphs on the 'scope of response versus frequency, for amplifiers, wideband filters etc. That's what we used to do before RF network analysers were available. It doesn't work so well for very narrow band filters, due to the limited frequency accuracy. It doesn't produce 10 MHz to 20 GHz in one range. It has several oscillators in there covering smaller ranges. On a wide sweep that covers several ranges, it pauses the ramp while the sources switch over, to give joined-up coverage on the 'scope.
H: Current source with high resistance load What happens if the load is a high resistance (let's say 100K)? AI: The more the load resistance increases, the less voltage you'll have across the 1k resistor, because you won't be able to source 1mA once the load resistance has increased and the power supply is only 10V. That 1.6V will reduce as well, and the transistor will be as saturated as possible. Together with all of this, the base current will increase as less voltage is dropped across the 1.6K resistor. And finally, alongside all that, your 1mA current source won't hold and the current will decrease.
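The compliance limit behind this answer can be checked with simple arithmetic. The sketch below idealizes the source (it ignores the transistor's saturation and emitter-resistor drops, so the saturated branch is only a rough upper bound), using the 10 V supply and 1 mA set current from the answer:

```python
def delivered_current(i_set, v_supply, r_load):
    """Current an idealized current source actually delivers.
    ASSUMES zero transistor/sense-resistor drops for simplicity."""
    v_needed = i_set * r_load        # compliance voltage the load demands
    if v_needed <= v_supply:
        return i_set                 # source regulates normally
    return v_supply / r_load         # saturated: roughly V_supply / R_load

# 1 mA into 100 kOhm would need 100 V of compliance; with a 10 V supply
# the source drops out and only ~0.1 mA flows.
```

This is the quantitative version of "the current will decrease": above R_load = V_supply / I_set = 10 kΩ, regulation is lost.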
H: Trouble understanding timing simulations in Quartus? I have tried my ALU in the functional simulation and I get the correct waveforms. However, I am confused about how to interpret the timing simulations. What causes the ripples in the carry_out and zero signals? Also, what causes the delays in the result? AI: From the waveform it looks like you are doing a gate-level simulation. If so, what you are seeing is the delay of the signal through the gates. As for the carry: that is why it is called a 'ripple carry'. The carry of each stage depends on the carry of all the previous stages. Suppose you add 1111 and 0001. Stage 0 => 1+1 = 0, carry-out = 1. Stage 1 => 1+0+carry. But initially the carry into the stage is zero, thus stage one first produces: 1+0+0 = 1, carry-out = 0. Some time 'DELTA' later the carry from stage 0 arrives and stage 1 now does: 1+0+1 = 0, carry-out = 1. Stage 2 => 1+0+carry. Again, initially the carry into the stage is zero, thus stage two first produces: 1+0+0 = 1, carry-out = 0. Two 'DELTA' times later the carry from stage 1 arrives and stage 2 now does: 1+0+1 = 0, carry-out = 1. Stage 3 => 1+0+carry. But initially the carry into the stage is zero, thus stage three first produces: 1+0+0 = 1, carry-out = 0. Three 'DELTA' times later the carry from stage 2 arrives and stage 3 now does: 1+0+1 = 0, carry-out = 1. This 'ripple' effect is valid for any cone of logic. Even a carry look-ahead adder needs some time to settle and in the meantime it can change value several times. Your image also shows beautifully why you can not reduce the clock period arbitrarily. You have to wait for the last (slowest) signal to have settled; otherwise you clock in a changing, wrong value.
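The stage-by-stage settling described above can be modelled in a few lines. Verilog would be the natural language here, but a plain Python behavioural model keeps the sketch self-contained: each pass re-evaluates every full adder with the carries from the previous pass, which plays the role of one 'DELTA' of gate delay:

```python
def ripple_settle(a, b, n=4):
    """Iteratively evaluate an n-bit ripple-carry adder until the carries
    settle, recording each intermediate (possibly wrong) output value."""
    a_bits = [(a >> i) & 1 for i in range(n)]
    b_bits = [(b >> i) & 1 for i in range(n)]
    carries = [0] * (n + 1)
    history = []
    for _ in range(n + 1):            # n+1 passes guarantee settling
        new_carries = [0] * (n + 1)
        s = [0] * n
        for i in range(n):
            t = a_bits[i] + b_bits[i] + carries[i]
            s[i] = t & 1              # sum bit of this full adder
            new_carries[i + 1] = t >> 1
        value = sum(bit << i for i, bit in enumerate(s)) + (new_carries[n] << n)
        history.append(value)
        carries = new_carries
    return history

history = ripple_settle(0b1111, 0b0001)  # [14, 12, 8, 16, 16]
```

The transient values 14, 12 and 8 are exactly the "glitches" visible on the result and carry_out traces before the adder settles at 16, and they illustrate why the clock period must cover the full carry chain.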
H: Priority MUX - select one of the options based on previous value I have several workers that need to access a single bus (memory channel). Several of them can be ready at the same time and I would like them to alternate, with writers having priority. I have some pseudo-code with two writers and two readers to showcase this, but it's not a good solution and I need it to scale to an arbitrary number of workers (writers and readers come in pairs). I also care about resource utilization. if curr_worker = wr0 then curr_worker <= wr1 when w_rdy[1] = '1' else wr0 when w_rdy[0] = '1' else rd0 when r_rdy[0] = '1' else rd1 when r_rdy[1] = '1' else dummy; else if curr_worker = rd0 then curr_worker <= wr0 when w_rdy[0] = '1' else wr1 when w_rdy[1] = '1' else rd1 when r_rdy[1] = '1' else rd0 when r_rdy[0] = '1' else dummy; else curr_worker <= wr0 when w_rdy[0] = '1' else wr1 when w_rdy[1] = '1' else rd0 when r_rdy[0] = '1' else rd1 when r_rdy[1] = '1' else dummy; end if; And just to be sure: the when-else statement will assign the first option whose condition is true, even if the next condition is also true, correct? AI: Sounds like you want a round-robin arbiter. They certainly exist, and with some careful coding it's possible to write a parametrizable one. For example, https://github.com/alexforencich/verilog-axis/blob/master/rtl/arbiter.v is an arbiter I wrote some time ago that can operate in strict priority or in round-robin mode, with the number of inputs selectable at synthesis time with a parameter. This module uses this parametrizable priority encoder internally: https://github.com/alexforencich/verilog-axis/blob/master/rtl/priority_encoder.v .
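As a behavioural sketch of the round-robin rule (in Python rather than HDL, not synthesizable as-is, and ignoring the writers-first priority for brevity), the whole generalization of the hand-written when-else chains is "grant the first asserted request after the previous grant, wrapping around":

```python
def rr_arbiter(requests, last_grant):
    """Grant the first asserted request after last_grant, wrapping around.
    Returns None when nobody is requesting (the 'dummy' case above)."""
    n = len(requests)
    for offset in range(1, n + 1):
        idx = (last_grant + offset) % n
        if requests[idx]:
            return idx
    return None
```

In hardware this rotate-then-priority-encode structure is what the linked parametrizable arbiter implements, which is why it scales to any number of workers without enumerating cases.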
H: 'Simple 220V' relay with a 36W LED Par light I have the relay below; it needs a 3-7V control signal. I'm using an STM32F103C8T6, which is 3.3V, and it is able to control the relay successfully. For testing: - The top left connector is connected to 3.3V (from the same STM32 via the horizontal breadboard + line) - The bottom left connector is connected to a small LED and a 470 ohm resistor. This works fine (including the rest of the setup). However, before replacing the left connectors with 220V (European power) and a 36W (max) LED Par light device, is there anything I should take into account? Since I want to build it first in an enclosure to make sure I cannot touch the 220V connector directly, I would like a double check to see if I missed something. AI: Maybe include a fuse to protect against any unwanted short circuit, alongside protections for the transistor turn-off and turn-on, like reducing any dv/dt in the transistor (I suppose you're switching your relay with a transistor) and a back-EMF protection diode in parallel with the coil to protect it at turn-off. As it's an LED device, I assume the product has its protections included in the circuit. As you've stated it's working fine, I don't see there would be any problems with your circuit.
H: Understanding p-MOSFET driver I have a very dumb question about understanding p-MOSFET drivers: the simple BJT voltage-divider driver and the totem-pole driver. First I will ask about the simple BJT driver. During turn-on, Q1 is ON and the MOSFET gate charges to V1/2 through the path in violet (current flows from V1 to the gate to GND). During turn-off, Q1 is OFF and the MOSFET gate should discharge. The discharge current has to flow in R3. We expect a potential difference for current to flow. Here, from where to where does the current flow? Only V1 is connected; there is no GND connection in the discharge path. Similarly for the totem-pole driver: during the discharge phase the upper transistor will be ON for a short time, which gives a path for the MOSFET gate to discharge, but there too there is no potential difference. Please correct me if I am not clear. AI: During turn-off, the gate is charged to V1 through R3. The resulting gate voltage is V1, so \$V_{GS} = 0\$ During turn-on the gate is discharged through R2. The resulting gate voltage is V1/2, so \$V_{GS} = -\frac{V_1}{2}\$
H: How do I know what power rating pots need? EE novice here. In this post on reddit, the author wonders if 125mW-rated potentiometers are usable in the MFOS Noise Toaster. The Noise Toaster drives analog audio circuitry off a single 9V battery and AFAIK there's no voltage boost circuit involved. I believe that despite the specification of 250mW-rated pots, it may be safe to use 125mW-rated ones. My reasoning is this: 125mW at 9 volts implies ~13.8mA; a 660 ohm resistor on 9V will limit current to that level. A 1K ohm resistor will limit current to ~9mA, for 80mW, comfortably lower than the rated 125mW value. In all but one place in which potentiometers are used in the circuit, I see on the schematic that they are in series with resistors, the smallest of which is 3K, so even with the pot turned to minimum resistance, it will pass no more than 3mA or 27mW. The exception is on the audio output end of the circuit, in the case where the Toaster is driving line out instead of its built-in speaker, and R66 (a 100K pot acting as the master volume control) is turned to zero resistance. There's no fixed resistor in series there, but I think current should be limited in this case by the Q7 FET and/or the U2-A op-amp feeding it. So my questions are: Is my overall analysis reasonable, or am I misunderstanding things? Could either the LM324 op-amp or the 2N5457 transistor drive more than 13mA through R66? Would a 1K resistor in series with R66 make it safe to use 125mW-rated pots throughout? AI: This answer will smell like a comment with a tad of answer-ish elements. It looks like both of those on reddit + you are.. reasoning in a weird manner. I'll try to re-tell the story with other parameters that behave the same way. Power = Voltage × Current Area = Width × Height Resistance = Ω = V/I Aspect ratio = W/H So okay, someone is wondering if a 250 cm² frame is good enough for framing a picture. It is known that the width of the picture and the frame is 9 cm. 
Someone else is saying that 125 cm² is good enough. You are coming here trying to reason why using a 125 cm² frame is appropriate for our picture. Not even once are you or they talking about the height (current), or the aspect ratio of the picture (resistance). Don't you think that those two properties are vital to choosing the frame size? Right now you've implied that since the frame is 125 cm², then the picture is also 125 cm². Which doesn't make sense. Just because the areas are equal doesn't mean the lengths of the sides are okay. 10 cm × 12.5 cm = 125 cm². 5 cm × 25 cm = 125 cm². In other words, you shouldn't say that the resistance of the pot is 660 Ω, or 1 kΩ or whatever you are trying to imply. Start talking about the actual resistance of the potentiometers. Without that information you won't be able to make any sane decision. So just because the mW rating is okay doesn't mean that the resistance and the rated voltage & rated current are okay. If you're using a 1 kΩ pot rated for 125 mW, then it is rated for ~11.2 V and ~11.2 mA. Perhaps it's a 10 kΩ pot, or a 50 kΩ pot. If you got a picture that is 9 cm wide that has an aspect ratio of 3, then you get a frame that is 9 cm wide and 3 cm high. Knowing the area of a picture doesn't tell you anything about the length of the sides or the aspect ratio. A pot looks like this: \$(A) -R_1- (\text{Wiper}) -R_2-(B)\$ The (A) is one pin, the (wiper) is another pin and (B) is the third pin. \$R_1 + R_2 = \$ some fixed value. Say 1 kΩ. By rotating the wiper you are decreasing one resistance and increasing the other by the same amount. This means that if you hook up something between (A) and (B) then you will always read the same resistance (in an ideal world). So if a 100 kΩ pot has (A) connected to ground and (B) connected to 9 V, then this potentiometer will dissipate \$\frac{9\text{V}^2}{100 \text{ kΩ}}=0.81\text{ mW}\$ Time to turn this into a proper answer now that some more information has come to light.
Is my overall analysis reasonable, or am I misunderstanding things? Yes, it's reasonable. Now that I know that 100 kΩ was fixed and known to you, you used the correct way to solve the maximum power issue. Could either the LM324 op-amp or the 2N5457 transistor push more than 13mA? The datasheet for the LM324 states that it can source at least 20 mA and sink at least 10 mA. So yeah.. it can "push" more than 13 mA if that means current going out from the LM324. The datasheet for the 2N5457 states that its minimum current is 1 mA and maximum 5 mA. If you were to use 13 JFETs in parallel then you could bring the minimum current to 13 mA.. but... that sounds silly. So the LM324 can "push" if it means current going out, and the 2N5457 cannot "push" more than 13 mA. Would a 1K resistor in series with R66 make it safe to use 125mW-rated pots throughout? Yes it would.
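The rating arithmetic used in this answer can be collected into a short sketch: given a pot's end-to-end resistance and power rating, the maximum safe voltage across it and current through it follow directly from P = V²/R = I²R, and the worst-case dissipation of a pot wired across a fixed supply is V²/R:

```python
import math

def pot_limits(r_ohms, p_watts):
    """Max voltage across / current through a pot at its power rating."""
    return math.sqrt(p_watts * r_ohms), math.sqrt(p_watts / r_ohms)

v_max, i_max = pot_limits(1e3, 0.125)  # 1 kOhm, 125 mW -> ~11.2 V, ~11.2 mA

def dissipation(v, r_ohms):
    """Power dissipated by a pot connected end-to-end across v volts."""
    return v ** 2 / r_ohms

p_r66 = dissipation(9, 100e3)          # 100 kOhm across 9 V -> 0.81 mW
```

This is the frame-and-picture point in numbers: the same 125 mW rating means very different voltage and current limits depending on the pot's resistance.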
H: Micro USB Type B Splitter? Is there a way to connect multiple (around 3-10) devices with a single micro USB-B cable? Planning to do something similar to this: With the end result to hopefully have all devices synchronized, so that navigating to one site on one device will in turn navigate to that site on all devices. Also sorry if it's the wrong section of StackExchange or if I've used the incorrect tags :) AI: If you have some other data communication protocol (WiFi, Bluetooth) and don't need USB data transfer, then yes: you take a single 5V@15A DC power supply (if you need to feed 10 devices), and make 10 USB micro-B cables, connecting all red wires (VBUS) to +5 V, and all black wires (ground) to 0 V. If you also connect all green and white wires together (NOTE1), all your devices will sense the Chinese-style (most common) charger signature, and will more or less happily power up and charge. NOTE1: you might need to short the green-white wires individually for each forked cable, to avoid possible interference; who knows how your devices determine the D+/D- short.
H: Nyquist Sampling Rate Problem I am really confused with the above problem. I doubt the solution. According to me, Sampling Frequency of x(t) = HCF(5, 12.5) = HCF(5, 25)/LCM(1, 2) = 5/2 = 2.5 Hz. Sampling Frequency of y(t) = 3 × 2.5 = 7.5 Hz. Nyquist Sampling Rate = 2 × 7.5 = 15 Hz. Where am I making the mistake? Please help me with this problem. EDIT: I got confused because I was taught the following example in class and I tried to use the same here. Q: Find the fundamental time period for x(t) = sin(22πt) + sin(7πt + 30°). f = HCF(11, 7/2) = HCF(11, 7)/LCM(1, 2) = 1/2. T = LCM(1/11, 2/7) = LCM(1, 2)/HCF(11, 7) = 2. So, Fundamental Time Period = 2 using either of the above methods. Is there a difference between sampling frequency and the fundamental frequency? AI: The solution is correct, here's how I prefer to solve it. Understand that if we can correctly identify frequency A, and A is greater than frequency B, then we can also correctly identify frequency B. Here, \$25\pi\$ is greater than \$10\pi\$, so we can ignore the \$10\pi t\$ term and only focus on the \$25\pi t\$ term. Delays don't affect the frequency, so we can ignore the \$+9\$ in the \$y(t)\$ function. At \$t=1\$, one second has passed and we can read the data straight out of the \$e^{i25 \pi t}\$; if we plug in the \$3\$ from \$y(t)=x(3t)\$ we get \$e^{i75 \pi t}\$. One revolution is \$2\pi\$, which means we divide \$75\pi\$ by \$2\pi\$ to get \$37.5 \text{ Hz}\$. Then we multiply \$37.5 \text{ Hz}\$ by \$2\$ for the Nyquist rate, which is 75 Hz => 75 samples/sec.
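The arithmetic in the answer above is short enough to check in a few lines of Python; this just replays the steps (take the dominant \$25\pi\$ rad/s term, scale by 3 for \$y(t)=x(3t)\$, convert to Hz, double):

```python
import math

w_max = 25 * math.pi           # dominant angular frequency in x(t), rad/s
w_y = 3 * w_max                # y(t) = x(3t) compresses time by a factor of 3
f_y = w_y / (2 * math.pi)      # convert rad/s to Hz: one revolution is 2*pi
nyquist_rate = 2 * f_y         # sample at least twice the highest frequency

print(f_y, nyquist_rate)       # ~37.5 Hz, ~75 samples/sec
```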
H: Why is the inverter needed in this ALU? I'm studying computer architecture and I wonder why the inverter is needed in this ALU? AI: Inversion of an input is used in some operations, including but not limited to subtraction, XNOR, and of course inversion itself. It can also simplify some operations when used as an input of an AND or OR gate.
H: Why is the load on a darlington pair put before the transistors rather than after? Forgive my lack of understanding and if this is answered somewhere I have not found, but... I have seen lots of answers with regard to how to use a darlington pair to switch, say, a 12V 1A load using a 3.3V 50mA signal (just an example; not all values are determined for my application), but what I'm not clear on is why, in the solutions using a darlington pair, the load is always placed on the high side of the transistor(s). While switching the low side is common and functionally fine, in my application, for safety reasons, I would prefer to switch the high side of the load. So, is it OK to put the load on the output of a darlington pair? AI: It depends on how well the darlington transistor is heatsinked, and whether its extra voltage drop matters. When a darlington pair is used with a collector load, the voltage drop is around 1 V: a VBE of around 0.7 V for the first transistor, and a VCEsat for the second. When a darlington pair is used with an emitter load, the voltage drop is in the range of 1.5 V: a VBE for each transistor, and often a bit more depending on how hard they need to be driven. That's assuming the base voltage can be taken all the way to the positive rail. If it can't, and all devices need some drop across them to source current, then that adds extra voltage drop directly to the output. Increased voltage drop means not only lower load voltage, but also increased darlington heating. There are PNP darlingtons available, so that you can do positive-side switching with the load in the collector.
H: Voltage drop across a resistor I have been trying to figure this out, kind of an amateur question actually! Looking at the above circuit diagram, I am wondering why there is no voltage drop across the resistor R1? Part of the voltage should be dropped across R1, shouldn't it? But the simulation in Proteus shows that the voltage across both R1 and R2 remains the same. Can anyone help me out on this one please? AI: I'm going to try to combine the other two answers and bring it down in level just a notch. All of this, as Innacio noted, is based on a simulation of ideal components. In real life, the results are slightly different. First, because the voltage is DC, the capacitor will fully charge and effectively become an open circuit. Just pretend it isn't there. The volt meters are infinite impedance and so draw zero current. Now look at Ohm's law: voltage = current × resistance. Because the current draw is zero, the voltage drop over each resistor is also zero. Thus, you see 9 volts at both nodes. In real life, this will be different, though maybe not enough to see. The cap will have leakage current, effectively becoming another resistor. However, its effective resistance may be large enough to dwarf the actual resistors. Also, the volt meters will draw some current, but again, it may be so small that it has little effect.
H: Diode in MOSFET symbol Is there any difference between MOSFETs with a diode in their symbol and the HEXFET? Also, is there a difference between MOSFETs with a diode in their symbol and other MOSFETs? AI: MOSFETs with a diode in their symbol are power MOSFETs, i.e. a class of MOSFETs whose structure has the channel between source and drain oriented "vertically" in the planar structure of the chip. They are sometimes also called vertical MOSFETs for this reason, and they are also designated by the acronyms DMOS, VMOS or VDMOS (these acronyms refer to the shape of the structure viewed in the cross-section of the chip, or to the fact that the structure is vertical). This allows greater power dissipation and handling of higher power, compared to "older" lateral MOSFETs, whose channel "lies flat" on the chip surface, like the following image shows: The vertical structure implies that a parasitic diode is formed across source and drain, which is why that diode is almost always depicted in the symbol. Power MOSFETs comprise a large array of specific technologies, developed by individual manufacturers, which go under a plethora of trademark names, such as HEXFET, TRENCHMOS, etc. They are all power MOSFETs and they share the same symbol. HEXFET is just the trademark name of a power MOSFET by International Rectifier, so there is no difference between a power MOSFET and a HEXFET in the sense that a HEXFET is just a power MOSFET produced using a specific proprietary technology. Note that, in reality, a power MOSFET (intended as a discrete device in a package) is made up of several individual MOSFETs (called cells) connected in parallel inside the chip. This is done to optimize efficiency and power handling capability of the device. Keep in mind that "power MOSFET" doesn't necessarily mean "high power".
The term was coined when the only MOSFETs available were tiny devices that could handle only milliwatts of power; therefore, when the new technology became available, the new devices were dubbed "power MOSFETs" because they could handle much more power. Taking as examples jellybean devices common nowadays, the 2N7000 is still a power MOSFET even if it can handle only 350 mW max, whereas the IRFZ44N can handle 94 W! Nowadays "older" lateral MOSFETs are very specialized devices, rarely used as discrete components. Instead, they are used heavily in digital logic: the ubiquitous CMOS technology, which probably covers 99% of modern digital technology, makes use of complementary MOSFET (P-channel and N-channel) transistors as basic building blocks. Note that I keep saying "older" lateral MOSFETs; this is to avoid confusion with a more modern technology used to make power MOSFETs which employs a lateral (i.e. non-vertical) structure. These are devices optimized for linear power applications (i.e. where the transistor works as an amplifier and not as a switch), whereas the classic vertical power MOSFET is more suited for switching applications. EDIT (to answer a doubt expressed in comments and clarify some points): The choice of the diode symbol, rectifier vs. Zener, is somewhat arbitrary. The Zener symbol is chosen, most probably, to highlight the fact that, even when the MOSFET is OFF, there is a limitation on max Vds because of that diode entering breakdown. Many devices are characterized in that sense. See for example the 2N7000 datasheet I linked above (yellow emphasis mine): As with any diode, bringing the device into breakdown puts you at risk of damaging it. Entering breakdown is not in itself harmful, but in that region the current increases very quickly, and consequently so does the dissipated power.
Actual Zener diodes are well characterized and their breakdown voltage is specified within a well-defined range, therefore you can always control and limit the current so that the power doesn't exceed the max ratings of the device. In a MOSFET, or other non-Zener diodes, the BD voltage is usually given only as a minimum value, i.e. they give you that value so as to guarantee a maximum safe value for Vds. They don't specify a max BD-voltage value. This means that, taking that 2N7000G as an example, you may enter breakdown at (say) 60 V, 70 V or even at 80 V. Therefore you have no means, reading the datasheet, to guarantee that the power dissipation is under control: if you apply 65 V, for example, you could have barely entered BD, so that the VI product is smallish and can be handled by the device, or you could be in full BD, where the current is huge and the VI product exceeds the device ratings.
H: Is mains 220 VAC rms, peak to peak or max amplitude? Please excuse me for such a basic question, but I can't seem to find a clear answer to it. When we say that mains power is 220 VAC, or 230, 110 or whatever value, is that the signal's amplitude, the difference from peak to peak or the RMS value? AI: When we talk about AC voltages, if we don't explicitly state RMS then we imply RMS (unless some other term is used like peak or peak-to-peak).
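For a sine wave the three quantities are related by fixed factors: peak \$= \sqrt{2}\times\$RMS and peak-to-peak \$= 2\times\$peak. A quick Python sketch for the common mains voltages mentioned in the question:

```python
import math

def sine_levels(v_rms):
    """Peak and peak-to-peak amplitude of a sine with the given RMS value."""
    v_peak = v_rms * math.sqrt(2)
    return v_peak, 2 * v_peak

for v_rms in (110, 220, 230):
    v_pk, v_pp = sine_levels(v_rms)
    print(f"{v_rms} V RMS -> {v_pk:.0f} V peak, {v_pp:.0f} V peak-to-peak")
```

So 230 V mains actually swings to roughly ±325 V, about 650 V peak-to-peak.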
H: GPIO output corrector It was suggested to me that GPIO outputs on the Raspberry Pi cannot be used as a reference voltage: for example, GPIO TRUE might give only 3.2V and GPIO FALSE could give 0.1V. On the other hand, according to Exploring the 3.3V Power Rail, the 3.3V rail should be rather stable. What I want to do is to correct the GPIO output so that it really returns something very close to the rail voltage (3.3V) when TRUE and something very close to ground voltage (0.0V) when FALSE. I would also be satisfied with the inverted-logic solution, i.e. one that returns something very close to rail voltage (3.3V) when FALSE and something very close to ground voltage (0.0V) when TRUE. I know that in principle inverters should do such a job, but I don't know how precise they are. I have several questions: What is the simplest circuit to do the job? Would ready-made inverters be good enough for the inverted-logic solution? Specification for the circuit: output current less than 1 mA, TRUE/FALSE or FALSE/TRUE voltages on OUT less than 0.04V from rail voltage and ground voltage. I already assumed (hopefully I am right) that the rail voltage is within a few percent of 3.3V. @Andy aka's solution: Reference output voltage can be regulated between 0V and 1V. For GPIO=FALSE, OUT equals the reference voltage. For GPIO=TRUE, OUT is (according to my calculations) less than 1 uV. (Could you please confirm this is your proposed solution?) AI: From a comment by the OP: I need a relatively precise reference voltage for a current controller. With GPIO ON the voltage should be, say, 0.5V; with GPIO OFF the voltage should be 0 V, which would stop the current completely. Negative logic is also OK. This implies to me you would use a resistor potential divider to create 0.5 volts from 3.3 volts so... The simplest solution would be to hard-connect the top resistor to +3.3 volts and use an N-channel MOSFET to clamp the reference input (previously 0.5 volts) to 0 volts.
H: Connecting a Philips Hue LED output to a relay I followed an Instructables tutorial to get some understanding of what pins can be used from the Hue light bulb. Out of all the "exposed" pins, I focused on TP1 - Ground and TP6 - LED output (basically the top-left connector pins). So my end-game is to have a relay controlled by the Hue app and turn on/off a 220VAC circuit (low power consumption, so a 10A/250V solid state relay like the one mentioned in the tutorial is just fine: Fotek SSR-40 DA-H). But when measuring TP6, I found that it outputs 9VDC when the LEDs are off and a maximum of 57.7VDC when the LEDs are at 100%. This is probably because I'm using newer bulbs than the ones used in the tutorial (I also didn't have the big black rubbery insulation on the circuit board). I figured that the next steps would be: offset the output voltage so instead of 9-57.7 I would get 0-48.7 (by using an op-amp); use a voltage divider to get a 0-12 VDC output (so roughly a 1/4 divider). Is this the right way to go? If yes, can you recommend an op-amp that would do the job and some resistor characteristics I should consider (I'm mainly concerned about heating)? AI: simulate this circuit – Schematic created using CircuitLab From the information you have provided, a Zener diode would suffice. It won't conduct until the voltage exceeds the breakdown voltage. I'm suggesting a 15 V type to give a bit of margin, since you have plenty of "headroom". Power dissipation will be \$ V_Z \times I_{IN} \$ so do your calculations!
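To size the parts, it helps to put numbers on the answer's \$V_Z \times I_{IN}\$ formula. A minimal sketch; the SSR input current here is an assumed figure, so check the Fotek SSR-40 DA-H datasheet for the real one:

```python
def zener_dissipation_w(v_zener, i_in_a):
    """Worst-case power dissipated in the Zener: P = Vz * I."""
    return v_zener * i_in_a

# Assumed numbers: 15 V Zener (as suggested) and ~10 mA of SSR input current
p = zener_dissipation_w(15, 0.010)
print(f"{p * 1000:.0f} mW")  # 150 mW, so a 0.5 W Zener leaves plenty of margin
```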
H: How to analyze an analog output waveform in Modelsim I want to analyze an output sine signal from my VHDL code in Modelsim. I have created a project and simulated the code. However, the output in the "Wave" window is in binary form, represented as a 32-bit output. I tried to change my signal to an analog signal in the wave properties, but nothing really changed. I would like to see an analog output signal in decimal format which resembles a sine wave. How do I do this? AI: Yes, you can. On the Wave window, select your signals and right-click. Under 'Format', select 'Analog' to see your signal as an analogue wave. If you have a lot of data, you could write VHDL to write your data values into an output file in .CSV format (Comma-Separated Values). Then you could import that .CSV file into Excel and plot a graph from it.
H: Full AM signal vs DSB-SC I conducted an experiment where I constructed a full AM signal and changed the message signal's amplitude to change the modulation index of the full AM signal. I then increased the amplitude of the modulator chip until a DSB-SC signal was made. Although they look really similar, I was wondering what the difference between them is. It is clear from the pictures that the amplitude of the DSB-SC signal is lower than the full AM. I also understand that the DSB-SC signal does not carry a carrier signal. Can somebody tell me what the difference between the two pictures I have uploaded is, please? AI: Look at the two closely. There is a significant difference, even if just looking at the amplitude envelope. Look at just the top envelope of the AM signal. It is the same sine as the carried signal. However, note that the tops of the other signal are effectively the absolute value of the sine. This results in two obvious differences you really should be able to spot: the dips are "sharp", and the envelope frequency is doubled. What you can't see on the scope at this magnification is that the lower carrier has its phase flipped 180° every hump. One way to think of this is over-driven AM. When the negative peaks of the carried wave go lower than what causes the carrier to have 0 amplitude, it causes a sort of "negative amplitude", which is the carrier with inverted phase. To get some intuition, start with nearly 100% modulated AM, then crank up the amplitude of the carried signal and see what happens at the negative peaks. As the carrier gets multiplied by a negative number, its phase gets flipped. As the magnitude of that negative number increases, the amplitude of the carrier increases, but still with flipped phase. This is exactly what you should expect to happen when a sine (the carrier) is multiplied by a negative number.
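The envelope difference described above is easy to verify numerically. A small sketch in plain Python, with an illustrative message frequency: in full AM with modulation index \$m<1\$ the envelope term never crosses zero, so the carrier phase is preserved, while in DSB-SC the message itself multiplies the carrier and goes negative every half cycle, flipping the phase:

```python
import math

fm = 50.0                                  # message frequency (Hz), illustrative
ts = [n / 100_000 for n in range(2_000)]   # 20 ms at 100 kS/s: one message period

def message(t):
    return math.sin(2 * math.pi * fm * t)

# Full AM: the carrier is multiplied by (1 + m * message), with m = 0.8 here
am_env = [1 + 0.8 * message(t) for t in ts]
# DSB-SC: the carrier is multiplied by the message directly, no DC offset
sc_env = [message(t) for t in ts]

# AM envelope stays positive; DSB-SC envelope changes sign (180° phase flips)
print(min(am_env) > 0, min(sc_env) < 0)  # True True
```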
H: LM317 voltage regulator with high-current boost by 3 NPN transistors I am building a rectifier (~230V±10%/50Hz) with a voltage regulator from scratch. My output needs to be 3.3V with a maximum current (before the short-circuit protection triggers) of 16A. I have chosen three NPN transistors (2N3055) in parallel to boost my current up to 16 amps and calculated the respective resistors, powers and temperatures in order to work properly with real elements. However, I experience problems in my simulations and the output is nowhere near what I expect. Vin is what I will have after my transformer, rectifier and LC filter: around 7V with 80mV pulses. Ic through each transistor should be ~5.33A. However, during a simulation with a load of 0.22 Ohms, a current of 15A is expected but this is what I get. Vout is around 2.1V and Iout is 9.45A. Why so? I also want to add a short-circuit protection that is triggered by Iout greater than 16A. I guess it should be with a PNP transistor and a resistor, but I am not sure how to calculate it and where to connect it. Here is the 2N3055 datasheet 2N3055 and LM317 datasheet LM317 AI: Vout is around 2.1V and Iout is 9.45A. Why so? Your feedback is being taken from the OUT pin of the regulator chip. The voltage there is about 3.3 V. But from there to the load, there is a Vbe drop (probably 0.8-1.0 V with these kinds of currents), and a drop across the 0.11 ohm resistors (0.5-0.6 V when you get to the load current you want). \$3.3-1.5 \approx 1.8\$, but the output will be a bit higher since reducing the output voltage also reduces the load current. So your result is roughly in the range you should expect. Try taking the feedback from the actual node where you want to regulate the voltage (it would be easier to say which one if you labelled your nodes).
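The voltage arithmetic in this answer can be sketched in a few lines. The Vbe and resistor-drop figures below are the rough estimates from the answer, not measured values:

```python
v_reg = 3.3               # voltage the LM317 regulates at its OUT pin
v_be = 0.9                # estimated 2N3055 base-emitter drop at high current
r_emitter = 0.11          # ballast resistor per transistor (ohms)
i_per_transistor = 5.33   # 16 A target shared across three parallel devices

v_load = v_reg - v_be - r_emitter * i_per_transistor
print(f"{v_load:.2f} V")  # ~1.8 V at the load instead of the intended 3.3 V
```

Taking the feedback from the load node instead makes the regulator raise its OUT pin until the load itself sits at 3.3 V, absorbing both drops inside the loop.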
H: Bit sequence from USB HID Joystick I wanted to connect a USB HID controller to an Arduino and use it to control a remote-controlled car. But I'm unable to find what bit sequence is sent when a button is pressed on the joystick. I would like to find what that sequence is. I'm running a dual-boot system with both Linux and Windows, so software that works on either one is fine. AI: But I'm unable to find what bit sequence is sent when a button is pressed on the joystick. I would like to find what that sequence is. There isn't one. USB devices operate on a polling basis. They do not communicate over the bus outside of a transaction from the host. Getting to the point of communicating with the device takes a significant amount of work to enumerate the device and configure it with an address. Even once that's all done, there isn't a specific message used when a button is pressed. The status report response from the device will have a field which includes a bitmap representing the buttons on the joystick -- one of the bits in that field will be set when the button is being held down, and clear when it is not. Since the Arduino lacks a USB host peripheral, and runs at a relatively low speed (8-16 MHz) compared to the USB line rate (1.5 or 12 Mbit/sec), it will be excruciatingly difficult, if not impossible, to implement a USB host on an Arduino device. Use an accessory which implements a USB host for you, like the USB Host Shield, or use a different microcontroller which supports USB host operation.
H: Need help understanding the status output generation of an ALU I'm currently trying to implement a simple processor using Verilog in an FPGA. I'm using the Mic-1 architecture as a reference model. The thing I can't understand is that the ALU generates a "status" output when a certain operation is completed. How does the ALU know exactly when a certain operation is completed? Each operation may take a different number of clock cycles to complete. For example, multiplication may take many more clock cycles than a single addition because multiplication needs several bit shifts and additions. Should this time delay be pre-defined in the ALU? The Mic-1 architecture is shown below. AI: Your ALU will have a state machine which performs any multi-cycle operations. This state machine is what controls the 'end' of an instruction. There is no reason that operations must take more than one cycle, but as soon as you permit this, you need to track the phases of a single instruction. You might also want to consider the impact of stalling on either instruction fetch or data load/store (since you are soon likely to come across bus infrastructure which involves arbitration); these can maybe give you some ideas about what can work in 'lock-step' and when you need to wait. The architecture you show here looks like a single stage, with no pipelining. There is no reason why your ALU can't consume multiple cycles and allow the rest of the machine to advance only once it's stable. A multi-cycle stage will either be clocked (shift/operate for each bit), or a single deep combinatorial path (which will be harder from a synthesis-constraint point of view). Typically, you might try to make your slowest path an easy multiple of the input, and maybe stall for 3 cycles (as an example). Sometimes you can look at a couple of input data bits to decide how much time the logic needs to become stable.
H: Do waterproof RJ45 connectors with integrated magnetics exist? I've been searching for days and can't seem to find a rugged RJ-45 Ethernet connector with integrated magnetics. Probably it would be good enough if the connector was just potted so water cannot leak through. Is there a reason why such items don't exist? AI: Probably not. Waterproof connectors are typically mounted to a bulkhead and Ethernet connectors with integrated magnetics have to be mounted close to the PHY chip. Ruggedized RJ45 is kind of a sloppy thing anyway. People who care about ruggedized connectors (and care a bit less about cost) often use a different connector than RJ45 for 'Ethernet' (such as M12/military circular connectors).
H: Is it possible to recover an over-modulated waveform? Is it possible to recover an over-modulated waveform? I have searched the net up and down but I constantly come across the same answer: 'It is not possible to fully recover a full AM signal using an envelope detector'. I do understand this, as an envelope detector only tracks the positive parts of the signal. So how do we track/demodulate an over-modulated waveform? AI: I'm no amateur radio person, just someone who took a bad course in telecommunications, so take this answer with a pinch of salt. I'll just explain how I'd do it if I were given this task. Alright, let's go through how an AM signal might look. \$x(t)=(1-\sin(\omega t))\times\sin(\omega_ct)\$ The \$\omega_c\$ is the carrier frequency. This will result in a signal that looks something like this: And you can use an envelope detector to convert it back to \$\sin(\omega t)\$, nothing weird so far. But let's over-modulate the signal like this: \$x(t)=(0.5-\sin(\omega t))\times\sin(\omega_ct)\$ Which will result in a signal that looks like this: Now, how can we convert this one to something like the previous one so we can yet again apply an envelope detector to acquire our \$\sin(\omega t)\$? Well, if you remember your trigonometry, then you will recall that the only way to get a DC term when multiplying sine waves together is if they have the same frequency. In other words, on the receiver side you will generate the carrier signal and match the phase of the signal to maximize the DC value. So let's say we have a local oscillator in phase with the carrier and oscillating at the same frequency; that would look like this: \$\small{y(t)=x(t)\times\sin(\omega_ct)=(0.5-\sin(\omega t))\times\sin^2(\omega_ct)=(0.5-\sin(\omega t))\frac{1-\cos(2\omega_c t)}{2}}\$ \$y(t)\$ would look something like this: Now you can apply an envelope detector and get a clipped sine wave; you won't get the negative part.
With some filters the clipped sine wave can look more like a regular sine wave. There's probably something very obvious that I've missed. FYI I haven't visited your other 2 questions and read their answers that apparently can solve your problem. Edit: Why yes of course, just add a LP filter, add some DC value and there you go.
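The synchronous-detection idea above can be sketched numerically: multiply the received over-modulated signal by an in-phase local carrier, then low-pass filter to strip the \$2\omega_c\$ term. The frequencies here are illustrative, and a crude moving average stands in for a proper LP filter:

```python
import math

fm, fc, fs = 5.0, 500.0, 50_000.0            # message, carrier, sample rate (Hz)
ts = [n / fs for n in range(int(0.4 * fs))]  # 0.4 s: two message periods

# Over-modulated signal from the answer: x(t) = (0.5 - sin(wm t)) * sin(wc t)
x = [(0.5 - math.sin(2 * math.pi * fm * t)) * math.sin(2 * math.pi * fc * t)
     for t in ts]

# Multiply by an in-phase local oscillator: sin^2 = (1 - cos(2 wc t)) / 2
y = [xi * math.sin(2 * math.pi * fc * t) for xi, t in zip(x, ts)]

# Crude LP filter: averaging over one carrier period cancels the 2*fc ripple
n = int(fs / fc)
lp = [sum(y[i:i + n]) / n for i in range(len(y) - n)]

# What should remain is (0.5 - sin(wm t)) / 2: the message plus a DC offset
expected = [(0.5 - math.sin(2 * math.pi * fm * t)) / 2 for t in ts[:len(lp)]]
err = max(abs(a - b) for a, b in zip(lp, expected))
print(f"max recovery error: {err:.3f}")  # small compared to the signal swing
```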
H: High Step-up Ratio DC-DC Converter (12V-to-150V) - Boost, Flyback, Coupled-inductor Boost? I am trying to design an on-board DC-DC converter that generates 150V from a 12V input. I'd like to design it to handle at least a 30-50mA load, but in reality the load will most likely be smaller. The output does not need to be isolated - it can share a common ground with the input. The other circuitry on the PCB is relatively sensitive to noise, so conducted and radiated emissions are a concern with whatever design I end up choosing. My first idea was to use a simple boost converter topology, but the step-up ratio seems to be close to the upper end of what most off-the-shelf boost controller ICs can handle. For a boost converter, DutyCycle = 1 - (Vin/Vout) = 1 - (12V/150V) = 92%. Some controllers might be able to produce a 92% duty cycle, but I'd like to have some more margin in the design, so a simple boost probably won't work. I've looked into some alternatives, but I don't have enough experience with any of the more complicated switcher designs to really understand the pros and cons. Here's a list of potential options: -Flyback: A flyback seems like the most straightforward way to generate the 150V. I've designed a 24V-to-70V flyback using a dedicated flyback controller from Linear Tech with an off-the-shelf transformer designed for flyback applications. I would imagine I could pretty easily design a flyback circuit for my application, assuming I could find a transformer with a high enough winding ratio. My concern with using a flyback is the noise that it generates. I know there are ways to suppress some of the noise in a flyback, but, from what I understand, a flyback is inherently pretty noisy. -Coupled-Inductor Boost Converter: There are some application notes out there that detail using a coupled inductor in a boost setup, which allows higher step-up ratios with smaller duty cycles (https://www.onsemi.com/pub/collateral/an-5081.pdf).
I am also concerned with noise in this topology, since it seems like any leakage inductance would cause unwanted emissions that would be difficult to contain. The switch and diode would need to be rated for a relatively high voltage, but that's not really much of a concern for me. -SEPIC: To be honest, I don't know much about SEPIC converters, so I can't really speak to the potential pros and cons. I don't even know if it can produce the step-up ratio that I need; I just wanted to get it on the list anyway in case someone has more insight. -SEPIC Multiplied Boost: I found an app note from Analog Devices that describes a topology that they call the "SEPIC Multiplied Boost Converter" (http://www.analog.com/media/en/technical-documentation/application-notes/AN-1126.pdf). I could not find any other information on the internet about this topology, so it's unclear if it's been widely adopted, or still just exists as a curiosity in some random app note, but it looks like a good candidate. It would not suffer the effects of leakage inductance, so there would be less noise. I am a little hesitant, however, since it seems like a moderately novel and complex design without a lot of documentation out there. -Others? My question is, for someone with more experience in designing switchers and power electronics, how would you approach this problem? What topologies would you use? Are my concerns over noise and high duty cycles valid? AI: As you've already stated yourself, you'll have a hard time finding an OTS flyback transformer. Custom-made, and I'd prefer this solution too. Another approach that I would personally pursue is a 2-stage boost converter. Duty factors are within reason and the first stage will be easy. The second stage in principle is just as easy, but the trouble will be finding a controller that can handle the high output voltage. There is little chance of finding one with an integrated FET. This approach can use shielded inductors, which can be small if a high switching frequency is used.
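The duty-cycle relief from the 2-stage approach is easy to quantify with the ideal boost equation \$D = 1 - V_{in}/V_{out}\$. Splitting the ratio evenly between the two stages is an assumption here; a real design might split it unevenly:

```python
def boost_duty(v_in, v_out):
    """Ideal (lossless, continuous-conduction) boost converter duty cycle."""
    return 1 - v_in / v_out

# Single stage: right at the edge of most controllers' duty-cycle limit
d_single = boost_duty(12, 150)

# Two stages with the ratio split evenly: each steps up by sqrt(150/12)
v_mid = 12 * (150 / 12) ** 0.5   # ~42.4 V intermediate rail
d1 = boost_duty(12, v_mid)
d2 = boost_duty(v_mid, 150)

print(f"single: {d_single:.0%}, per stage: {d1:.0%}")  # 92% vs ~72%
```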
H: Digital Potentiometer Capacitive Feedthrough? I am trying to use a digital potentiometer to reduce the amplitude of a square wave. I am getting what appears to be capacitive feedthrough on the rising and falling edges. This is getting amplified by later stages and causing glitches in the output. Looking at the scope screenshot below, the cyan trace is the ~12V/us square wave input and the yellow trace is the wiper output of a MAX5387 100K digital pot set for about a 96K : 4K divide. After recovering from the initial glitch, the yellow trace settles exponentially to the expected value after about 4us. I have looked up capacitive feedthrough for digital pots and have not found anything. It is not spec'd in any datasheets that I have seen. ADI lists digital feedthrough for some of its pots, but that's about it. I assume that digital pot capacitive feedthrough is a known phenomenon. Any idea which particular pots minimize it or how to mitigate it in general? At this point my only option seems to be slew-rate limiting the input square wave to a slope similar to the post-glitch RC slope. This takes an extra op-amp stage. I don't want to just put an RC on the input, since I need the edges to be symmetric and as fast as possible. Any other ideas on how to avoid the initial glitch? AI: Some capacitive coupling should occur, although it depends on the internal circuitry. The datasheet of your digipot does not show frequency response curves. Also, high resistance values make parasitic capacitance more problematic. This one does show the frequency response, and the high-frequency part of the graph does not seem to do anything funny. I remember seeing digipots with a peak in the HF part of the frequency response, but can't find a datasheet illustrating this right now. Thus I would recommend selecting a digipot with lower resistance, and one that has a specified frequency response without an HF peak.
H: Reading from Analog Magnetometer with Raspberry Pi I'm trying to read an analog magnetometer using the Raspberry Pi Zero. I never worked with the Pi directly before; it was always through an Arduino hooked up to the Raspberry Pi. However, through research, I found that the Pi doesn't really play well with analog sensors, so I have to convert the analog signal to a digital one in order for the Pi to read it. Aside from reading it out through an Arduino, Adafruit mentioned that you can wire your analog sensor to the MCP3008 to convert it to a digital input. A very rough schematic is shown below: simulate this circuit – Schematic created using CircuitLab My choice of magnetometer aside, what design considerations should I take into account when working with ADCs? Do I need to do anything to ensure that the clock and DIN/DOUT lines are stable? Is it advisable to have AGND and DGND be hooked up to the same GND plane with the Pi? This circuit feels a little too simplistic, so is there anything else that is required to make this work? Just in case anyone asks, I'm using a magnetometer as part of a project to measure a 10 tesla field and record the data with a Raspberry Pi. AI: This isn't specifically for ADCs, but for mixing analog and digital circuits in the same device. Digital circuits are usually fairly noisy, and you want to keep that noise away from the analog part of your circuit as much as possible, and part of that is keeping the two systems separate. Firstly, you want to ensure that you have good decoupling on your power rails, and it looks like you've done that. Second, you want to keep the digital signals (in your case the data lines and especially the clock) away from any analog signals if at all possible to minimize crosstalk. Finally, you want to separate the digital power supply if you can, and have a single connection to a rail if you can't. 
In your case, this would mean connecting the AGND and ground for your magnetometer together, along with decoupling components, and having a single wire or trace connecting to the digital ground, same for power. Depending on how accurate you're trying to be, these may not really be necessary, but they do help for more demanding analog circuits. Edit: Regarding the single ground connection, imagine you have an analog ground plane and a digital ground plane, and your analog plane connects to the digital plane at a place where there is 0.1V noise. Assuming you have good decoupling, this shouldn't be an issue because your Vcc rail will have the same noise placed on it, and your analog signal will only see the difference between Gnd and Vcc, which should be fairly stable. Your analog signals are going to have 0.1V of noise referenced to the power supply/digital ground, but since the noise is the same for the entire analog circuitry, it doesn't matter. Now let's assume you have your ADC connected to that same place with 0.1V noise, and your magnetometer connected to a place where there is 0.2V noise. Again, the power supply for each chip will have the same noise presented to it (because of decoupling), so separately, the chips won't see the noise. However, when you feed the signal from the magnetometer (Which has 0.2V of noise on top of it), to the ADC (whose ground has 0.1V of noise), the ADC will see 0.1V of noise on the magnetometer signal. The situation is just as bad when you fuse the planes together, because now the ground of the analog circuits will see the large currents from the digital circuits, and every analog circuit will see a different voltage for ground.
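For reference, here is a sketch of the MCP3008's command/response framing in Python. The byte layout is the standard single-ended framing from the MCP3008 datasheet; the 3.3 V reference value and the idea of handing `mcp3008_cmd()` to an SPI transfer call such as spidev's `xfer2()` are assumptions about this particular setup:

```python
# Sketch of the MCP3008 command/response framing (single-ended mode).
# The three bytes here are what you would hand to an SPI transfer call
# such as spidev's xfer2() on the Pi; 3.3 V is an assumed reference.

def mcp3008_cmd(channel):
    """Bytes to clock out for a single-ended read of channel 0-7."""
    if not 0 <= channel <= 7:
        raise ValueError("channel must be 0-7")
    return [0x01,                    # leading zeros, then the start bit
            (0x08 | channel) << 4,   # single-ended flag + channel select
            0x00]                    # don't-care byte while data clocks back

def mcp3008_decode(resp, vref=3.3):
    """Turn the 3 bytes clocked back into a 10-bit code and a voltage."""
    code = ((resp[1] & 0x03) << 8) | resp[2]
    return code, code * vref / 1023.0
```

With spidev this would be roughly `code, v = mcp3008_decode(spi.xfer2(mcp3008_cmd(0)))`; treat the wiring of it into your own code as a sketch rather than a drop-in driver.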
H: Is it a good idea to connect a line buffer to ground? Got a quick question. I'm dealing with the 74VHC125BQ,115 and I won't be using one of the line drivers; the rest are connected to UART. So, I'm wondering what I should do with the last one. Is it a good idea to ground its input, or should I just leave it floating? Appreciate the help. AI: You should always tie unused inputs to a valid logic level. That could be tied to GND or to the VDD voltage rail. Never leave unused inputs floating, as a floating input can cause excessive power dissipation in the IC package and introduce extra noise into the voltage / GND rails. It is common practice to use a pullup or pulldown on the inputs of unused gates. This makes it easy to use the gate if it is ever needed for a design re-work on the board. It gives an access point to connect to the gate and prevents having to cut an input loose from a voltage or ground connection.
H: SN74LS181: The comparison function, A=B, not operating I have six SN74LS181N-B circuits for arithmetic functions. I intend to utilise the A=B output, which is the comparison function. Sources have indicated that this function operates if subtraction is performed and the output word (F) is 1111. However, this does not seem to occur; I tried both situations of equality and inequality to observe if the pin is inverted; I even tried when the whole output word (F) is 1111. Additionally, I found that the carry pins (Cn, Cn+4) are inverted, and thus suspect the output and input words are inverted. However, if I use inverted inputs, it operates correctly. Do I have to utilise external inverters? AI: It's difficult to be sure that we're interpreting everything correctly without you supplying a schematic diagram (a picture is worth a thousand words etc.). However your main question has a clear explanation: I intend to utilise the A=B output [...] From the datasheet: "The A = B output is open-collector so that it can be wire-AND connected to give a comparison for more than four bits." See this extract from the datasheet - the A = B output (which I've marked in red) is open-collector, whereas the other outputs are standard "totem pole". Therefore you can't measure a meaningful change of state between low & high unless a suitable pull-up resistor is added between the A = B output and VCC. I think you are saying that the A = B output always remains low. That is expected. An open-collector output pin without a pull-up will always appear to be low (either it's actively driven low or it is undriven, which will also measure as low with a multimeter - read about open-collector outputs for more details). Try a 4.7k pull-up to VCC on the A = B pin and re-test. You should find that the A = B output is then low, unless you correctly configure the chip and supply equal input words, when it should become high to indicate equality.
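As a quick sanity check on the suggested pull-up value (the 0.4 V figure below is an assumed worst-case LS-TTL output-low voltage, not a number from this datasheet extract):

```python
# Sanity check on the suggested 4.7k pull-up for the open-collector A = B pin.
# The 0.4 V figure is an assumed worst-case LS-TTL output-low voltage.
vcc = 5.0
v_ol = 0.4
r_pullup = 4.7e3
i_sink = (vcc - v_ol) / r_pullup   # current the pin sinks when pulled low
print(round(i_sink * 1e3, 2))      # ~0.98 mA, comfortably within spec
```

Around 1 mA of sink current is well within what an LS open-collector output can handle, so 4.7k is a safe, conventional choice.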
H: Board to Board connection for RF and high-speed digital signals (I hope this is not regarded as a "shopping or buying recommendation" question - in my opinion it is not). In order to split a larger project into more tractable and flexible sub-blocks I created two PCBs (motherboard/daughterboard). This is a very common setup in the digital world (PAM2-PAM4) and there are high-end connectors for high-speed digital (Samtec, FMC). However, in my project I also need to "flawlessly" connect RF signals between the boards. In summary, these connections have to be made between the boards: 1. ground plane, power (easily solved); 2. high-speed digital, ~100 MHz, ~50 I/Os (easily solved with a Samtec connector); 3. analog clock, ~200 MHz; 4. four RF signals, up to 3 GHz. For 3-4 I would use SMA connectors - if it were not a daughterboard. However, I would prefer a more solid connector - ideally just one for 1-4. I found RF board-to-board connectors, but they tend to carry single signals. Even if I could get an RF connector with multiple signals, physical alignment together with an FMC connector for the digital lines is painful. So I am really looking for a board-to-board connector solution that faithfully connects 1-4. If I can use high-speed digital connectors also for the RF signals - are there any issues or disadvantages using them compared to SMA, e.g. insertion loss, return loss/matching, isolation? Or stated differently: Why are these connectors usually not used for carrying RF signals? AI: The same connectors that can handle multi-gigabit digital signals can handle gigahertz RF. In fact, gigabit digital signals are generally more challenging to handle, since they tend to have much wider bandwidth than RF signals. However, if you think you will find any connector with 0 insertion loss, whether for RF or digital, you will end up disappointed. Why are these connectors usually not used for carrying RF signals?
Because in general RF is a much smaller part of the overall electronics industry than digital is. Because most RF applications don't need dozens of signals in a single connector.
H: Why the control hazard is worse for deeper pipelines I'm studying computer architecture and read that "control hazards are worse the deeper the pipeline is". But why is this the case? Does the branch sit deeper in the pipeline and therefore take longer to resolve? AI: If a control hazard isn't taken care of properly, the processor is forced to wait until an instruction is finished before it can move on to the next instruction, and performance is dramatically slowed down. This event usually occurs when we don't really know what the next instruction will be. It wastes time, which is why we use pipelining: so that we can do multiple things while we are waiting for an instruction to finish. The reason deeper pipelines suffer more is that the branch outcome is resolved later: on a misprediction, every instruction fetched after the branch must be flushed, so the wasted work grows roughly with the number of stages between fetch and the point where the branch resolves. From a textbook I had when I was in college studying computer architecture: "Suppose our laundry crew was given the happy task of cleaning the uniforms of a football team. Given how filthy the laundry is, we need to determine whether the detergent and water temperature setting we select are strong enough to get the uniforms clean but not so strong that the uniforms wear out sooner. In our laundry pipeline, we have to wait until the second stage to examine the dry uniform to see if we need to change the washer setup or not." But there are ways to mitigate the issue: stalling until the branch is resolved, predicting the branch taken, predicting the branch not taken, or delaying the branch (executing an instruction after the branch regardless of its outcome).
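To put rough numbers on this, here's a minimal sketch; all the figures (20% branch frequency, 10% misprediction rate, the pipeline depths) are illustrative assumptions, not from any particular CPU:

```python
# Illustrative-only numbers: effective CPI with a branch misprediction penalty
# proportional to the number of stages between fetch and branch resolution.

def effective_cpi(base_cpi, branch_frac, mispredict_rate, flush_depth):
    """Average cycles per instruction, including wasted flush cycles."""
    return base_cpi + branch_frac * mispredict_rate * flush_depth

# Same code, same predictor accuracy, only the pipeline depth differs:
shallow = effective_cpi(1.0, 0.20, 0.10, 3)   # branch resolves 3 stages in
deep    = effective_cpi(1.0, 0.20, 0.10, 15)  # branch resolves 15 stages in
print(shallow, deep)
```

The penalty term scales linearly with the flush depth, which is exactly why deeply pipelined machines invest so heavily in branch prediction.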
H: STM32F10x DMA a constant value Does anyone know if it's possible to DMA a constant value using the STM32F103? I've scrutinized the datasheet but can't see anything obvious, and nothing via Google either. I could do a single word in circular mode of course, but it would be nice to avoid the bus access. AI: Yes, this is possible. Just do not increment the source address in memory-to-memory or memory-to-peripheral mode: leave MINC (bit 7 of DMA_CCRx) cleared, and the controller will re-read the same memory location for every transfer.
H: Stacking UDN2981 ICs for more Amps Kind of a novice question but I wanted to check if I am right about something. I like the convenience of the UDN2981 source driver to provide switched power to multiple loads. However, I would like to drive loads that require about 1.5x more current than the specified limit of the chip. Would it be OK to physically stack two of these chips on top of each other to double the current capacity that I am able to deliver? I have verified that it seems to work, but I am looking for some sage advice as to whether this is truly OK to do. AI: The output stage of the UDN2981 is bipolar, so you must always fear the negative temperature coefficient of VBE – if one of the chips heats up more than the other, it will hog more current, and this results in a quick thermal runaway of the stack. So, two of them tightly stacked might be okay. Three – not, because the one in the middle inevitably gets much warmer than the outer ones. Putting balancing resistors between Vdd and Vs is possible (or on each output), but it still seems a bad idea.
H: 240 V AC to 5 V DC power supply recommended fuse Admittedly this is quite a novice question but I just wanted to make sure before I start a potential fire, as this project is destined for an enclosed space behind a gyproc wall... I picked up a Vigortronix 230 V AC to 5 V DC PSU to power a Pi Zero W and camera. I'd like to put an in-line fuse on the live wire and, after looking at the data sheet, I'm unsure what size of fuse I should be using. Which figures should one use to calculate this? Also, would it be good practice to place a capacitor on the 5 V output, or is it unnecessary - and if so, what size of cap? Lastly, would there be any other recommendations for using this type of power supply? Thanks in advance. AI: The fuse is to protect against excessive current, so you need to look at the current rating of the device. The datasheet says the module takes 70 mA continuous at 240 VAC input for 5 W output. You could use a 1 A mains fuse. It also says the inrush current could be up to 25 A, so you need a slow-blow (a.k.a. timed or time-delay) fuse. If the inrush current tends to blow the 1 A fuse, use a 3 A fuse. For maximum peace of mind, find a reputable fuse manufacturer and consult their datasheets with regard to the inrush current. Note that there are fake fuses available from online auction-type websites, so you might want to track down a reputable seller. Note that the wire used to connect it must be of at least the same current rating as the fuse, so that in a fault condition the fuse will blow cleanly instead of the wire melting messily. With regard to a capacitor (or other filtering, e.g. an inductor), check if the output specifications meet the requirements for your circuit.
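Putting numbers on the margins involved (the current figures are the datasheet values quoted in the answer):

```python
# Fuse sizing sanity check, using the figures quoted from the datasheet above.
i_continuous = 0.070    # 70 mA continuous input current at 240 VAC
i_inrush = 25.0         # up to 25 A inrush at switch-on

fuse_rating = 1.0       # the suggested 1 A slow-blow fuse
margin = fuse_rating / i_continuous    # ~14x headroom over steady-state draw
inrush_ratio = i_inrush / fuse_rating  # 25x: why a fast-blow fuse would nuisance-blow
print(round(margin, 1), inrush_ratio)
```

The 25x inrush-to-rating ratio is the reason a slow-blow type is essential here: it rides through the brief surge that would open a fast-blow fuse of the same rating.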
H: DC power supply - battery charging I want to charge a lead-acid battery (12 V, 55 Ah). I have an adjustable DC power supply (I can set the voltage and limit the amps). Do I need to put a resistor between the power supply and the battery? If so, how many ohms? AI: A resistor will not be required. Limit the current to 1/10th of the battery's Ah value, or at the very most 2/10th of the Ah value; anything above that is strictly not recommended for a lead-acid battery. Then let it charge for 10 to 12 hours minimum, remove it from the supply, and you will find it optimally charged.
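The arithmetic for this particular battery works out as:

```python
# Charging figures for the 12 V, 55 Ah battery, following the C/10 rule above.
capacity_ah = 55.0
i_recommended = capacity_ah / 10   # 5.5 A: the 1/10th-of-Ah limit
i_absolute_max = capacity_ah / 5   # 11 A: the 2/10th upper bound
hours_ideal = capacity_ah / i_recommended  # 10 h to replace full capacity,
                                           # ignoring charge inefficiency
print(i_recommended, i_absolute_max, hours_ideal)
```

The 10-12 hour figure in the answer lines up with this: 10 hours at C/10 in the ideal case, plus a little extra to cover charging losses.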
H: Determining the voltage of a network branch [NOTE: This is not a homework problem and I am not asking anyone to solve it.] Recently I have been simulating some basic circuit analysis problems in a simulator to get a better feel for the topic. Today I came across this problem. The trouble I faced is that I was unable to determine the voltage of the highlighted branch by inspection, although it can be found easily using KCL. Is there a shortcut to this problem? Am I missing any concepts? I am really confused about which voltage source controls the branch voltage. I simulated the circuit; my observation was that the voltage of the branch equals voltage source A, and that it is not affected when B is increased. So my questions are: 1) Why doesn't the voltage of the branch change when B is increased? 2) Does voltage source A really control the branch voltage, as the simulator shows? 3) Am I missing any concepts? I can solve this problem by applying KCL, but can the branch voltage be determined by inspection? (According to the sim it equals the voltage of A - is that observation reliable? It would be useless if the network changed completely, so I need a concrete explanation.) Simulation with the ground, matching the first image: AI: "@Andyaka do look at the new image i posted" - Yes, that is what you should get (my simulation). If you analyse it you will see that the V2 branch takes zero current. All your other errors occurred because you had used the wrong voltage source for V1, had forgotten to add a 0 volt reference point, and were using ohms instead of kilohms for the middle resistor.
H: Why can a MOSFET be used as an amplifier? This is a fundamental question which I am struggling to answer after familiarizing myself with the MOSFET and analyzing different circuits containing MOSFETs. In terms of the gate, drain and source, what enables a MOSFET to be used as an amplifier? I know that transconductance relates the output current to the input voltage, and that the voltage gain of a MOSFET is directly proportional to the transconductance and to the value of the drain resistor. Gradually increasing the positive gate-source voltage VGS, the field effect begins to enhance the conductivity of the channel region, and there comes a point where the channel starts to conduct. We can control how the MOSFET operates by "enhancing" its conductive channel between the source and drain regions. However, I am unable to form a logical analysis of what really goes on between the gate, drain and source to enable a MOSFET to be used as an amplifier. AI: Consider this random picture of a MOSFET characteristic I took off the internet: - This is the bare-bones characteristic of a MOSFET used in a very simple circuit like this: - You set a gate-source voltage (\$V_{GS}\$) and plot what the drain current is for various values of \$V_{DS}\$. Now consider what happens if you put a resistor in series with the drain and used a fixed 40 volt power supply feeding the resistor and drain. If the MOSFET is fully off there will be no current through that drain resistor and you get point A (below). If the drain resistance were 10 ohms you would get 20 volts across it when 2 amps passed. This allows you to draw a load line on the first picture: - So, for this particular set-up with a 10 ohm drain resistor (see load line in red) and a \$V_{GS}\$ of 5 volts, the \$V_{DS}\$ would be about 23 volts and, for a \$V_{GS}\$ of 6 volts, \$V_{DS}\$ would be about 13 volts.
Can you see that if you had an input signal that was a sinewave going between 5 volts (bottom of sine) and 6 volts (top of sine), the output would also be a sinewave, changing between a trough of 13 volts and a peak of 23 volts? That is a signal voltage gain of 10. Ignoring DC offsets and just concentrating on the output signal, it has an RMS voltage of 3.536 volts and an RMS current of 0.354 amps, i.e. a power output of about 1.25 watts. It's not an amazing performance but you have generated an output signal power of 1.25 watts by varying the input voltage at the gate by 1 volt p-p. The input signal power needed to do this is a few tens of microwatts. You have made a massive power gain and this is the important thing for such things as audio power amplifiers.
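The load-line arithmetic can be checked numerically; the operating points below are the values read off the curves, and the signal power works out to about 1.25 W:

```python
# Checking the load-line arithmetic for the common-source stage above.
v_supply, r_drain = 40.0, 10.0

# Drain voltages read off the curves for Vgs = 5 V and 6 V:
vds_5, vds_6 = 23.0, 13.0

# Corresponding drain currents from the load line:
id_5 = (v_supply - vds_5) / r_drain   # 1.7 A
id_6 = (v_supply - vds_6) / r_drain   # 2.7 A

gain = (vds_5 - vds_6) / (6.0 - 5.0)  # 10 V of output swing per 1 V at the gate
v_rms = (vds_5 - vds_6) / 2 / 2**0.5  # ~3.536 V RMS signal voltage
i_rms = (id_6 - id_5) / 2 / 2**0.5    # ~0.354 A RMS signal current
p_out = v_rms * i_rms                 # ~1.25 W of signal power
print(gain, round(v_rms, 3), round(i_rms, 3), round(p_out, 2))
```

Against the tens of microwatts needed to drive the gate, that ~1.25 W of output signal represents a power gain of several tens of thousands.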
H: Driving a filament I have a filament that needs 1 V, and is rated at 50 mA at 1 V. I would like to use a 5 V supply for it, and I am wondering how to approach this. 50 mA at 1 V means that the resistance is 20 Ω, as per Ohm's law. But it changes, does it not, as it heats up? 1.) I thought about adding an 80 Ω resistor in series with it, so it acts as a voltage divider, giving 4 V over the 80 Ω resistor and 1 V over the filament. However, that would be 20% efficient, as most power would be dissipated across the 80 Ω resistor. 2.) I thought about a buck converter, but I am not sure if that is a good option. My question is: What is the best way to go down from 5 V to 1 V in this case, when the load current is so small? The filament is in a numitron tube. AI: What to do really depends on your power budget and the physical size of the circuit. The brute force approach is to linearly drop the 5 V to 1 V. You say the current is 50 mA, so that will take a total of 250 mW. 50 mW of that will heat the filament, and the other 200 mW are wasted as heat. If 200 mW of wasted power and extra heat to get rid of isn't a big deal, then an 80 Ω or so ½ W resistor is all it takes. That is certainly cheap and simple. If this is battery operated, or an extra 200 mW of power usage and/or heat to get rid of matters, then use a switcher. Depending on how the filament is connected inside the tube and what the tube is used for, you might be able to drive the filament directly with 5 V pulses. You can easily switch fast enough so that the filament temperature effectively doesn't change over the on/off parts of each pulse period. You can therefore consider the filament a fixed resistor once it gets up to temperature. You say it is supposed to draw 50 mA at 1 V, so it looks like 20 Ω when hot. Note that power into a resistor is proportional to the square of the voltage across it. A steady 5 V will drive the same resistor with 25 times more power than 1 V. The duty cycle with unfiltered 5 V pulses should therefore be 1/25.
For example, 1 µs at 5 V and 24 µs off repeatedly should drive the filament nicely. At that 40 kHz switching speed, the filament temperature will be quite constant. Another thing to consider is driving multiple filaments in series if you are using multiple tubes anyway. This may be inappropriate if the filament is also the cathode. It depends on details of the tube and how you use it. Perhaps having the cathodes of 5 tubes varying over the 0-5 V range doesn't matter in your application.
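The duty-cycle arithmetic from the answer, as a quick sketch:

```python
# Duty cycle for running the 1 V / 50 mA filament from unfiltered 5 V pulses.
v_rated, v_supply = 1.0, 5.0
duty = (v_rated / v_supply) ** 2   # power goes as V^2, so (1/5)^2 = 1/25
f_sw = 40e3                         # 40 kHz, as suggested above
period = 1.0 / f_sw                 # 25 us
t_on = duty * period                # 1 us on, 24 us off
print(duty, period * 1e6, t_on * 1e6)
```

Note the duty cycle is set by the square of the voltage ratio, not the ratio itself; driving the filament 1/5th of the time at 5 V would overpower it by a factor of five.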