H: How does the range of Bluetooth increase by adding an antenna? I have seen many tutorials on how to increase the range of an HC-06 Bluetooth module. They all involve cutting the on-board antenna and adding an external one (a piece of wire or a 2.4 GHz antenna). I was wondering how this affects the Bluetooth module, for example its power consumption and frequency, and what the relation is between the length of the wire and the transmission range. Can anybody help? Thanks in advance, AI: Looking at an HC-06 module: what seems "odd" to me is the shape of the antenna. For 2.4 GHz I do not expect that a meandering shape like this (the gold-colored part) is any good. My guess is that it was "designed" by someone who doesn't have a clue about antennas at 2.4 GHz. Other 2.4 GHz antennas are often F-shaped or T-shaped. I saw another meander shape: but note how it has two connection points; part of it is F-shaped. My conclusion is that not much thought was put into the HC-06 module's antenna, and that probably makes it a very poor antenna. So there's a lot of room for improvement; anything even remotely better will improve performance. Power-consumption wise you would not notice the difference. The actual RF power is quite low and insignificant compared to what the rest of the circuit (modem etc.) uses. Also, with a bad antenna most of the RF power reflects back into the chip and is lost. A good antenna will radiate that RF power, making things more efficient.
H: Voltage buffer with ideal Op-Amp (positive feedback) - Instability I know that circuits which employ negative feedback are stable, whereas those which employ positive feedback are not stable. How can I prove that a voltage buffer (ideal op-amp) which uses positive feedback is not stable? I know only the dual case ( \$ A=+\infty \$ because the op-amp is ideal): $$v_u=A \, v_{in}=A (v^+ - v^-) = A \, v_i - A \, v_u$$ $$v_u = \frac{A}{1+A} v_i = v_i$$ If I consider: $$v_u=A \, v_{in}=A (v^+ - v^-) = A \, v_u - A \, v_i$$ $$v_u = -\frac{A}{1-A} v_i = v_i$$ The last result should be wrong. Thank you so much for your time. AI: I know that the circuits which employ negative feedback are stable Any feedback system is potentially unstable. What might seem like negative feedback at low frequencies can easily turn into positive feedback at higher frequencies. whereas those which employ positive feedback are not stable. Also incorrect. A comparator that uses hysteresis can be regarded as stable once one or the other threshold has been exceeded. How can I prove that a voltage buffer (ideal op-amp) which uses positive feedback is not stable? You can derive a stable operating point for an op-amp with positive feedback but, if you introduce the slightest bit of noise as an influence, the circuit becomes a massive noise amplifier. If you then model input offset voltage and bias currents, and how those change with temperature, you get an unpredictable circuit.
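A crude way to see the instability numerically is to give the op-amp some dynamics. The sketch below is an assumption on my part (not from the question): it models the op-amp as a single-pole system whose output relaxes toward \$A(v^+ - v^-)\$. With negative feedback the buffer settles at \$v_i\$; with positive feedback the same step input diverges.

```python
# Single-pole op-amp model (a sketch, not a rigorous proof):
# the output moves toward A*(v_plus - v_minus) with time constant tau.
A = 1e5       # finite open-loop gain
tau = 1e-3    # op-amp time constant, seconds
dt = 1e-9     # simulation step, seconds
v_in = 1.0    # input step, volts

def simulate(positive_feedback, steps=200_000):
    v_out = 0.0
    for _ in range(steps):
        if positive_feedback:
            diff = v_out - v_in   # buffer wired to the + input
        else:
            diff = v_in - v_out   # normal voltage follower
        v_out += dt / tau * (A * diff - v_out)
        if abs(v_out) > 1e6:      # runaway: report divergence
            return float('inf')
    return v_out

print(simulate(False))  # settles very close to 1.0 V
print(simulate(True))   # diverges
```

Any noise or offset plays the role of the 1 V step here: with positive feedback it is amplified without bound instead of being nulled out.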
H: Large differences between datasheets for the same part number I'm working with the MIC4605-2 half-bridge MOSFET gate driver. The chip was originally made by Micrel. Microchip has acquired Micrel and taken over their product line, including the MIC4605-2. There are now 2 datasheets for the same part: the original one from Micrel, and the newer one from Microchip. There are some large differences between them regarding voltage on the HS node. Micrel: Microchip: Going from -1 V continuous and -5 V repetitive transient in the Micrel version to -0.3 V continuous and -0.7 V repetitive transient for Microchip. Those differences have serious design implications. How can there be such a big difference for the same part? Which version should be trusted? AI: You should trust both versions of the datasheet. If you are 100% sure you have the older part (it might have a "Micrel" logo on it), then you should trust the Micrel datasheet. Since the Microchip datasheet is more restrictive, it should be safe to rely on that datasheet for both parts. So if you do not know which version of the chip you have, use the newer datasheet. The -1 V and -5 V ratings mean that Micrel did something in their design with the ESD protection on the HS pin that allowed it to get a higher rating. The Microchip rating is more "normal" in that sense. My guess is that Microchip decided that the -1 V and -5 V ratings are not needed and tightened the restrictions on the HS pin. You can rest assured that semiconductor manufacturers are very careful with such changes. They don't want to drive the customers (other companies) buying this chip to a competitor because of changes like this. So you may think it is a big change while in reality and practice it probably is not.
H: Breakout board for EFM8BB1 I am making a breakout board for the Silabs EFM8BB1 (Busy Bee). I have made the schematic but am unsure about the C2 debug interface. Can someone guide me please? AI: You seem to be following the configuration in the application note https://www.silabs.com/documents/public/application-notes/AN124.pdf that allows you to share the C2 debug pins with RESET and GPIO. That app note is missing the debugger part of the connection, as you can see in the pinout of the debug adapter and the eval boards: https://www.silabs.com/community/mcu/8-bit/knowledge-base.entry.html/2017/03/22/using_the_usb_debug-XeTq (Note that I didn't add the red highlights; those mark the simple connection without pin sharing.) Pins 5 and 6 of the table are the pin-share connections, so they aren't actually the same pins as your schematic shows. Following your original schematic: The node after R1 (the second D_C2CKC label) should go to a header for pin 5, C2CKC_share. The node after R2 (the second D_C2D label) should go to a header for pin 6, C2D_share. So the pins of your J3 header become: 1 Empty, 2 D-GND, 3 D-C2D, 4 D-C2CK, 5 D-C2CK_SHARE, 6 D-C2D_SHARE.
H: Problems with DC analysis of a PMOS circuit I'm beginning with electronics and I've picked up the book Microelectronics by Donald A. Neamen. I'm stuck at a simple example of DC analysis for this PMOS circuit. simulate this circuit – Schematic created using CircuitLab I have to find: $$I_D, V_{SG}, V_{SD}$$ The parameters given are: $$K_P = 120 \, \mathrm{\mu A/V^2}, V_{TP} = -0.3V$$ The correct results are: $$V_{SG} = 1.631V, I_D = 0.2126mA, V_{SD} = 3.295V$$ This is how my calculations go: $$V_G = (R_2/(R_1+R_2))(V_+ - V_-)+V_- = 0.33V$$ I assumed that the transistor is in saturation, therefore: $$I_D = K_P(V_{SG}+V_{TP})^2$$ Then I used the upper right loop: $$V_+ - V_G = R_S I_D + V_{SG}$$ From these two equations, I used a handheld computer to solve for \$V_{SG}\$ and got two results, and neither of them is right... $$V_{SG1} = -2.03V, V_{SG2} = 1.24V$$ What am I doing wrong here? The correct \$V_{SG}\$ is ~1.6V. Thank you! EDIT: Here are the results from the book. AI: Something is wrong here. The range between the two supplies is 4.4V, but with a 212.6 μA current, the drop across the source and drain resistors is: $$212.6\mathrm{\mu A} \cdot (6000\mathrm{\Omega} + 42000\mathrm{\Omega}) = 10.2\mathrm{V}$$ That's not even including \$V_{SD}\$. Plugging your circuit into CircuitLab gives: $$V_{SG} = 1.43\mathrm{V}, V_{SD} = 887.5\mathrm{mV}, I_D = 73.18\mathrm{\mu A}$$ which is not in saturation and is not anything close to the book's answer. I suspect this is an error introduced when the textbook was updated for a new edition. Unfortunately, there doesn't seem to be any errata listed on the publisher's web site.
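The inconsistency pointed out in the answer is easy to check with a few lines (resistor values taken from the answer's arithmetic):

```python
# Drop across the source and drain resistors at the book's stated I_D.
I_D = 212.6e-6            # book's drain current, A
R_S, R_D = 6_000, 42_000  # ohms, from the answer's arithmetic
drop = I_D * (R_S + R_D)
print(f"{drop:.2f} V")    # about 10.20 V, far more than the 4.4 V supply span
```

Since this drop alone exceeds the total supply range before \$V_{SD}\$ is even counted, the book's numbers cannot all be consistent with the printed circuit.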
H: Driving Stepper Motors I am currently working on a project that involves 6x NEMA 23 stepper motors, rated at 2.8 A, 2.5 V each. I have been doing extensive research on what drivers and power supply I should use. Based on that, I have decided to use a 10 A, 12 V switching power supply, and L298N stepper motor drivers. I just require confirmation that the parts I have chosen will all work properly together - any guidance would be appreciated. The data sheet for the motor: RS Pro Hybrid, Permanent Magnet Stepper Motor 0.9°, 1.26 N·m, 2.5 V, 2.8 A, 4 Wires The data sheet for the driver: L298 DUAL FULL-BRIDGE DRIVER AI: Your choices will certainly work, but only if you are careful. If you don't pay attention to the details, you will cause the power supply to go into current limit, with bad consequences for system performance. First, your choice of driver is fine. With a 4 A/phase capability, you should have no problems on that count. Now, about the motors. The first thing you need to realize is that the 2.8 A limit applies to each winding, so it's perfectly possible to draw 5.6 A per motor in normal operation. Since you will be running as many as 3 motors at a time, your current draw could be as high as 16.8 amps. Is this a problem? Not necessarily. The question comes down to how much torque (and therefore current) you need. You must have noticed that you cannot apply 12 volts to a winding for any length of time, since this would produce 12 V / 0.9 ohms per phase, or about 13.3 amps once the inductive effects have settled out. Instead, you'll need to create (or buy, they're cheap) a constant-current driver. Commercial units use PWM controlled by feedback from a current sense resistor (which you'll notice on the L298 data sheet). In the process of building or using such a circuit, you can set the current to pretty much any level you want. So, if you set your current levels to 1.5 A per phase, you'll draw 3 amps per motor, and 3 motors will only draw 9 amps. Of course, this will only give you about half the torque you expect. How much do you need?
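The current budget described in the answer can be sketched in a few lines (supply and set-point values as discussed above):

```python
# Why 12 V cannot be applied directly to a winding, and why a regulated
# 1.5 A/phase keeps three running motors inside a 10 A supply.
V_supply = 12.0
R_phase = 0.9                      # winding resistance, ohms (2.5 V / 2.8 A)
I_unregulated = V_supply / R_phase
print(f"{I_unregulated:.1f} A per winding")  # about 13.3 A: far too much

I_set = 1.5            # regulated current per phase, A
phases_per_motor = 2
motors_running = 3
I_total = I_set * phases_per_motor * motors_running
print(f"{I_total:.1f} A total")    # 9.0 A, within the 10 A supply
```

Raising the set point back toward the rated 2.8 A/phase would push the three-motor total to 16.8 A, well beyond the chosen supply, which is the trade-off the answer warns about.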
H: Can someone explain this op amp "tone control"? It's supposed to be part of a guitar effect pedal. Why is there positive feedback besides the negative feedback? I was reading about op amps, and from what I understand positive feedback is very rarely used. How would this affect the signal? AI: Quite often the way to analyse these sorts of things is to ask yourself what happens at the extremes (of control position and frequency). The nice thing about this approach is that at low frequency (whatever that means) caps are approximately open circuits and at high frequency they are approximately short circuits. OK, the 1k and 220nF at the left are a more or less fixed passive lowpass, so forget them, not that interesting. Now with the pot slider all the way to the left: it just adds 220nF in series with 220 ohms across the existing 220nF, so the thing gets a bit more lowpassy (with a bit of a shelf)... With the slider all the way to the right we get something more interesting: now at low frequency we model the cap as an open circuit, so by the usual rules of the opamp (the output is driven such that the two inputs are pretty much equal) we can clearly see that we have a gain of 1 from the opamp stage. At high frequency however the caps appear short circuit (ignoring that passive input LPF for the moment!), so the opamp now has a gain of about 1 + (1000/220) = ~5.5 times; we can ignore the 20K pot because it is swamped by the 220R resistor. So we have a variable treble boost of about 14dB preceded by a passive lowpass filter. As to where the action happens, just figure the time constants (1K & 220n, 220R & 220n). It is amazing how often this sort of grossly simplified analysis is all you really need, and it is a ton quicker than disappearing into a mess of S-plane bullshit. Hope this helps.
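The gain figures quoted in the answer are quick to verify (component values taken from the answer):

```python
import math

# Gain extremes with the pot slider fully to the right.
gain_lf = 1.0                 # cap open at low frequency: unity-gain follower
gain_hf = 1 + 1000 / 220      # cap shorted at high frequency
boost_db = 20 * math.log10(gain_hf)
print(f"{gain_hf:.2f}x, {boost_db:.1f} dB")  # about 5.55x, roughly 15 dB
```

The corner frequencies follow from the time constants mentioned above, e.g. \$f = 1/(2\pi \cdot 220\,\Omega \cdot 220\,\mathrm{nF}) \approx 3.3\,\mathrm{kHz}\$ for where the boost flattens out.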
H: Level shifter for multiple voltage levels I found basic level shifters for two different voltages, but I have 3 different voltages on my I2C bus. I just wanted to verify that the schematic below will work correctly. I think it will, but I am not sure whether the different levels will influence the operation. I would appreciate a confirmation (or, if it does not work, a tip on how to solve this). AI: This particular circuit ONLY works if the signals are open-collector or open-drain, otherwise you have this.... simulate this circuit – Schematic created using CircuitLab Since you are using the circuit for I2C, that should not be an issue here provided the GPIOs are initially configured correctly at all three points.
H: Does the battery determine the amount of current flowing in the circuit? Suppose a 9 V battery is connected to a load which draws 2 amps of current. How does the battery determine that the load requires this much current? I mean, if the battery pushed out about 3 amps, it would just blow the load, so how does the battery supply exactly 2 amps and not just any other value? AI: Does the battery determine the amount of current flowing in the circuit? Well... yes and no. The battery will try and give the load whatever it asks for, not the other way round. This is true for any voltage source, not just batteries (current sources will try to push a set current through a circuit, but voltage sources will just sit there and do as they're told). If the load wants 16.73 microamps, that's what it will get; if it wants 500mA, that's what it will get. All normal batteries, along with almost all power supplies you will normally encounter (except for LED drivers), try only to keep their output at a constant voltage (even if you're drawing 0A, although some ancient PC power supplies go a bit funny with no load). The problems start when heavy loads are connected to light-duty sources: just like how I'll struggle to lift an 80kg railway sleeper, small power supplies will struggle to maintain their designed output voltage under heavy load. Everything that isn't superconducting has resistance, everything. Small things generally have more, and big currents through high resistances turn into big voltage drops (and lots of internal heating). So if you try and pull lots of current from a small battery, you might find that its output voltage drops right down and keeps dropping until either the load turns off or it stops trying to draw full current. Electronics are not that tough; keep pushing them and eventually they just give up trying (it'll probably get pretty warm too).
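For a plain resistive load this is just Ohm's law; a minimal sketch (the 4.5 Ω load is a made-up example, not from the question):

```python
# The load, not the battery, sets the current: I = V / R.
V_battery = 9.0
R_load = 4.5            # hypothetical load resistance, ohms
I = V_battery / R_load
print(f"{I:.1f} A")     # 2.0 A: exactly what this load asks for, no more
```

A 13.5 Ω load on the same battery would draw only 0.67 A; the battery never "decides" to push 3 A into it.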
H: Video scaling vs display scaling I am going to start a project on FPGA and "graphical screen" scaling. I am still in the reading-the-material phase. Are there any differences between video scaling and display (e.g. Windows desktop) scaling? Video scaling uses bilinear, bicubic, etc. interpolation, and IEEE has a lot of articles related to hardware implementation of video scaling, but I have not found anything related to display (desktop) scaling. If I have multiple displays and want to share the same computer desktop, but the displays use different ports (VGA, HDMI, DisplayPort) and are of different sizes: What kind of scaling algorithm do they (Microsoft, Apple, etc.) use as standard to adapt an application for each display/screen? When I want to resize a screen from e.g. 720x480 to 1280x720, what type of standard scaling algorithms are used (bilinear, bicubic)? AI: Scaling can be done by the application, monitor or GPU driver as permitted by the operating system. http://tanalin.com/en/articles/lossless-scaling/
H: ESD and Electrolytics Consider the simple circuit schematic below: 1) Would a high-voltage (say 1 kV-25 kV) human-body ESD discharge event (1 ns rise time, 100 ns total) at OUT have a negative effect on (damage or destroy) the electrolytic capacitor when in the polarity as shown? 2) Do ESD events in general cause issues when using electrolytics with the polarity such that the + plate faces the high-voltage spike (positive polarity)? Or does the spike need to be clamped to approximately the capacitor's rated voltage for proper reliability? I haven't been able to find much information about this. AI: During ESD testing, pulses of both positive and negative polarity are used, so polarity matters very little. But the charge reservoir used for ESD testing is only 100 pF (or 200 pF depending on the model). That is too small to cause much of a voltage change on a 100 µF cap. Basically it is a 1,000,000:1 voltage divider, so a 10 kV discharge will only cause a 10 mV voltage change on the cap. If V1 is actually an audio output DAC, then V1 might get damaged. But now we need to consider whether the electrolytic cap can actually be modeled as a cap when it comes to an ESD pulse. There may be substantial parasitic inductance, so it may even help protect V1 for all I know. It all gets messy. That is why people tend to just slap down an ESD diode and move on. Series resistors and ferrites also help attenuate the ESD pulse, and are seldom, if ever, damaged by the pulse. So my philosophy has always been to put resistors or ferrites close to the ingress point of the ESD pulse, and put shunt protection close to the IC or transistor being protected. The more series elements you put between the incoming pulse and the silicon, the better. For signals with no high-frequency content, a simple RC filter may provide great protection from ESD. Getting back to your circuit: if you wanted to, you could move R2 closer to "OUT," and put a small ESD diode (TVS diode) in shunt between R2 and C1. This would protect V1. Since you haven't told us what V1 is, I don't know whether it needs protection or not. When trying to filter high frequencies, we tend to reach for ferrite beads, but a 10k SMT resistor has higher impedance from DC to microwave than any ferrite bead, and they are cheaper, too.
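The "1,000,000:1 divider" figure in the answer is just charge sharing between the two capacitances; a quick sketch:

```python
# Charge sharing: a 100 pF ESD reservoir dumped into a 100 uF electrolytic.
C_esd = 100e-12    # human-body-model reservoir, farads
C_bulk = 100e-6    # electrolytic cap, farads
V_esd = 10_000.0   # discharge voltage, volts
dV = V_esd * C_esd / (C_esd + C_bulk)
print(f"{dV * 1e3:.2f} mV")  # about 10 mV change on the electrolytic
```

This ignores series resistance and inductance (which only attenuate things further), so it is an upper bound on how far the pulse can move the cap's voltage.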
H: LED bulb redesign As a follow-up to this question, I decided that it is time to see how I should reuse failed LED bulbs. There are 5 of them now, in different conditions - one does not start at all, some start flickering 15 times per second, others start flickering 2 times per second. But those that flicker eventually, in a minute or two, start lighting continuously. Obviously the issue was overheating, and as people responding to the above-mentioned question proposed, the cause is dried-up caps; however, I have not had a chance to test that so far. Now the background. My chandelier has 5 lamps, each looking down, without ventilation holes in their bodies, so the heat produced by the bulbs has trouble getting out (the cause of the bulb failures, I guess). As I have 5 bulbs now, I can try converting the whole chandelier to something other than bulbs powered directly from the mains. What I want to do: the bulbs have LEDs and this board in their assembly. I cannot find suitable 2.2 µF/400 V caps that fit the board (the original ones are 6x12 size, and I suspect they are fake because all caps of this specification start at 8x12) and the inside of the bulb's body, so my idea is to move these boards out of the bulbs and into the part of the chandelier where the AC wiring is done, and wire the lamp sockets to the output of these boards rather than to mains AC. Thus these buck converters will be located at some central point, and the LEDs will be located in the lamp sockets. However, I see several potential issues here, but cannot evaluate them. As I understand it, the bulb's controller has thermal protection, and being within the same assembly, it can sense the temperature of the converter/nearby resistors as well as the temperature of the LEDs. If I separate the LEDs and the controller, the latter will not sense the temperature of the LEDs any more. Could this be an issue? Long wires (~50 cm) from the buck converter output to the LEDs. I cannot find anything in the datasheet for the chip stating that the load must be close to the converter. Could this be an issue for the operation of the converter? Update: I caught another issue with my design. The buck converters used in the Gauss LED bulbs output 85 V under no-load conditions - in my case, when a bulb with LEDs is removed from the socket - and the output capacitor, rated 50 V, heats up and will most probably explode. Thus I either find a way to limit the no-load voltage to ~40 V, or scrap my design completely :) Update: here are the results of the project. Board from the bottom, power routing. Board from the top. And within the chandelier assembly. The bulbs (with LEDs only) are still heating, but I would say they are at ~80 °C. The central hub heats up a little, but can still be touched by hand. The design still needs to be tested for durability, though. AI: No issue. At worst the LEDs will overheat and die prematurely. But that shouldn't happen if proper voltage and current regulation is applied by the converters/supplies. Long wire: increase the gauge (cross-section), e.g. to 1 mm² or 1.5 mm². Thin wires will heat up; thicker wire won't. Solder them. HTH
H: MPU-9150 (IMU) AD0 address change I have an MPU-9150 which uses I2C address 0x68 by default, but I'd like it to use address 0x69. This is possible by desoldering the 3-way AD0 "jumper" on the breakout board. By default there are 3 solder pads, and pads 2+3 are connected, so that pin AD0 defaults to ground. So as I understand it, I need to desolder the connection between pads 2+3, and then I can connect VCC to pin AD0, which will switch the I2C address to 0x69. I tried desoldering, but it looks like this won't break the connection even if I get rid of all the solder. It looks like it'll still be connected due to the little "circle" and the edges around it (see images). I haven't done such tiny connections before either - so any tips on how best to proceed? Also, I'd like to understand better how this 3-way jumper works. Assuming pads 2+3 are desoldered, can I manually select which address is going to be used by putting "high" or "low" on pin AD0 (or is it then going to be fixed to 0x69)? ... and not connecting anything to the pin would probably be "bad"? What happens if I solder AD0 jumper pads 1+2 together? Will it then always use the higher I2C address (0x69)? For future reference: following the suggestions of SamGibson - cutting the connection between pads 2+3 using a hobby knife and loupe, then connecting 3.3 V to AD0 - it now indeed reports I2C address 0x69, see image: AI: (For consistency with your numbering, I'll count the 3 pads of the AD0 jumper starting with pad 1 on the left-hand side.) as I understand it I need to "desolder" the connection from 2+3 and then I can connect VCC to pin AD0 which will switch the address to 0x69. I tried desoldering, but it looks like this won't "break" the connection even if I get rid of all the solder, it looks like it'll be still connected due to the little "circle" and the edges around it Agreed.
That breakout board for the MPU9150 seems to have a design defect and some kind of square-ish through-hole pad is bridging the right-hand pair (pads 2-3) of the 3-pad AD0 jumper. I've marked it in green on this enlargement from your image, for readers to see what I'm referring to: People with the necessary experience, tools & perhaps some magnification, could bypass those pads, cut the PCB track leading from pad 2 to the AD0 pin on the device, and connect the track however is required (logic high or low). If you want to use those 3 pads of the AD0 jumper, you'll need to investigate the purpose of that square-ish through-hole pad - does it have any connections, top or bottom, except to AD0 jumper pads 2-3? I expect the square-ish through-hole pad has a connection to Gnd. If so, you'll need to use a sharp knife (scalpel) to cut away the copper from that square-ish pad which connects it to pad 2 on the AD0 jumper, leaving it only connected to pad 3. What happens if I'd solder AD0-jumper 1+2 together? Do not connect pads 1-2 of the AD0 jumper until pads 2-3 are definitely disconnected, otherwise you will short pad 1 (VLOGIC) to pad 3 (Gnd). You should really get support for this PCB problem from the vendor of the breakout board. Unfortunately a low-cost supplier may not care, and in that case, you would have the challenge of solving this yourself. After the incorrect copper "bridge" has been removed between pads 2-3, then the following options are possible: If you want to use the AD0 pin on the 0.1" header to select the I2C address, then the AD0 solder jumper must not connect the centre pad (pad 2) to either of the other pads. Alternatively, if you want to fix the I2C address on the board without using an external connection to the AD0 pin on the 0.1" header, then: connecting AD0 solder jumper pads 1+2 (pad 3 unconnected) sets AD0 high (I2C address 0x69), or connecting AD0 solder jumper pads 2+3 (pad 1 unconnected) sets AD0 low (I2C address 0x68)
H: Is there an available differential impedance calculation for this stripline configuration? I am trying to calculate the differential impedance of a differential pair in a 4-layer PCB stacked as follows: Layer 1: low-speed signal. Layer 2: power plane. Layer 3: differential pair. Layer 4: ground plane. I am currently using the Saturn PCB Toolkit; may I know which of these settings I should consider: Edge Cpld Ext (Microstrip), Edge Cpld Int Sym (Edge-Coupled Symmetrical Stripline), Edge Cpld Int Asym (Edge-Coupled Asymmetrical Stripline), Edge Cpld Embed, Broad Cpld Shld, Broad Cpld NShld. Thank you. SF AI: If your only high-speed signals are on layer 3, then you choose stripline. No asymmetric layer spacing is stated, so there is no need for the asymmetrical variant. Here are some web tools: https://www.eeweb.com/tools/symmetric-stripline-impedance https://www.eeweb.com/tools/edge-coupled-stripline-impedance https://www.eeweb.com/tools/edge-coupled-microstrip-impedance https://www.eeweb.com/tools/broadside-coupled-stripline-impedance
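Tools like these implement closed-form approximations. For reference, here is a sketch of the commonly quoted IPC-2141 formula for the single-ended impedance of a symmetric stripline (the starting point on which edge-coupled differential calculators build); the dimensions below are illustrative, not the asker's stackup:

```python
import math

# IPC-2141 approximation for symmetric stripline impedance.
# Roughly valid for w/(b - t) < 0.35 and t/b < 0.25.
def stripline_z0(er, b, w, t):
    """er: dielectric constant, b: plane-to-plane spacing,
    w: trace width, t: trace thickness (all in the same length units)."""
    return 60 / math.sqrt(er) * math.log(4 * b / (0.67 * math.pi * (0.8 * w + t)))

# Illustrative FR-4 example: er=4.2, 0.5 mm plane spacing,
# 0.15 mm trace, 35 um (1 oz) copper.
z0 = stripline_z0(er=4.2, b=0.5, w=0.15, t=0.035)
print(f"{z0:.1f} ohms")
```

Coupling between the two traces of the pair lowers the odd-mode impedance below this value, which is why a dedicated edge-coupled calculator (or a field solver) is still needed for the final differential number.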
H: Damages to ICs done when cleaning boards with IPA I have designed a PCB and soldered some SMD components on to it. When I check it, it doesn't seem to work. It was my very first SMD soldering experience and I think I used too much hot air. While investigating the issue, I found out a moisture level factor in ICs. So I read this article ICs with humidity or moisture sensitivity - bake recommendations and found out that water vapors can reside damaging the ICs from inside. I had the ICs weeks before soldering and they were also the very first time I experienced a TQFP44 brand new IC (PIC24EP256GP204). I was careful enough not to touch the ICs by hand but to use an anti-static tweezer. Now all that is done and the ICs not working, I found out about this MSL(Moisture sensitive level) factor. My problem is, when we're cleaning the boards with IPA, prior to soldering, does that affect the MSL as IPA could go inside the IC package damaging it when heat is applied? (Popcorn effect?) Thank you very much in advance! AI: I ran a web search using keywords like "IPA moisture issues" or "IPA moisture reflow", etc. and came across various blogs, company websites, etc. that touch on the use of IPA to clean PCBs prior to (and after) board assembly. From the articles I read, IPA isn't that great at removing nonpolar residues (including various oils, grease, and other hydrocarbon residues), so its usefulness as a board cleaning agent prior to board assembly is somewhat questionable. Other articles explain that PCBs, after being cleaned (e.g., with IPA or detergents), should be baked to remove residual water moisture prior to board assembly. FWIW, Link Hamson's website (linkhamson.com) currently has a page titled Moisture Sensitive Device Handling with a subsection titled "COMPONENT BAKING OVENS" that identifies standard IPC-1601 Printed Board Handling and Storage Guidelines as well as some recommended bake times and temperatures for PCBs prior to board assembly. 
An article published by Michael Watkins of Chemtronics titled IPA as Universal Cleaner: Advantages & Disadvantages discusses IPA's hygroscopic properties, and it mentions how IPA exposed to air absorbs moisture until it reaches equilibrium at 65% IPA and 35% water. Yikes! Furthermore, after the IPA evaporates the water stays behind on the board. In some cases there is sufficient water residue to cause serious problems both before and after board assembly. For example, after cleaning an assembled board with IPA, water residue can get trapped between the leads/lands of fine-pitch packages and can stay trapped there for days if it's not removed by baking. This water residue can cause corrosion (imagine packaging the board while it's still wet) and short-circuiting (powering up the board while it's wet). Various articles mention that IPA isn't particularly good at cleaning some fluxes used in soldering processes, and IPA can indeed be absorbed by and damage plastics, contrary to popular opinion. Mike Jones, V.P. Micro Care, in his response to a question titled Cleaning an assembled board with IPA states that IPA saturates at flux concentrations of around 2%, so LOTS of IPA and scrubbing is required, and IPA has a tendency to smear the flux residue around the board, not to mention flow underneath components and behind fine-pitch leads where it cannot be reached and absorbed with TechWipes, for example. With regard to handling static-sensitive parts: handling the parts with ESD tweezers is not sufficient. The entire workspace needs to be designed for ESD assembly, including, at a bare minimum, a properly grounded ESD worksurface/mat, and you must wear an ESD wristband (or ESD jacket/smock) that is connected to the ESD mat. ESD-sensitive parts must never be removed from their ESD protective packaging or handled except at the ESD workstation.
Boards containing ESD-sensitive parts must only be handled at an ESD workstation, and must be stored inside closed ESD-protective packaging (e.g., inside an anti-static re-sealable bag) when being stored or transported. The soldering equipment—handpieces, hot air reflow, etc.—must be ESD qualified also. CMOS components like microcontrollers are VERY, VERY, VERY susceptible to ESD damage. Your clothes (e.g., poly-based fabrics) sliding around on your body as you sit in a chair can easily generate static voltages on your body and clothing of sufficient magnitude (through the triboelectric effect) to destroy CMOS devices, especially in dry climates. Hope this helps.
H: Locking header jumper I want to implement a 3-header jumper setup in order to switch a microcontroller between two video modes on a PCB. I don't want to use just a standard switch as it doesn't seem appropriate for a change which will happen rarely if ever, so I wanted to go with a jumper instead such as this one. However, I was wondering whether there exist any kind of nicer, locking jumpers which would require squeezing tabs or some such in order to remove and replace them. If not, are there better solutions out there? Or should I stop mincing and just throw a tri-pole switch on there and call it a day? Thank you! AI: I have seen poor quality jumpers that almost fall off and also seen poor quality dip switches as well as excellent quality in both. But these are the two best choices. Gold plating gives best protection from oxidation. https://www.digikey.com/products/en/switches/dip-switches/194
H: Norton equivalent with a single voltage source I'm trying to find the Norton equivalent with respect to the load resistor RL for the DC circuit below. Each of the resistors have an equal resistance R. Removing the load resistor, I am having a lot of trouble figuring out which resistors are in series and parallel with respect to the load. It is very confusing with all of the different connections. Does anyone have any tips to simplify this circuit? AI: Every time I see a question like this and decide to provide an answer I think I'm going to start with the following story lead. (The schematic re-write follows.) One of the better ways to try and understand a circuit that at first appears to be confusing is to redraw it. There are some rules you can follow that will help get a leg-up on learning that process. But there are also some added personal skills that gradually develop over time, too. I first learned these rules in 1980, taking a Tektronix class that was offered only to its employees. This class was meant to teach electronics drafting to people who were not electronics engineers, but instead would be trained sufficiently to help draft schematics for their manuals. The nice thing about the rules is that you don't have to be an expert to follow them. And that if you follow them, even blindly almost, that the resulting schematics really are easier to figure out. The rules are: Arrange the schematic so that conventional current appears to flow from the top towards the bottom of the schematic sheet. I like to imagine this as a kind of curtain (if you prefer a more static concept) or waterfall (if you prefer a more dynamic concept) of charges moving from the top edge down to the bottom edge. This is a kind of flow of energy that doesn't do any useful work by itself, but provides the environment for useful work to get done. Arrange the schematic so that signals of interest flow from the left side of the schematic to the right side. 
Inputs will then generally be on the left, outputs generally will be on the right. Do not "bus" power around. In short, if a lead of a component goes to ground or some other voltage rail, do not use a wire to connect it to other component leads that also go to the same rail/ground. Instead, simply show a node name like "Vcc" and stop. Busing power around on a schematic is almost guaranteed to make the schematic less understandable, not more. (There are times when professionals need to communicate something unique about a voltage rail bus to other professionals. So there are exceptions at times to this rule. But when trying to understand a confusing schematic, the situation isn't that one and such an argument "by professionals, to professionals" still fails here. So just don't do it.) This one takes a moment to grasp fully. There is a strong tendency to want to show all of the wires that are involved in soldering up a circuit. Resist that tendency. The idea here is that wires needed to make a circuit can be distracting. And while they may be needed to make the circuit work, they do NOT help you understand the circuit. In fact, they do the exact opposite. So remove such wires and just show connections to the rails and stop. Try to organize the schematic around cohesion. It is almost always possible to "tease apart" a schematic so that there are knots of components that are tightly connected, each to another, separated then by only a few wires going to other knots. If you can find these, emphasize them by isolating the knots and focusing on drawing each one in some meaningful way, first. Don't even think about the whole schematic. Just focus on getting each cohesive section "looking right" by itself. Then add in the spare wiring or few components separating these "natural divisions" in the schematic. 
This will often tend to almost magically find distinct functions that are easier to understand, which then "communicate" with each other via relatively easier to understand connections between them. Use the above rules and rewrite the schematic: simulate this circuit – Schematic created using CircuitLab I didn't do a single thing here except a very basic and simple re-write of it using the rules I laid out above. Looks pretty easy now, doesn't it? It's trivial to go from the above and construct the two (identical) Thevenin equivalents. (Which are equally trivial to convert into Norton, if you prefer.) Can you handle it from here? What happens if you short \$R_L\$? Does any current flow in the shorted wire, now? What happens if you remove \$R_L\$? What is the voltage between the newly opened nodes?
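Once you have the Thevenin equivalent, the conversion to Norton really is trivial — it is just Ohm's law. A minimal sketch with placeholder values (the real Vth and Rth come from the redrawn circuit, not from this example):

```python
# Thevenin -> Norton conversion: the Norton current source equals the
# short-circuit current Vth / Rth, and the resistance is unchanged.
def thevenin_to_norton(v_th, r_th):
    """Return (I_norton, R_norton) for a Thevenin source (V_th, R_th)."""
    return v_th / r_th, r_th

# Placeholder values -- substitute the Vth and Rth you derive from
# the redrawn schematic.
i_n, r_n = thevenin_to_norton(10.0, 2.0)
print(i_n, r_n)  # -> 5.0 A into 2.0 ohm
```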
H: Why is the capacitance of a short infinite? The voltage across a short is 0. It also does not store any charge. So the capacitance of a short is 0/0. However, I am told the capacitance of a short is infinite. How can this be? AI: The voltage across a short circuit is zero, regardless of current. There are three components this could be modelled as, which also have zero AC voltage across them regardless of AC current, they are a) A resistor with zero resistance b) A capacitor with infinite capacitance c) An inductor with zero inductance However, these components aren't equivalent. At DC, a capacitor can have a steady voltage across it, storing energy, and able to deliver that energy into a load. At DC, an inductor can have a steady current through it, storing energy, and able to deliver that energy into a load. Obviously therefore, a short circuit only is a zero ohm resistance, as it doesn't store energy. However, if you are doing an AC analysis, and have a large value decoupling capacitor, it's often convenient to model it as 'an AC short circuit', as its series impedance will be very small with respect to the surrounding components.
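To make the AC equivalence concrete, here is a quick numeric sketch of the capacitor's impedance magnitude (the component values are arbitrary illustrations):

```python
import math

def z_cap(c_farads, f_hz):
    """Magnitude of a capacitor's impedance, |Z| = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

# At 50 Hz, a bigger and bigger capacitor looks more and more like a short:
for c in (1e-6, 1e-3, 1.0, 1e3):
    print(f"C = {c:g} F -> |Z| = {z_cap(c, 50):.3e} ohm")

# ...but at DC (f -> 0) the impedance diverges, which is exactly where a
# capacitor stops behaving like a zero-ohm resistor and stores energy instead.
```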
H: What are some simple ways to reduce Arduino power usage? For a quick and dirty project, what are some of the simplest ways to reduce the power use of an Arduino? Imagine the following setup: A typical Arduino Nano, hooked up to a pair of DS18B20s and one of the mini OLED screens. Power comes from a cellphone battery hooked to a TP4056 board, which then runs to a small 0.9V >> 5V boost board, both common eBay items. The idea is to make a basic platform, to which additional functionality will be slowly added, such as a wireless link and data-logging. For now, though, the situation is as described above. The obvious start would no doubt be killing off unnecessary LEDs, or at least, tacking on extra resistance to make them fainter. Maybe finding how to reduce the brightness of the OLED screen would also be an obvious step, and that green LED next to the ON switch isn't necessary... But beyond this, what simple steps in coding or hardware load-out could help get more battery time? (This question relates to a specific level of embedded system enthusiast, namely, the intermediate level, someone who isn't yet confident enough to dive into the depths of the AVR assembly language, but rather someone who already grasps the basics of how electronics and embedded devices are put together in a system (in other words, myself). There are differences compared to the How can I get my atmega328 to run for a year on batteries? question, and this one is more specific to the actions I can practically take.) AI: My first step would be to identify what is using most of the power/current and address that. I often see these questions about reducing power consumption / increasing battery life on this site; they often mention the general solutions you already list and which appear in other answers. For example I agree that reducing the supply voltage of a microcontroller reduces power consumption. 
However, if the uC is mostly in sleep mode and only active 1% of the time then reducing the consumed power is only of relevance if the uC takes a significant share (for example more than 20%) of the total power budget. If for example your temperature sensors are on continuously at 1 mA each that's 2 mA total at 100%. Compare that to a uC active 1% of the time at 10 mA, which gives an average of 1% * 10 mA = 0.1 mA, so 20 times less. So the conclusion there would be to duty cycle the temperature sensors. Make the uC switch them on/off (or their supply, perhaps you can simply supply the Vdd of the temp. sensors from an I/O pin on the uC). Even if the temp sensors are only stable after having a supply voltage for 5 seconds, that would still help significantly if you do a temperature measurement once per minute. I usually make a table with time active (in %, so basically that is the duty cycle), current consumption and the effective average current (which is simply the product of those two). That helps me identify where the current/power is going and that tells me how I can improve it. Concerning the step-up converter: you might not need it if all components can also run on 3.5 - 4.2 V. The ATmega chip can, some can even work at 1.8 V (you might have to change the "Brown Out" voltage in the fuse settings though). Some step-up converters have a low quiescent current (current drain when the load current at 5 V is zero) but not all do. Most circuit designers like to have a stable supply voltage, say 3.3 V. However, most chips actually don't care! As long as it is in their usable range. For high accuracy / low noise things might be different of course. My point: you don't always need a stable/regulated supply voltage. Removing that LDO / step-up converter can save a bit of current.
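That bookkeeping table is easy to script. A minimal sketch with illustrative numbers loosely based on the example above (the loads, duty cycles and currents are assumptions, not measurements):

```python
# Average-current budget: for each load, average mA = duty cycle * active mA.
loads = [
    # (name, duty_cycle, active_current_mA) -- all values are made up
    ("MCU active",        0.01, 10.0),   # awake 1% of the time
    ("MCU sleep",         0.99,  0.01),
    ("2x DS18B20 (24/7)", 1.00,  2.0),   # always powered -> dominates!
    ("OLED",              0.10, 15.0),
]

total = 0.0
for name, duty, i_active in loads:
    i_avg = duty * i_active
    total += i_avg
    print(f"{name:20s} {duty:5.0%} of {i_active:6.2f} mA -> {i_avg:.3f} mA avg")
print(f"{'TOTAL':20s} {total:.3f} mA average")

# Rough battery-life estimate for a 2000 mAh cell:
print(f"~{2000 / total:.0f} hours on a 2000 mAh battery")
```

A table like this makes it obvious that duty cycling the always-on sensors pays off far more than shaving microamps off the sleeping MCU.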
H: Any rules of thumb to fix this non-functioning board I just made? I just reflowed this PCB. It is a 20 x 16 mm PCB with a Nordic nRF52832 Bluetooth Low Energy (BLE) SoC IC. I powered it on and the regulator, mounted on the back, got very hot. I measured the Vcc and GND outputs of the 3 V regulator and read 50 ohms. That is wrong. I checked all my capacitors, since I suspected it was something connecting Vcc and GND, and they seem fine. I found a solder bridge on the nRF chip and fixed it. I would expect to see a short if there were problems, but not 50 ohms. I have no resistors between Vcc and GND that could cause this. Where would 50 ohms come from? I accidentally connected power to the regulator backwards at first, maybe something is wrecked. I took the suggestions here and then built another one. This one I could program and blink the LED. I was helped by your comments. Here is a photo of it working. The photo caught when the LED was on. AI: Board getting hot is a clear sign of potential permanent damage. Even if that solder bridge was the only issue, powering the IC while this bridge was present had a good chance of damaging it irreversibly (sometimes ICs survive such abuse, but this rarely happens when you count on it). Having some resistance with a defective IC is to be expected. If there was a 0 Ohm short between VCC and GND, your board wouldn't get hot in the first place. Also, resistance value by itself is not very helpful: 50 Ohm could be 60mA @ 3V (hardly enough to get hot) or 20mA @ 1V (which could become 1A when you apply 3V). Do a thorough visual inspection on the next board before you power it on.
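As a rough sanity check of why a resistance reading alone says little, here is the dissipation for a few applied voltages under a simple ohmic assumption (a damaged IC is usually non-linear, so treat these numbers as illustrations only):

```python
# Why a "50 ohm" reading alone doesn't tell you much: the dissipated power
# depends on the voltage actually applied, and the effective resistance of
# a damaged IC is usually non-ohmic anyway.
def dissipation(v, r):
    """Power in watts for voltage v across resistance r (ohmic assumption)."""
    return v * v / r

for v in (1.0, 3.0, 5.0):
    print(f"{v} V across 50 ohm -> {v/50*1000:.0f} mA, "
          f"{dissipation(v, 50)*1000:.0f} mW")
```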
H: What are the benefits of this type of JFET biasing I found the schematic below used as the input stage of an ultra-linear phono amplifier. What are the benefits of that type of biasing? The 12 V zener does not have 12 V across it (only 4.2 V), and in this situation how can I predict the Q-point of the JFET? What JFETs may be used with such a schematic? I tried a 2N4391 JFET and the voltage gain of the stage drops dramatically. AI: The idea of the circuit is that the Vds of the JFET is kept constant at about 12 V (more accurately: 12 V - Vbe_q2 = 11.3 V). Keeping the Vds constant eliminates any influence Vds has on the voltage-to-current transfer (Id / Vgs) of the JFET. If you do not have 12 V at the cathode of the zener D1 then the circuit is not biased correctly. Then most of the current flowing through R2 goes into the base of Q2 and then to Q1. Solution: lower the value of R2. D1 is a 1 W zener diode so it can handle 1 W / 12 V = 83 mA. Let's say we use 20 mA through R2: 20 V - 12 V = 8 V, 8 V / 20 mA = 400 ohms. Hmm, that's a lot less than the 4.7 kohm you have. What JFETs you can use depends on the JFET's properties. Look in the datasheet for the Id at Vgs = 0, as that is how the JFET is biased here. I expect that you will also have to lower the value of R1 if you use a JFET with a Vgs=0 current of more than 8 V / 2.4 kohm ≈ 3.3 mA. Lowering R1 will also reduce the voltage gain though. As an input for a low distortion phono amplifier, I have my doubts about this circuit. To really have low distortion you cannot beat feedback. A good solution could be a JFET based differential pair to provide a limited amount of gain. Then an opamp or opamp-like circuit and overall feedback. That will hands down beat this circuit distortion-wise.
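The R2 sizing above can be written out as a short calculation; the numbers are taken from the answer:

```python
# Sizing R2 so the 12 V zener actually regulates.
v_supply = 20.0   # V, supply rail
v_zener  = 12.0   # V, zener voltage
p_zener  = 1.0    # W, zener power rating

i_max  = p_zener / v_zener           # max safe zener current
i_bias = 0.020                       # chosen bias current, 20 mA (< i_max)
r2     = (v_supply - v_zener) / i_bias  # required series resistance

print(f"max zener current: {i_max*1000:.0f} mA")  # ~83 mA
print(f"R2 = {r2:.0f} ohm")                        # 400 ohm, not 4.7 kohm
```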
H: LM2576 in car environment I have a LCD driver board and it uses LM2576 for voltage regulation/step-down. As being cheap ebay/aliexpress purchase the instructions state 12V DC input without specifying range. Datasheets I found for LM2576 state unregulated input voltage can be up to 40V. So the question is whether the LCD driver would work in car environment where 12V is really something like noisy 12-15V. Any recommendations for extra components like capacitor before the board? Edit: Managed to find more specifications of the board (PCB800099 was the keyword) and there 5-24V is mentioned. AI: Unless you have the schematic of the board, or make a bit of reverse-engineering, you can't assume there isn't something else than just the LM2576 connected to the input supply. There are at least the input capacitor (which has a rating most likely well under the 40V), and there may be some additional monitoring circuitry, or protection circuitry on the board too. So the only thing you can trust here (assuming you can trust something from a cheap ebay/aliexpress product) is what it tells you in the description: 12V. Moreover, the battery voltage in a car is much more unreliable than what you seem to think. It is not guaranteed to be within 12-15V. It can actually go up to 100V peak during load dumps. So I would choose components that can take at least 20-30V for normal operation, coupled with some beefy protection that will be able to clamp or disconnect any higher input voltage, up to 100V. This could eventually be implemented with an intermediate circuit between the battery and the LCD board you selected. There are a lot of documentation material available about automotive load-dump protections. Here are a few application notes, for example: from ST from TI
H: Clarification required on DALI Standards 101, 103 I want to develop a complete DALI Control Device with two important features, one being "3 DALI buses on the device" and the other being the "RGB colour control feature"! Already, I have successfully designed a DALI Master-Slave prototype system using two STM32s. As my next step, I want to make my DALI Master compatible with the available commercial products (DALI gears). Also in parallel, I would like to buy the essential DALI standards. Q1. What are the essential DALI standards for my purpose? I was going through these two standards (Part 101 System Components and Part 103 Control Device) and I could not find any info related to RGB colour control! Rather it is noted that part 101 works in conjunction with part 102, and that part 209 describes colour control! I certainly cannot afford to buy 4 standards (101, 102, 103 and 209) now!! Q2: What are the designated commands for RGB colour control? Or can I just use some random un-allocated (reserved) commands for this purpose? Q3: Are 102 and 209 required for my purpose? I will not be developing DALI gear, just a DALI control device! I am assuming these two define how the DALI gear should be designed!! Essentially, I cannot afford to buy 4 DALI standards, at most 3! In general, any suggestions regarding this project will be appreciated. EDIT: What are the essential standards for me to get started? Maybe over some time, I could purchase all 4 but I am under a tight budget constraint. I could only ask for more money when I show some practical results with the above two features. Is there any workaround? PS: I am not interested in getting the product certified anytime soon! I want to test it first for various practical scenarios. AI: Q1: Your product needs to comply with Parts 101 System and 103 Control Device. 
But you need to read and understand Parts 102 Control Gear and 209 Color Control so that you know the commands that are available - the format of those commands, the use of DTRs, the requirements to send twice and/or to wait for responses etc. Q2: If you don't use the commands listed in 209 for colour control, you won't be able to control commercial control gear which implements 209. You cannot use reserved commands and pass the compliance tests, but there are ways of communicating on a DALI bus which are allowable manufacturer-specific methods, such as proprietary frame size (number of bits or bit timings) and Operating Modes. Q3: see answer to Q1. The cost of buying the standards can be reduced by joining the national standards body, but this has its own cost so you would have to decide if it was worthwhile. The cost of the standards is a small fraction of the cost of developing and certifying a DALI product - if you want to use the DALI name or logo on the product, you also have to join the DiiA and pass the official test sequences. These are to ensure that all products are interoperable. Edit: if you are not interested in compliance at this stage but just want to get some functionality working, then part 103 is the most disposable part because you are only implementing a system which has a single control device on the bus, and part 103 is mainly about different control devices communicating with each other. The various microcontroller manufacturers (NXP, TI, ST, Microchip) have DALI app notes which often include (usually out of date) information covering the essentials of parts 101 and 102, so that might be sufficient to get started. However, you are less likely to find any coverage of part 209 outside of the standard, so you probably need to purchase that one at least.
H: How good is it to use this zener diode for reverse polarity protection? In the following circuit how does the zener diode D1 provide reverse polarity protection? Let's say V+=12V and V-=-12V and the zener is a 36 V zener. It means that with correct polarity the zener will not pass any current since it sees 24 V across itself. But in case of reverse polarity, the zener will short the opamp maybe protecting it, but wouldn't that also mean it will short the power supply? Is this a good reverse polarity protection? AI: Protection relies on two things: the voltage must not exceed a certain value the current must not exceed a certain value This is valid for all components including the protection devices themselves! In your circuit the Zener diode will go into forward mode when the supply is reversed. For a BZX55 Zener diode I just looked up the forward voltage, it is 1.5 V at 200 mA. But the opamp is an IC and it is bound to have ESD protection diodes on each pin, it is a circuit like this: Those diodes (present on each pin!) will also go into forward mode. Since they're silicon diodes they have a forward voltage of around 1 V to 1.5 V (for two in series). So the ESD diodes might even start to conduct before the Zener kicks in! So it might be that the Zener doesn't do anything and all the reverse supply current goes through the opamp! Ouch! Also, if you do not limit that current to a sane value, like 500 mA, preferably a lot less, you will destroy the opamp. So: the zener might not do anything because of the ESD protection diodes. Without current limiting on the reversed supply voltage, something will blow eventually. My guess: the opamp goes first, if it fails open then the zener will be destroyed.
H: High-frequency signal on ground I designed a PCB where I have a local oscillator input to an IC. The LO signal (2.48 GHz) has a power of -20 dBm. I put a cut SMA cable on the ground plane and measured a signal level of -60 dBm. Is this normal? AI: At 2.5 GHz, the wavelength in free space is only about 120 mm or 5 inches. That means if your distances exceed about 12 mm or a half inch, you don't have a lumped system anymore. It's not clear what exactly you did, but it seems that you are measuring a signal 40 dB down where you expected none at all. First, none is never realistic. Second, this not being a lumped system, exactly how you measured it is significant. The fact that a 2.5 GHz signal shows up somewhere else nearby attenuated by 40 dB doesn't by itself seem surprising. You say you measured this "on the ground plane". This brings up the question of how exactly you measured this, which you haven't stated. You also seem to be under the false assumption that every place on the ground plane should be at the same voltage all the time. That's the desire, and is often a useful approximation, but at 2.5 GHz you can't just wave your hand at such things. The exact geometry matters, as does the route of the return current to this oscillator.
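The λ/10 rule of thumb used above is quick to check numerically:

```python
# Rule of thumb from the answer: a circuit stops being "lumped" once its
# dimensions approach roughly 1/10 of a wavelength.
C = 299_792_458.0  # speed of light in free space, m/s

def wavelength_mm(f_hz):
    return C / f_hz * 1000.0

f = 2.48e9  # the LO frequency from the question
lam = wavelength_mm(f)
print(f"wavelength: {lam:.0f} mm, lumped limit ~{lam/10:.0f} mm")
```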
H: Why is there a little nose on the PCIe connector? PCI express cards have an edge connector with a little notch in it to prevent the card from moving if the socket is longer than the connector. Additionally, the printed circuit board has a protrusion in front of the actual connector. What is its purpose? Depicted below is a PCIe card. You can clearly see the protrusion I mean to the left of the edge connector. Picture from Wikimedia Commons by Clemens PFEIFFER, Vienna. AI: According to the PCI Express Card Electromechanical Specification 1.1 it is to prevent insertion into a standard "non-express" PCI socket. http://read.pudn.com/downloads166/ebook/758109/PCI_Express_CEM_1.1.pdf [page 72]
H: Equivalent electrical model for induction motor For those people who are looking for an induction motor SPICE model, please follow the link LT SPICE tools and applications I'm trying to understand the model characteristics and from the image attached, I don't understand what the following terms mean or where they came from: V=-Np*V(w)sdt(V(Yr)) V=NpV(w)*sdt(V(Xr)) The constants in the parameters below are hidden somewhere in the model and I couldn't find where. .param N=475 +Bs=1.8 Br=0.5 Hc=40 +A=1m5 Lm=0.2 Lg=1m5 I'm designing power electronics for a 3-phase 3 kW SR motor and I'm trying to model an equivalent SR motor circuit that can be used in LTspice to simulate the results. For simplicity, I'm trying to first simulate only 1 phase of the motor. To do that I'm only using the upper circuit model with input X and not considering the secondary phase with input Y (assuming this is the right way) from the below image. But the equation V=-Np*V(w)sdt(V(Yr)) has parameters from the secondary phase. How do I define the model without the secondary phase? Thanks in advance. AI: I don't think you're going to get far without finding the documentation on the motor model from which this electrical model was derived. Ultimately it is up to the author to decide on the model features. The meaning of the symbols however can mostly be deduced, even if their impact on the model is not clear. V=-Np*V(w)sdt(V(Yr)) V is the voltage of the behavioural source Bx. Np is the number of stator poles, V(w) is the voltage at the point w in the circuit, which in this case is meant to represent the angular speed (\$\omega\$) of the motor. sdt is the integral and V(Yr) is the voltage at the point Yr. Similar deduction applies to V=NpV(w)*sdt(V(Xr)), the voltage of behavioural source By. The directive .param N=475 +Bs=1.8 Br=0.5 Hc=40 +A=1m5 Lm=0.2 Lg=1m5 gives a number of constants to be used somewhere hidden in the model. They look like magnetic properties of the motor. 
N would be the number of turns on the stator winding. The other names match the parameters of LTspice's hysteretic (Chan) inductor model: Bs would be the saturation flux density, Br the remanent flux density and Hc the coercive force of the core material, with A the cross-sectional area of the magnetic path, Lm its magnetic length and Lg the length of the air gap.
H: Can't program multiple chips using JTAG SOLVED - LOOK UNDER "EDIT 3" SUBTITLE. I've got these two chips: Board with LPC4337 (left) and board with ATSAM3X8E (right), JTAG interconnected, using the FT2232H chip soldered on the left board as an interface. The first one (LPC4337) can be programmed. The second one (ATSAM3X8E) can be programmed too as long as traces between the programming chip (FT2232H) and the LPC4337 MCU are cut. If no traces are cut (both LPC4337 and ATSAM3X8E connected to the JTAG) then I can't access the ATSAM3X8E chip. I'm using OpenOCD. This is OpenOCD output when both chips are connected to the JTAG: ... Info: JTAG tap: lpc4337.m4 tap/device found: 0x4ba00477 (mfg: 0x23b (ARM Ltd.), part:0xba00, ver: 0x4) Info: JTAG tap: lpc4337.m0 tap/device found: 0x0ba01477 (mfg: 0x23b (ARM Ltd.), part:0xba01, ver: 0x0) Info: JTAG tap: sam3.m3 tap/device found: 0xfffffff (mfg: 0x7ff (), part:0xffff, ver: 0xf) Error: sam3.m3: IR capture error; saw 0x0f not 0x01 Warn : Bypassing JTAG setup events due to errors Error: Invalid ACK (4) in DAP response Error: Invalid ACK (4) in DAP response Error: Invalid ACK (4) in ... (many more) Error: Invalid ACK (0) in DAP response Error: Could not initialize the debug port Some notes: If I change the OpenOCD config to ignore the "IR capture error", the connection fails anyway; the first problem is the sam3.m3 TAP ID being detected as 0xfffffff etc. Same output with LPC4337 JTAG traces cut (only ATSAM3X8E connected): ... Info: JTAG tap: sam3.m3 tap/device found: 0x4ba00477 (mfg: 0x23b (ARM Ltd.), part:0xba00, ver: 0x4) Info: sam3.m3: hardware has 6 breakpoints, 4 watchpoints Some notes: This is a successful connection. An interesting thing is that both LPC4337 and ATSAM3X8E share the same JTAG TAP ID. Isn't this value chosen by the manufacturer? Is having different chips with the same JTAG ID a strange coincidence? May this be the problem preventing me from connecting both of them to the JTAG? 
I've seen repeated JTAG IDs being handled by OpenOCD before, but always with different instances of the same chip, not with different chip families like in this case. Is there a way to change a chip's JTAG ID? What other problems could be causing this behaviour? I'm interested to hear any kind of suggestions; even if they turn out not to be the solution they might help. Some other notes: I've tried with two different ATSAM3X8E boards so I know the chips aren't faulty. Both boards were Arduino Due and worked when LPC4337 JTAG traces were cut. I'm running OpenOCD version 0.10.0 ("Freddie Chopin" compiled version) on Windows. I haven't got an oscilloscope. I would like to work with both MCUs without adding more modifications or cutting traces. I would also like to understand the fault that's preventing me from programming these chips in case I design something in the future using them. Lastly, I would like to thank the community for their time; I hope this post helps some other people in the future! EDIT: These are the JTAG interface schematics of both boards that were connected together. The flat cable is connecting Pin1 of the "P3" connector on the LPC4337 board schematic with Pin1 of the "JTAG" connector of the ATSAM3X8E board schematic, etc. (All 10 pins connected accordingly, tested for continuity with a multimeter). This is the LPC4337 board schematic. This is the ATSAM3X8E board schematic. EDIT 2: Thanks to the answers I found out I was using a star JTAG topology but a daisy chain JTAG topology would be the standard approach. I will change my wiring and post my results. Thanks for the help. This will be my wiring topology (but with two devices): EDIT 3: JTAG was changed from a star topology to a daisy chain topology. LPC4337 TDO is routed to the connector and eventually to ATSAM3X8E TDI. ATSAM3X8E TDO connects via the connector to the FT2232H interface. 
When no device is connected and only the LPC4337 is being programmed, a little male connector (pictured below) can be plugged in (instead of the cable); it bridges LPC4337 TDO directly to the FT2232H interface. The OpenOCD connection succeeds and outputs the following: ... Info: JTAG tap: lpc4337.m4 tap/device found: 0x4ba00477 (mfg: 0x23b (ARM Ltd.), part:0xba00, ver: 0x4) Info: JTAG tap: lpc4337.m0 tap/device found: 0x4ba00477 (mfg: 0x23b (ARM Ltd.), part:0xba00, ver: 0x4) Info: JTAG tap: sam3.m3 tap/device found: 0x4ba01477 (mfg: 0x23b (ARM Ltd.), part:0xba00, ver: 0x4) Info: lpc4337.m4: hardware has 6 breakpoints, 4 watchpoints Info: lpc4337.m0: hardware has 6 breakpoints, 4 watchpoints Info: sam3.m3: hardware has 6 breakpoints, 4 watchpoints (the 0x4ba01477 TAP ID turned out to be the ATSAM3X8E; I wonder if OpenOCD could be arranged in a different order [atsam, lpc m0, lpc m4] and work in a star topology or not) Thank you all for your help! AI: The comments suggest you are trying to operate in 'star topology', but as far as I know, that isn't a well-defined implementation. Please confirm you're using a daisy-chained topology, where the multiple TAPs make up a single long shift register. Debug tools ought to cope with this since it sometimes occurs implemented in a single chip. The TAP IDs that you quote are for the debug interface, they don't (and don't need to) identify the individual hardware. All they define is that these components use Arm CoreSight JTAG-DP, which provides an interface to memory-mapped debug components. If you access the debug components in that memory map (which are discoverable from the registers that the DP provides), then you should find an ID value which ultimately identifies the chip. The JTAG-DP IDCODE register is described here in ARM DDI 0314H.
H: Why are chip designers called "triangle pushers"? I heard chip designers being described as "triangle pushers," the idea being that somehow the logic on the chip was formulated by arranging triangles on the silicon in certain ways. How does this work? I don't understand how triangles can be arranged to create digital logic or why the shape of a triangle would be important. AI: Early masks for the creation of layers on an IC were created by a photographic process that involved exposing the original photographic plate through a mechanically controlled triangular aperture. Hence triangle pusher. The light source was fixed above the aperture, the plate was moved in x and y underneath. The point of a triangle was that, used additively, it could produce any orthogonal geometry required. There were no laser printers back then.
H: Single supply op amp to convert 0-5VDC to +/-2VDC (microcontroller to line level) I am generating a 0-5 volt sine wave using an Arduino with a DAC. I would like to level shift this signal (DC bias?) by -2.5 volts and have a gain of 0.8, so that I end up with a line level sine wave. Thus I am trying to design a single supply op amp circuit with the following equation: $$V_{\text{o}} = (V_{\text{i}}-2.5) \times 0.8$$ I found this white paper that is supposed to describe a way to do this but I'm having trouble following as I am new to op-amps. https://www.eecs.umich.edu/courses/eecs452/Labs/circuit4.pdf Does anyone know of a way to do this with a single op amp or IC? The frequency range of 20 Hz to 20 kHz must be preserved. AI: You have the following situation: Arduino with DAC generates a 0V - +5V sine wave (2.5V +/- 2.5V). The output should be a -2V - +2V sine wave (0V +/- 2V). POSSIBLE SOLUTIONS: solution #1: Arduino with DAC + C1 + R1 + R2 (all elements are in series). C1 + R, where R = R1 + R2, is a high-pass filter. R1 and R2 form a voltage divider with a gain of 0.8 V/V. This can be used if the DAC is able to source/sink enough current. solution #2: Arduino with DAC + opamp buffer + C1 + R1 + R2 (all elements are in series). Use in case your DAC is not able to source/sink enough current. solution #3: Arduino with DAC + opamp subtractor + C1 + R1 (all elements are in series). C1 + R1 is a high-pass filter. A single opamp in the subtractor configuration takes care of your equation: Vo = (Vi-2.5) * 0.8 Look here: http://www.mtcmos.com/subtractor/. You would like to use the configuration at the end of the article, that is, the left one in this picture: You set: R2/R1 = 0.8 V1 = 2.5V V2 = Vi So you get: Vo = 0.8 * (Vi - 2.5) You may also refer to this e-book https://payhip.com/b/5Srt. The subtractor circuit explanation is a bit improved there (the core is the same), but you also find other configurations explained.
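A quick numeric sanity check of the subtractor transfer and of a possible high-pass corner for the AC-coupled solutions (the 10 kΩ / 4.7 µF values are illustrative assumptions, not from the white paper):

```python
import math

# Transfer of the subtractor from the answer: Vo = (R2/R1) * (Vi - V1).
def subtractor_out(vi, v_ref=2.5, gain=0.8):
    return gain * (vi - v_ref)

# The endpoints of the 0-5 V DAC swing map onto the +/-2 V line level:
for vi in (0.0, 2.5, 5.0):
    print(f"Vi = {vi} V -> Vo = {subtractor_out(vi):+.1f} V")

# For the AC-coupled solutions (#1/#2), choose C1 and R = R1 + R2 so the
# high-pass corner sits well below 20 Hz; e.g. 10 kohm with 4.7 uF:
def hp_corner_hz(r_ohm, c_farad):
    return 1.0 / (2 * math.pi * r_ohm * c_farad)

print(f"fc = {hp_corner_hz(10e3, 4.7e-6):.1f} Hz")  # well below 20 Hz
```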
H: Are internal pull-up resistors in microcontrollers connected to Vcc internally? I'm partly on my way into my first project with the ATMega32U4 and I'm trying to understand the internal pull-up resistor. After lots of research, I'm struggling to find an outright answer to my question but from my reading it's been implied that the internal resistors are connected to an internal power rail. Is this correct? And therefore if I just connect a switch between ground and the pin, will I be able to detect the button push? To make it clearer: Option 1 Option 2 AI: The internal pull-ups (and pull-downs, if they also exist) are usually relatively weak and are made from PMOS (or NMOS) devices. They usually can support something on the order of about \$30\:\mu\text{A}\$. When you enable one of these, the software action simply applies an appropriate gate voltage to the device so that it allows a small current to flow. For a pull-up, one side of the device is tied to the \$V_{CC}\$ you also provide via a separate pin to the device. For a pull-down (if available), one side of the device is tied to the ground you also provide via a separate pin. Either way, the other side of the device is tied to the I/O pin. You can use the internal pull-up to provide a weak current source for use with an external switch. So it is possible (and not infrequently done) to use it with an external switch without having to add an external resistor. However, this internal pull-up is usually weaker than an externally added resistor (\$\approx 100\:\text{k}\Omega\$ for the internal pull-up versus a not-uncommon \$10\:\text{k}\Omega\$ used when adding an external resistor.) For a simple push-button that is kept close to the MCU, it may not matter so much. But if the switch is placed far away or is used within a rather noisy environment, you could be in trouble using the internal pull-up. 
Also, if you use the I/O pin for both IN and OUT (muxing it for some reason) then again there may be a reason for something placed externally, instead. In your case, with an internal pull-up, you can simply hook one side of the switch to the I/O pin and the other side to ground.
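To put rough numbers on "weaker" (illustrative values only; check your part's datasheet):

```python
# Current available to hold the pin high, comparing an assumed ~100 k
# internal pull-up with a typical external 10 k resistor at Vcc = 5 V.
VCC = 5.0
i_internal = VCC / 100e3   # ~50 uA from the internal pull-up
i_external = VCC / 10e3    # ~500 uA, ten times "stiffer" against noise
```

The external resistor sinks induced noise currents an order of magnitude better, which is why it is preferred for long or noisy switch wiring.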
H: Costs of Conductive Silicone/Rubber Keypad vs. Mechanical Switches? I am toying around with some new mechanical design ideas. One thing I've never looked at is the rubber keypads with the conductive "pucks" underneath. I've always used mechanical switches, or overmolded silicone buttons onto the finished enclosure for waterproofing. Example of the conductive rubber keypads: http://www.rubber-keypad.com/Conductive-Keypad-pd6229345.html I take it the manufacturer gives you a footprint that matches the conductive "puck", and the footprint goes to GND so when the puck hits, your logic line goes low. Does anyone have any experience implementing these? Are there any gotchas or things to watch out for? Any experience on the cost side? AI: Does anyone have any experience implementing these? Are there any gotchas or things to watch out for? I have tested a lot of tactile/membrane keypads in the past and the biggest issue I found was that bounce time was very variable between one manufacturer's product and another. This can be annoying if interfacing with a specific chip (like a DTMF encoder in a telephone) because you quite often got double digits when you thought you'd only pressed once. If you have a high volume product or a product that gets a lot of keypad use I would seriously consider a lot of mechanical testing of different suppliers' products if they do not have a technology that guarantees a closed resistance within a certain time period of the button being pressed. Same when releasing the switch - it can bounce then. Some keypads I tested that had a seemingly beautiful tactile click didn't actually make contact until you pressed a little harder. Now I'm sure the industry has moved on from those days back in the late 1980s but caution should still be your watchword. Any experience on the cost side? They are cheaper on production costs for low to medium volume but don't ignore the time and effort that goes into guaranteeing a good design.
The main reason for using them is of course that they can be designed to have buttons in irregular positions i.e. they are easily customizable. Not wishing to counter anything said by anyone else (@Jonk) but a good technology should give you over ten million operations. We (back then) modified a motorized hack saw like this: - It produced a repetitive forward and backwards stroke and we used a spring/cushion to set the impact force onto the target keypad. We easily got ten million operations from quite a few but very few could meet the debounce times at end of life.
H: AD7545JN problems I have a couple of AD7545JN DACs that I want to drive some proportional valves with, using an Arduino Mega. My main problem is that I'm only getting about 8V at OUT1 of it with Vref at 10V. And I'm getting some strange noise I can't quite explain. This is a capture with the Mega ramping up from 0 - 4095, yellow line is output and green is Vref DB0-DB11 are connected to Pin22-33 on the Mega WR&CS is tied together to Pin 2 A&DGND is tied to GND RFB is floating Vref is at 10V I have tried putting Vdd to 5V from the mega and together with Vref. There is obviously something I'm not understanding, so any advice is appreciated AI: It's a current output multiplying DAC - you don't operate it into an open circuit. It's meant to feed a transimpedance amplifier like this:
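To illustrate what the correct configuration should give (an idealized sketch, not taken from the datasheet): with OUT1 held at an op-amp's virtual ground and the internal RFB as the feedback resistor, a 12-bit multiplying DAC scales Vref by the digital code:

```python
# Idealized 12-bit multiplying DAC into a transimpedance amplifier:
# Vout = -Vref * code / 4096 (the sign inversion comes from the op-amp).
def mdac_vout(code, vref=10.0, bits=12):
    assert 0 <= code < (1 << bits)
    return -vref * code / (1 << bits)

v_zero = mdac_vout(0)      # code 0 -> 0 V
v_full = mdac_vout(4095)   # full scale -> just under -Vref
```

With the op-amp in place, ramping the code 0-4095 should give a clean ramp from 0 V to nearly -10 V, rather than the distorted 8 V seen at the open-circuited OUT1 pin.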
H: Can I share a 3-phase motor branch with a small single phase load? I have a 15hp 3-phase motor on a 50A 120/208 wye circuit. I also have a small 1A dc motor with controller and some monitoring circuits - this secondary system needs 24vdc and 1.5A altogether. I have an ac-dc converter that's rated up to 240VAC and about 1A input current to provide the DC power. Is it safe to wire the converter with the incoming 3-phase for the motor? I figure since the starter coil is using L1/L2 I can connect the ac-dc unit to L2/L3 and have a balance on the 3 legs. The motor runs for a max of about 1hr continuously; its FLA is 39A. Most of its operation is for short durations - about 10min at a time. If this is acceptable are there any extra precautions other than fusing the secondary unit? Also, in case the large starting current cuts the DC power, what would be a good solution to avoid the voltage dips besides a large capacitor? AI: That seems ok. You may want to check to make sure there is not a code violation, but I believe that is permitted. It would be best for the capacitor to be designed into the DC supply rather than added on. The supply may already have sufficient capacitance. Also, if the input voltage range is wide, it may be ok as is.
H: Is there any reason that any speaker connected to any computer couldn't be used for listening/surveillance? I understand that it's possible to use the headphone port of a laptop as a microphone, but what are the physical limitations to using any speaker in the same way? I'm curious about what makes amplification a one way street and if it is necessarily always the case. AI: If active speakers (i.e. speakers that require power) are connected to a computer then the amplification circuitry is a one-way street and it would be impossible for microphonic signals to be received by the computer. If passive speakers or headphones (these are essentially the same) are connected to the computer then there is a more complicated answer. Modern computer audio codecs often support multi-purpose audio jacks. This allows each jack to be software configured to function as: headphone output, microphone input, line-out output, etc. It is therefore possible for headphones connected to what the user presumes to be a headphone jack, to function as a microphone. Such a hack has already been demonstrated: http://vrzone.com/articles/new-software-lets-hackers-turn-speakers-microphone/117315.html This hack is only possible if the headphones are not being used to output sound. It is plausible that an audio codec could be hacked to have a jack simultaneously function as a microphone input and audio output. However, any microphonic current induced will be absorbed by the output driver so that the microphonic signal is never manifested as a measurable voltage. Even if the audio output was 'silence', the output driver would still be driving a zero voltage signal and absorbing microphonic current. In summary: Active speakers cannot be used for listening/surveillance. Passive speakers or headphones can be used for listening/surveillance but only if they are not being used to output audio.
H: Bridge rectifier with no load draws too much current I have a circuit that includes a bridge rectifier and a transformer. Values are, Transformer secondary 4.7v x 2 9.4v AC Diodes 1n5408 (3A Rectified Output Current) Rectifier connections are right and I measure from anode anode junction to cathode cathode junction. Multimeter reads around 15V DC When I put an amp meter between one of the outputs of the transformer and bridge rectifier input it reads 1.7 Amps. In the continuity test there are no short circuits. Can you think of a reason why this is happening? I would expect around 6.5 or 7 volts DC at the rectifier out and next to no current in the amp meter. Thank you very much. (By the way this also happens when a reservoir capacitor or a load is connected. Transformer itself without the rectifier works as expected.) AI: When I put an amp meter between one of the outputs of the transformer and bridge rectifier input it reads 1.7 Amps. Randomly probing around with an ammeter is not a good idea. Remember that an ideal ammeter is a dead short. It's not clear what you are doing here since both outputs of the transformer are also inputs to the rectifier. If you are getting such currents with nothing connected, then one of the diodes is blown or connected backwards. The open circuit output voltage seems a little high, but maybe your 9.4 V transformer output voltage is the rated voltage under load. With a little capacitance on the output of the rectifier, it will go to the peak voltages of the input waveform. For a sine, the peaks are sqrt(2) higher than the RMS value. 9.4 V RMS would therefore mean 13.3 V peak. There are two diodes in series between that and the output. Under normal use, each silicon diode drops about 700 mV, so the output should be about 1.4 V less than the peaks of the input, or about 12 V. However, that's with some reasonable load.
At no load, the transformer is probably putting out a few volts higher, so the output is also a few volts higher. Basically, there doesn't seem to be anything wrong here.
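The arithmetic above, as a short Python sketch (0.7 V per diode drop assumed):

```python
import math

# Peak of a 9.4 V RMS sine, minus two series diode drops in the bridge.
v_rms = 9.4
v_diode = 0.7                   # assumed silicon diode forward drop
v_peak = v_rms * math.sqrt(2)   # ~13.3 V at the transformer peaks
v_out = v_peak - 2 * v_diode    # ~11.9 V expected with a light load
```

So an unloaded reading near 15 V is only a few volts above the back-of-envelope estimate, consistent with the transformer regulating higher at no load.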
H: Learning the Art of Electronics - diode clamp Working through Learning the Art of Electronics, Hayes T, I'm stuck trying to build the diode clamp in Figure 3L.7 on page 137, as below: The text says to, "Drive it with a sinewave from your function generator... ...and observe the output." So presumably +ve from the function generator connects to "in", +ve from the power supply to "+5 volts", and scope probe to "out". Having searched the web and studied various clamp schematics, I still can't work out the answer to my question which is, where to put the ground leads from function generator, power supply and oscilloscope? Thanks David AI: I would draw the schematic like this: simulate this circuit – Schematic created using CircuitLab That (for me) makes it easy to see that the diode will start to conduct when the voltage at out exceeds 5 V + the diode's forward voltage, so at about 5.6 V.
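The resulting transfer behaviour can be sketched as follows (idealized, assuming a ~0.6 V forward drop):

```python
# Ideal diode clamp: "out" follows "in" until the diode conducts at
# the rail voltage plus its forward drop.
def clamp_out(v_in, v_rail=5.0, v_fwd=0.6):
    return min(v_in, v_rail + v_fwd)

below = clamp_out(3.0)   # below the clamp level: passes through
above = clamp_out(8.0)   # above it: held at ~5.6 V
```

On the scope you should therefore see the sine wave's positive peaks flattened at about 5.6 V once the generator amplitude exceeds that level.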
H: Need to run a mobile robot on batteries I made a 4wd mobile robot with a 4 DoF robotic arm on it and am currently in the process of designing a power board for my robot. I would like to get a detailed road map on designing a power board for the robot. Here is a brief overview of my hardware: The robot has 8 joints, each of whose motors runs at 12V (stall current: 800mA, nominal run: ~200mA), The control board works with 5V (nominal current draw: 500mA, max: 1.5A), Overall (average) energy consumption of the robot is around 6 Watts (max 18Watts). Here is what I have done so far for powering the robot: I am using 2 NCR18650B batteries in series with a cheap off-the-shelf battery management system (BMS), so I do not kill the batteries while over-running the robot, The nominal voltage of these two batteries together is 7.2V, where I use one boost converter for driving the motors with 12V and a buck converter for driving the controller board with 5V, I have to unplug the batteries whenever I need to charge them (since I do not have a charging circuit for the batteries yet). Since I was working with the rest of the robot, I just built this battery setup in two days and it worked well enough. Now I would like to focus on developing a professional power management module. Here is simply what I need (need to understand) for designing the power circuit board: I would like to power the robot with 2 18650 batteries in series, the batteries (in series) should be charged without unplugging (from a 12V DC input port), the robot can also be powered on while charging the batteries, I would also like to monitor the batteries, I would want to supply both 5V for the controller board and 12V for the motors from the same battery unit. I encountered a problem with my cheap dumb setup: if the motors draw a lot of current from the boost converter (12V), the buck converter's 5V output drops. So, I would also like to prevent this from happening.
As I am going through the ICs and datasheets, I assume I need: a 2S battery charging IC with power path management (e.g. bq24075) to also power the robot while charging the batteries, a 2S battery management IC for balancing/protecting/monitoring the batteries (which are in series), a boost converter for 12V, a buck converter for 5V, I am fairly experienced with designing circuits but have never worked on a power management circuit for batteries. I will appreciate it if you give me a detailed road map or resources for reading (schematics/articles etc.). EDIT A conceptual block diagram AI: Define efficiency, features for active or passive balance, cost, complexity, time budget, spare parts, temperature rise, and the thermal design of each block in addition to the electrical design at max load while charging and operating. (This could be almost twice the 1C current rate or ~6.7A!! And that BMS chip is limited to 1.5A.) So ALWAYS start with specs before you waste too much time. Battery background https://www.digikey.com/en/ptm/t/texas-instruments/introduction-to-battery-management-part-1/tutorial Starting power at full voltage is 0.8A*12V = 9.6W per motor. What software restrictions limit 5 motor starts at full acceleration? I suggest you have PWM control of the V/f motor slew rate to limit current, which will also improve stability. Assuming you have done this and peak power is 18W with 2 cells at 3350 mAh typ each @ 3.6V = 12Wh * 2 = 24 Wh max, then your 18W max load corresponds to 24/18 = 1.33 hours of runtime, i.e. roughly a 0.75C load rate. Which seems reasonable for experimentation at max load. Batteries will be very well balanced (<<1%) when fresh and degrade very slightly, then exponentially before EOL (end of life), meaning that charge balance by cell monitoring is necessary for internal charging. Try to keep between 20% and 90% State of Charge in use. 90% is shortly after going from CC mode to CV mode. But if you have a good BMS chip, follow their advice.
I only suggest a smaller DOD range to extend the battery's total life span (Wh × charge cycles). Using 50% will double the battery's lifetime capacity. Using a good full MOSFET bridge with a good BMS and good software for slew-rate control helps on the servo side, while braking also recycles charge to the battery. It depends on your requirements.
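The pack-sizing arithmetic above, spelled out in Python (nominal NCR18650B figures assumed):

```python
# Two 18650 cells in series, ~3.35 Ah at 3.6 V nominal each.
capacity_wh = 2 * 3.35 * 3.6          # ~24 Wh total
c_rate_peak = 18.0 / capacity_wh      # ~0.75C at the 18 W peak draw
runtime_full_h = capacity_wh / 18.0   # ~1.3 h at continuous full load
runtime_avg_h = capacity_wh / 6.0     # ~4 h at the 6 W average draw
```

Restricting depth of discharge shrinks these runtimes proportionally, which is the trade against cycle life mentioned above.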
H: Will a discharge resistor help here? I have a simple circuit, powering an ESP8266, specifically the esp-12-e. Connected to the 3.3v coming in I have a 1000uF electrolytic capacitor, to help keep things stable (suggested by the esp8266 community). It's all working well. However when I cut the power to the board, I see my LEDs etc hang around dimly glowing for just less than a second before the board appears to lose all current. I assume this is because of the giant capacitor. I like it :), and have it behaving otherwise very stably. I hear people talk about bleeder/discharge resistors, normally on DC power supplies. However, if my theory is right, one may help me here, so when I cut the power, everything turns off...more or less instantly. Thoughts? If my theory is correct, how would I wire the resistor up, how many ohms should it be, and why? AI: To start with, you could reduce the decoupling capacitor size. It does seem to be an enormous capacitor for the job it's doing, so you could reduce its value. The size required depends on the decoupling capacitance you already have in your power supply and the length and current capacity of those connections to the power supply. The shorter and thicker the connections (lower resistance, inductance), the better. I previously worked on one of four rigs containing an ESP8266 board which worked reliably with a 100 uF capacitor close to the board supply pins. So it all depends upon your total circuit. If you have the parts available, you can try 470 uF or 220 uF and see if you notice any difference over time when it's operating. If you have access to an oscilloscope, measure the ripple, noise and supply dips across the 1000 uF capacitor and then measure it with the 470 uF in place and again with the 220 uF. If ripple, noise and supply dips are not significantly larger with the 470 uF or 220 uF, use that smaller capacitor.
Once you've settled on the capacitor you want to use, you could add a discharge resistor to do what you want. A discharge resistor would be connected across the supply rail, and therefore the supply decoupling capacitance. Its value is a trade-off between operating power wastage and speed of discharge. When the supply is operating, the resistor will be drawing a continual current and dissipating power as heat. The lower the resistor value, the more power from the supply is wasted as heat in the resistor and the higher its power rating has to be, but the faster the capacitor discharges. The resistor current is found from: I = V/R = 3.3/R Amps The resistor power is found from: P = V*V/R ≈ 11/R Watts The 1000 uF capacitor discharge time because of the resistor across it can be taken roughly from: td = 5RC = 0.005 x R secs For example, a 100 ohm resistor would draw 33 mA, dissipate about 0.11 W continually and discharge your 1000 uF capacitor in 0.5 secs. That seems a reasonable trade-off but it depends on your power supply's rating as to how much you want to waste in the resistor. Derate the resistor to 50% so it is not under stress. So for the 100 ohm resistor, use a 250 mW or 500 mW part.
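The trade-off can be tabulated with a few lines of Python (same formulas as above):

```python
# Bleeder resistor trade-off across a 3.3 V rail with 1000 uF of capacitance.
def bleeder(r_ohm, v=3.3, c_farads=1000e-6):
    i = v / r_ohm              # continuous drain while powered (A)
    p = v * v / r_ohm          # continuous dissipation (W)
    t = 5 * r_ohm * c_farads   # ~5 time constants to fully discharge (s)
    return i, p, t

i_100, p_100, t_100 = bleeder(100)   # ~33 mA, ~0.11 W, 0.5 s
```

Trying a few values this way (47 ohm, 100 ohm, 220 ohm) lets you pick the fastest acceptable turn-off for the current you're willing to waste.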
H: Improving the quality of an optical audio transmitter I wired up a very simple device in which I use an audio signal transformer (EI14) to modulate the intensity of a laser beam (a cheap 650nm, 5mW diode) according to the audio output of my phone. The laser is then received by a photo-resistor (G5528 A205) wired up to the microphone socket of my laptop. With this setup I can transmit audio between the two devices. Not surprisingly, the results are not astonishing. The audio quality is less than crystal clear, but that was expected. Here's an example. My main concern that this system may be fundamentally limited is that it's not the data that I'm sending, from which the audio could be recreated; what I'm sending is just some real-time 'amplitude'. I'm actually surprised how well this worked since my intuition is that music consists of a range of Fourier frequencies and each point in the duration of the song comprises of some combination of those frequencies at corresponding amplitudes. At the same time, with a monochrome beam I'm limited to just varying the brightness of my beam and hence only able to send one amplitude at any given time, i.e. only the information about one of the Fourier frequencies (presumably the dominant one?) can be transmitted. Is this reasoning fundamentally correct and does that mean that a monochrome beam can't faithfully transmit a complex sound in real time? My other questions are: what improvements to my very simple setup are possible to make the audio quality better? What are the limiting factors? Is it even possible to get a reasonable quality in such a system? AI: I'll just point out one contributor to your audio distortion: Figure 1. Extract from the PGM 5506 datasheet. You didn't supply a link to your LDR datasheet but it's probably similar to the one above. As you can see the response times are in tens of milliseconds so at best they can only respond reasonably well to about 20 Hz. 
Anything above that will be "slew-rate limited" meaning that a step change in light intensity will give a response much like a resistor-capacitor (RC) charge / discharge curve. Figure 2. RC charge / discharge curves for a squarewave input signal. Source: EEEGuide1. The LDR will give a similar response although the lower trace may be inverted depending on how you've wired it to the receiver input. You might need to advance to a photo-transistor or photo-diode solution. One other thing: you should have a DC decoupling capacitor between your receiver and the mic input of your laptop. This will remove any DC component and may prolong the life of your laptop.
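A back-of-envelope bandwidth estimate (my assumption, not from the answer: treat the LDR as a single-pole system whose time constant is roughly its quoted response time):

```python
import math

# First-order -3 dB cutoff for a given time constant.
def cutoff_hz(tau_s):
    return 1.0 / (2 * math.pi * tau_s)

f_10ms = cutoff_hz(10e-3)   # ~16 Hz: consistent with "about 20 Hz" at best
f_50ms = cutoff_hz(50e-3)   # ~3 Hz: hopeless for audio
```

A photodiode or phototransistor receiver, by contrast, has response times in the microseconds, comfortably covering the whole 20 Hz - 20 kHz band.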
H: External to internal antenna setup doesn't increase signal power For a larger project where I basically want to boost phone signals indoors using an external antenna I have devised a small experiment to make sure I get the basics right before I expend a lot of energy. Basically, I have made an enclosure to reduce signal strength in which I put a cell phone displaying the current RSSI or dBm and a small-scale version of the external and internal antenna setup. Basically just two small antennas connected by a wire. The idea was that I should be able to measure a difference in signal strength inside of the box when putting a small antenna connected to another antenna outside of the box. This would then validate the feasibility of using the bigger antennas. Step 1 was to make an enclosure that would block (some) signals. A cardboard box wrapped in aluminium foil was no true Faraday cage, but when closing the lid it managed to reduce the signal strength by 15dB, which is noticeable. Step 2 was then to verify that these antennas would boost the signal inside of the box. This is where I fail, somehow. While closing the lid of the box made the signal go from -90 to -105 on the dBm scale, putting the antenna in there made absolutely no difference - no matter how long I waited. I suddenly realized that I might be reading the wrong figures, as I still had 4G enabled (the antennas are both for about 900 MHz, while 4G is 1800 and 2100 MHz in Norway), but disabling 4G and strictly running on 2G made no real difference. So this leaves me wondering: what am I doing wrong? Are there some basic assumptions I have made on how this should work that are just plain wrong? Debugging steps: I have verified that there's no short in any of the links from the black whip up until the N-connector using a voltmeter (0.6 V reading).
I have turned off 4G I have verified that I lose 12-15dB of signal when closing the lid Tech specs on the long whip (900 MHz range) Tech specs on the stick antenna (900 MHz range) The splitter box used to connect the two male cable connectors seems to be fine too when checking using a voltmeter. AI: The large antenna is collecting a portion of the cell tower's energy which falls within its "antenna aperture". Your system is then radiating this energy from the small antenna in all directions. (Not really, of course, but you see the point). Only a small amount of this power is being re-radiated in the direction of the cell phone's antenna. I expect you could detect the improvement in signal strength if your chamber had a higher level of attenuation. But as it is it's getting swamped out by the direct signal which is coming from the cell tower.
H: About Contactor Lifespan Newbie here regarding contactors and relays. I had this contactor datasheet https://www.gavazzionline.com/pdf/CC50-CC65.pdf. I'm trying to identify how long the contactor would last and I found the durability category. What's the difference between mechanical and electrical? Does the mechanical durability mean how many cycles the contactor would work without load, and electrical with load? Also, is this durability for the poles of the contactor, not the coil for activating the contactor? Does the coil for activating the contactor have a durability rating too?? I find it hard to read datasheets that are not detailed at all since I have no one here to ask who has a lot of experience when it comes to these datasheets. I hope you could help me :) As a follow-up question, I also have this datasheet http://www.te.com/commerce/DocumentDelivery/DDEController?Action=srchrtrv&DocNm=1308242_PRD&DocType=DS&DocLang=English where I'm looking for the durability numbers for the contacts, and as I go along comparing this to other datasheets, I find most relay and contactor datasheets don't present durability data in a common or standard way. Regarding the image below: the contactor I saw is Code Y, Ag, 1 pole and it is capable of 50Amp load current. I understand that 50A, 277VAC will have a lifespan of 100k cycles (the frequency is not specified as 50 or 60Hz, which means it doesn't matter because there's little difference in how it affects the number of cycles it could work?), and here I compared that to 3HP, 240VAC which is also 100k cycles. Getting the ampere equivalent of 3HP, it's only around 9.3A, so it'll be 9A, 240VAC = 100k, right? Why does it have no better lifespan compared to 50A, 277VAC, which is a much higher power rating?? Is there a way to compute the cycles it could do at 40A, 277VAC or 40A, 230VAC (either capacitive/resistive) using the values given in the datasheet?? And if 50A 277VAC = 100k cycles, how can I know the approx.
cycles if, with these parameters, it is only 25A, 277VAC?? I know it'll be more than 100k cycles, but will it double, like 25A 277VAC = 200k cycles? Also, on the same datasheet, it has this section. So the coil for activating the relay does have a durability rating too?? And lastly, if I have a contactor that has 3 poles, does the durability for the contacts apply separately to each of the 3 poles?? Like 100k cycles for each pole? I mean, I had this idea of using a contactor that has 3 poles, using only 1 pole at a time, and if it wears out, I'll just use another pole instead of buying another contactor. Is that feasible? AI: MIL-HDBK-217 provides a calculation to help “fine tune” a reliability prediction as it applies to a specific application. It consists of the following formula: \$λ_p = λ_b · π_L · π_C · π_{CYC} · π_F · π_Q · π_E\$ = Failures/\$10^6\$ hours. MTBF = \$10^6/λ_p\$ hrs where: \$λ_b\$ = Base Failure Rate (temperature factor) \$π_L\$ = Load stress factor (load level and type) \$π_C\$ = Contact form factor (DPST, DPDT etc) \$π_{CYC}\$ = Cycling factor (cycle rate) \$π_F\$ = Application and construction factor (general relay load rating and armature type) \$π_Q\$ = Quality factor (mil-spec qualification level vs. non-mil) \$π_E\$ = Environment factor (environment in which the relay is being used) Below is an OMRON life expectancy chart for relay contacts that shows the effects of inductive loads. Mechanical ratings in millions of operations are no-load ratings. Ref MTBF Teledyne Crydom. When inductive arcs cause contact temperature rise, MTBF drops quickly. This is why tungsten start surge, inductive stop surge and DC inductive loads have the lowest rated life. Resistive loads have no stored energy to dump, except tungsten lamps, which have 10x peak cold surge currents.
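Since the MIL-HDBK-217 prediction is just a product of factors, it is easy to evaluate; here is an illustrative Python sketch with made-up pi values (look up the real factors for your part, load and environment in the handbook's tables):

```python
# lambda_p in failures per 10^6 hours; all pi factors are dimensionless.
def relay_failure_rate(lambda_b, pi_l, pi_c, pi_cyc, pi_f, pi_q, pi_e):
    return lambda_b * pi_l * pi_c * pi_cyc * pi_f * pi_q * pi_e

# Hypothetical example values, NOT taken from the handbook:
lam = relay_failure_rate(0.006, 1.5, 1.0, 1.0, 4.0, 1.0, 2.0)
mtbf_hours = 1e6 / lam   # ~14 million hours for these example numbers
```

Note this gives a random-failure MTBF in hours, a different figure of merit from the datasheet's wear-out cycle ratings; the two are not interchangeable.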
H: Meaning of the replies to DALI commands I have some control gears which already have short addresses and are connected on DALI lines. I tried to send Query Short Address (command 269) and Verify Short Address (command 268), but I didn't get any response. Is this right? If the above is right: I tried to send command 153 and got response 06, and when I tried to send command 144 I got response 04. What is the meaning of these responses? AI: Command 269 Query Short Address and Command 268 Verify Short Address are in the range covered by the Initialise command. Command 258 Initialise starts or retriggers a 15 minute timer; the commands 259 to 270 are only processed within this period. In addition, Command 269 Query Short Address won't give any response if the random address doesn't equal the search address. Command 268 Verify Short Address will only respond with Yes if the data in the command matches the short address of the gear, in the special format of shifted left one place, with LSB set to 1. Command 153 Query Device Type with response 06 means LED device type. Command 144 Query Status has a bitwise interpretation; you need to read the standard IEC 62386-102 for the definition.
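The data-byte format described for Verify Short Address (the 6-bit short address shifted left one place with the LSB set to 1, i.e. 0AAAAAA1) can be sketched as:

```python
# Encode a DALI short address into the Verify Short Address data byte.
def verify_short_address_byte(short_addr):
    assert 0 <= short_addr <= 63, "DALI short addresses are 0-63"
    return (short_addr << 1) | 0x01

b5 = verify_short_address_byte(5)    # address 5 -> 0b00001011
b63 = verify_short_address_byte(63)  # address 63 -> 0b01111111
```

If the byte you send is not in this format, the gear will not answer Yes even when its short address is correct.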
H: Can a Z84C00 CPU directly drive 74HCxxx series logic? I'm getting confused by the datasheet for the Z84C00 CPU, while trying to work out if I can use it to drive 74HCxxx chips, or if I need TTL-compatible logic (i.e. either 74HCTxxx or 74LSxxx). It's clear that the low level voltages are compatible, but I can't work out what's going on with the high level voltages. 74HCxxx chips require between 3.15V (at VCC = 4.5V) and 4.2V (at VCC = 6V); I don't expect my voltage input to range this far (I plan on keeping it between 5.2V and 5.5V). The output for the Z84C00 is specified in its datasheet (p34) as: Symbol | Parameter | Min | Max | Unit | Condition VOH1 | Output High Voltage | 2.4 | | V | IOH = -1.6mA VOH2 | Output High Voltage | VCC-0.8 | | V | IOH = -250uA This seems to suggest that the voltage varies dependent on the current drawn from the pin, but isn't that current entirely dependent on the internal circuitry of the chip itself? How can I work out what the expected voltage of the output will be? I'm also not sure what the subscripts "1" and "2" in the symbol name here are used for. I can't see any reference to them elsewhere, and the equivalent NMOS parts don't use the subscripts, just giving a single line (with 2.4V output level). How should I interpret this? AI: I'm also not sure what the subscripts "1" and "2" in the symbol name here are used for Look at the end of each line - #1 is for a loading current of 1.6 mA and #2 is for a loading current of 0.25 mA. This means that if your 74HC chips don't take much current (you'll have to check) then you are probably going to be OK with a supply at 5.5 volts. The input currents to the HC series are much less than 250 uA but you will need to confirm this in the data sheets of the parts you intend to use.
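A rough worst-case fan-out check (the ~1 uA per 74HC input is an assumed figure; confirm against your actual datasheets):

```python
# Compare the Z84C00's 250 uA budget at the VCC - 0.8 V output level
# against CMOS input leakage current.
i_budget = 250e-6    # IOH condition for the VOH2 specification
i_per_input = 1e-6   # assumed worst-case 74HC input leakage
max_inputs = round(i_budget / i_per_input)   # plenty of fan-out headroom
```

In other words, as long as the bus only drives CMOS inputs (no TTL loads and no pull-down resistors), the output stays near VCC - 0.8 V, which satisfies the 74HC VIH requirement.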
H: Why am I seeing a low voltage when the switch is open? I'm building a battery test circuit to check the voltage of the source battery and I've added a Sziklai pair of transistors in as I need to be able to shut off the voltage so that I don't have a low current constantly being drawn while the MCU is asleep. I've put in a multimeter to simulate what I'd be getting as an input into an ADC. When the switch is closed I get 4.815V which is what I expect; when the switch is open, however, I am getting a reading of 0.108V and I can't for the life of me work out why there is any voltage present. AI: I think that you are seeing this small voltage because Q2's 'open' model is a high-value resistance, not an infinite one. So there will be a voltage divider in the loop V1, Q2, R1, R2. Can you check the Q2 model? . . . Edited to add that, by approximation, v_measured = 14V * 7.87k/1M ≈ 0.11 V. So we can make a guess at the open model of Q2 (~1 Mohm).
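Making that approximation explicit (the ~1 Mohm "off" resistance is the answer's guess; treating the upper divider resistance as negligible next to it is an additional simplification):

```python
# "Off" transistor modelled as a large resistance R_off feeding the divider.
def v_measured(v_supply, r_off, r_top, r_bottom):
    return v_supply * r_bottom / (r_off + r_top + r_bottom)

# 14 V through ~1 Mohm into a 7.87 k bottom leg reproduces the ~0.108 V seen:
v_open = v_measured(14.0, 1e6, 0.0, 7.87e3)
```

Solving this backwards from the measured 0.108 V is exactly how the answer arrives at its ~1 Mohm estimate for the off-state model.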
H: Is this circuit linear or not? I designed a current source circuit for driving LEDs. This circuit works fine and supplies a constant 2.5A. I wonder if this circuit is linear or not. simulate this circuit – Schematic created using CircuitLab 1-) If this circuit is linear; VLOAD = 9VDC and Vin = 16VDC and VDS-M1 = 6VDC. The power dissipated by the MOSFET would be: P = 6 * 2.5 => 15W Rth = 62.5 °C/W so the MOSFET temperature should increase by 62.5 * 15 ≈ 938°C but it only increased by 50°C. So I think this circuit is not linear. Also I scoped the gate voltage using an oscilloscope and there is a switching: 2-) If this circuit is not linear; If this circuit is not linear, Iin and Iout should not be equal but they are totally equal. Am I wrong? How can we characterize this circuit? AI: \$R_{\theta JA}\$ is one small part of the story. The main part of the story is \$R_{\theta JC}\$ (at 1.5 °C/W), because this is the junction-to-case thermal resistance and it adds to the heatsink thermal resistance to give the lowest (normally) path for heat. So, if the thermal resistance of the heatsink is (say) 6 °C/W then the total thermal resistance that matters is 7.5 °C/W (plus the typical figure of 0.5 °C/W for the interface between transistor case and heatsink). A total of about 8 °C/W. Therefore 17.5 watts would heat the junction by an extra 140 °C for a 6 °C/W heatsink. In an ambient of 25 °C this means a junction temperature of 165 °C and pretty close to the 175 °C limit for the device. You also need to ensure that local ambient doesn't rise due to this heatflow. It could easily rise by another 20 °C if the heat is not effectively taken away. So now you might be into the wrong side of the specification and you should expect trouble if the heatsink is only 6 °C/W. Given that you only saw a temperature increase of 50 °C I suspect you are using a heatsink with thermal resistance much lower than what I guessed at.
Also I scoped the gate voltage using an oscilloscope and there is a switching. You have a 2 µF capacitor on the op-amp output, and this massively alters the open loop gain of that part. Basically it pushes the part several steps toward becoming an oscillator, due to the extra (almost) 90 degrees of phase shift added by the 2 µF capacitor working against the op-amp's output resistance of several ohms. Don't do this. You are also using an LM211, and that part is a comparator - comparators are not guaranteed to be stable with negative feedback, so this is another problem. You have a few problems here.
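The thermal stack from the answer, as a short calculation (the 6 °C/W heatsink is the answer's guessed value, not a measured one):

```python
# Junction temperature = ambient + power * (junction-to-case +
# case-to-sink interface + heatsink-to-ambient thermal resistances).
def junction_temp_c(p_watts, t_ambient=25.0, r_jc=1.5, r_if=0.5, r_hs=6.0):
    return t_ambient + p_watts * (r_jc + r_if + r_hs)

tj = junction_temp_c(17.5)   # 165 C: uncomfortably close to the 175 C limit
```

Plugging in the heatsink's real thermal resistance instead of the 6 °C/W guess shows why a better heatsink produced only a 50 °C case-temperature rise.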
H: How does the rollback work in a FIFO? From reading the book titled "Advanced Chip Design, Practical Examples in Verilog", I have some questions about when an Ethernet packet is read from the FIFO and how rollback works for the Ethernet packet FIFO. Chapter 8.3.4 of the book explains why rollback is necessary in the FIFO, in terms of reading a packet and processing its CRC: when the CRC calculated by the receiver doesn't match the pre-calculated CRC attached to the packet, the current FIFO write pointer needs to be rolled back to the rollback write pointer (rollback_wrptr). Here, rollback_wrptr is the value of the FIFO write pointer saved right before it starts to receive the packet. And the book says... At the end of the packet, when we find CRC does not match, we assert one-clock wide signal, rollback_wrptr that reloads the snapshot value to the main wrptr of the FIFO. I am wondering why the book only mentioned the write_ptr but not a read_ptr and read_ptr_rollback. To calculate the CRC, it seems that the packet should be read from the queue; that would consume some elements from the FIFO, which moves the read_ptr of the FIFO. Does the hardware logic normally prevent the receiver from fetching the data from the FIFO until the CRC calculation is done? It seems that it should rewind not only the write_ptr but also the read_ptr. If available, could I get some code implementing a FIFO with rollback in Verilog or VHDL? AI: I don't have the book, but it sounds as if this happens when an incoming packet is being stored in the FIFO, and the design spec calls for packets with an incorrect CRC to be discarded, so that only correct data enters the FIFO.
(Some other part of the protocol will then note the packet has disappeared, and ask for re-transmission.) So data is written to the FIFO as it comes in, in parallel with CRC computation (on the incoming stream, by dedicated hardware, no reads involved), but you don't know it's correct until the CRC check at the end of the packet. At that point, rolling back the Write pointer is equivalent to deleting the packet by pretending it didn't exist. And future incoming data will overwrite the discarded values. The Read process in a FIFO is entirely separate, and rolling back a Write should have no effect on it - provided the Read pointer doesn't pass the rollback write pointer. Presumably some mechanism exists to prevent that. Look at the Empty signalling mechanism: I would expect it to compare the Read pointer with the rollback Write pointer, not the leading-edge Write pointer, to guarantee this. (And presumably the rollback Write pointer is updated to the leading-edge Write pointer if the CRC is OK.) So what would rolling back a Read be for? In this design, as long as you prevent reading past the rollback Write pointer, perhaps there isn't any purpose for it. But you could imagine use cases for rolling back a Read - where, for example, one CPU core grabbed a packet then aborted processing it (signalling Rollback somehow), so you can release it to another CPU.
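The pointer discipline described above can be modelled behaviourally. This is a hypothetical Python sketch (the class and method names are invented for illustration), not synthesizable HDL; the key detail is that empty() compares the read pointer with the committed (rollback) write pointer, so the reader can never consume an unverified packet:

```python
class RollbackFifo:
    """Behavioural model of a FIFO whose write pointer can be rolled back
    to the start of the packet currently being written."""

    def __init__(self, depth):
        self.mem = [None] * depth
        self.depth = depth
        self.wrptr = 0           # leading-edge write pointer
        self.rollback_wrptr = 0  # committed write pointer (packet boundary)
        self.rdptr = 0

    def write(self, data):
        self.mem[self.wrptr % self.depth] = data
        self.wrptr += 1

    def commit(self):
        # CRC matched: make the whole packet visible to the reader
        self.rollback_wrptr = self.wrptr

    def rollback(self):
        # CRC failed: pretend the packet was never written
        self.wrptr = self.rollback_wrptr

    def empty(self):
        # Compare against the COMMITTED pointer, not the leading edge,
        # so unverified data is never readable
        return self.rdptr == self.rollback_wrptr

    def read(self):
        data = self.mem[self.rdptr % self.depth]
        self.rdptr += 1
        return data
```

A bad packet is simply never observed by the reader: writes happen, empty() stays true, and rollback() reclaims the space for the next packet.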
H: Can I use a battery protection circuit for a 3S 18650 with a LiPo 3S pack? I am looking into getting a battery protection circuit for the 3S LiPo pack I am planning on using in my project. Max current draw will be 5-6A. Since I think it's more efficient to buy something prebuilt I was looking on aliexpress for something that I could use. However most of the parts sold there are specified as for 18650 battery pack usage. Here is an example: Protection circuit Can I use this kind of circuit with a LiPo pack? Do I need to connect the battery + to B+, the battery - to B- and then connect 2 wires of the balance cables to B1 and B2, or am I completely wrong? Since the circuit is rated a 8A it will be more than sufficient for my needs Thanks AI: Do not worry about the 18650 part of the spec. That's only a cell form factor. If you have a Li-cell pack that brings out the cell balancing connections, then it'll be just fine. Your assumptions about how to connect it are correct.
H: What is the relationship between field strength and efficiency in a DC motor? This refers to a DC motor with permanent magnets. What is the relationship between the efficiency of the motor and the magnet field strength? AI: To a first order approximation, none. Modifying field strength is used as a speed control on some very large DC motors, and they don't dissipate huge powers and catch fire at either end of the useful range. What it does do is modify the generator EMF at a given speed. Reduce the field and the generator EMF reduces. In a motor, that's called back EMF, so the supply voltage isn't cancelled by back EMF, current increases, torque increases, motor speeds up, back EMF increases, current decreases to the correct level to maintain the new speed. To a second order approximation, there will be effects on efficiency. Too much field will saturate the magnetic circuit. I'm not sure how that affects efficiency in a PM motor, but it certainly wastes power driving the field winding in a wound field motor. Too little field attempts to run the motor very fast, but with little torque. Unless the motor is correctly (very lightly) loaded, it will tend towards stall, which definitely reduces efficiency. And ultimately, bearing/brush/windage friction will consume all the mechanical power leaving none at the shaft. So, loosely, high field is generally more efficient. But if the motor is correctly rated and not overloaded, there should be a reasonably wide range of field strengths at close to peak efficiency. (If you have a fixed field, you can achieve a similar variation in results by winding the motor with more or fewer turns : more turns being roughly equivalent to higher field. If you don't feel like rewinding motors to experiment, radio control model car suppliers sell the same motor wound with different turns for performance tuning)
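The back-EMF mechanism in this answer can be sketched with the usual first-order PMDC model (V = I·R + kΦ·ω and T = kΦ·I); all numbers below are invented for illustration:

```python
def steady_state_speed(v_supply, r_armature, k_phi, load_torque):
    """First-order PMDC motor model: V = I*R + k_phi*w, T = k_phi*I."""
    i = load_torque / k_phi              # armature current needed for the torque
    back_emf = v_supply - i * r_armature # whatever voltage is left becomes EMF
    return back_emf / k_phi              # speed in rad/s

# Halving the field roughly doubles the lightly loaded speed:
print(steady_state_speed(12, 1.0, 0.02, 0.0))  # ~600 rad/s
print(steady_state_speed(12, 1.0, 0.01, 0.0))  # ~1200 rad/s
```

This shows the speed effect the answer describes; the second-order efficiency effects (saturation, friction at high speed) are outside this simple model.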
H: Can a Li-ion battery be discharged in CV mode? I have a load which can also be set to "Battery test". I can easily discharge a Li-ion battery using the CC (constant-current) mode, which is also a condition recommended by the manufacturers. For example, a typical recommendation would be: Discharge CC (18A) to 2.0V @ 20degC I have worked with CC before and it works as expected. But what about CV (constant-voltage)? One can never see that on the specs/recommendations. If I set one knob to CV and use the "Battery Test" setting, it says that I can't use it. So I assumed that the load doesn't really like the idea of CV and I was wondering why? I could switch to another type of load (not "Battery Testing"), where I can actually set the CV, but I'm afraid that this would destroy the battery. Can anyone help me figure out whether or not I could force CV onto the battery and why there is no CV option in the battery-testing mode? Thank you AI: If you use a constant voltage less than the battery voltage, the only thing limiting the current is the battery resistance. That, being small, means the initial current is very high and uncontrolled. This will result in overheating and potential fire/explosion. The whole point of CC discharge is to limit that heating effect.
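A rough sketch of the answer's point: with the load clamped below the battery voltage, only the internal resistance limits the current (the 20 mΩ internal resistance and 4.0 V cell voltage here are assumed examples, not from the question):

```python
def initial_cv_current(v_battery, v_set, r_internal):
    """Initial current when a constant-voltage load is set below the
    battery voltage: only internal resistance limits it."""
    return (v_battery - v_set) / r_internal

# e.g. a 4.0 V cell forced down to 2.0 V through 20 milliohms:
print(initial_cv_current(4.0, 2.0, 0.020))  # ~100 A, uncontrolled and dangerous
```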
H: Protecting a low voltage (RS485) bidirectional bus from direct mains connection I have a bunch of devices that will sit on a two-wire bidirectional RS485 bus (DMX-like, but not quite DMX). I'm trying to protect against an edge case, where a user mis-wires a ballast, and inadvertently connects mains voltage to the bus, causing it to become live and potentially injuring anyone else working on it. I will be installing an 'isolating box' between the bus and the user, essentially behind a wallplate, such that the bus itself can never be directly accessed. The connection to the bus itself would be through opto-isolators and associated circuitry (with a separate isolated supply), however I'd like to 'protect the protection circuitry' if possible, such that it won't catch fire and can gracefully recover. Would a couple of PTCs, and a couple of TVSs (for redundancy) as below, be enough to reliably save any low voltage circuitry to the left of the circuit, when IO1 and IO2 are connected directly to 240V mains? Normally, IO1 and IO2 would see ~22v at roughly 2mA. Thank you kindly AI: In the real world you can never make things totally idiot proof. If these things are designed to be installed and then left alone, it is foolhardy to add more components than are necessary to protect it from normal operational faults. If it is meant to be routinely moved around and/or reconnected, then I'd say it may be more prudent to invest in more protection. TVS diodes, though, are intended for transients. They will not protect your circuit from being connected to a multi-megawatt power supply. If I really had to do this I would be looking at some form of crow-bar circuit, perhaps using a triac, to detect over-voltage, short the line to mains ground, and blow a fuse, possibly a resettable one.
Instead of spending development money on making it bullet-proof and reducing the reliability of the unit as a whole, invest time in preparing clear and concise installation manuals and trust that the electrician can read / do his job properly. If you are really paranoid, add paper tags to the low voltage connections indicating "NO MAINS HERE" or "LOW VOLTAGE CIRCUIT". If you are getting lots of field returns then analysing what can be done to prevent that is prudent. However, fixing the connectors and improving the documentation is the better approach. Preventing the problem ALWAYS trumps protecting from it.
H: Zener Diode vs Precision Voltage Shunt What is the difference between a zener diode and a precision voltage shunt? From the name I am guessing that the precision voltage shunt is more precise. I am using the shunts to limit the output of a sine wave generator. AI: If you look at an average zener diode like the MM3Z3V6T1G, and a voltage regulator like the LP2980, you can see the difference pretty quickly. First off, the zener voltage is listed as 3.4V-3.8V. That's already ±5%. Next, you have the IV curve of the zener. Notice how the zener breakdown curve on the left is not a straight vertical line, but is instead somewhat diagonal. The gradient of this line is the zener resistance. In the attached datasheet, it is listed as 90 Ohms (at 5mA). This means that the voltage will vary depending on the current through it, by almost 100mV per mA. Say you want to use it between 0mA and 3mA; that's a variance of another 8%. In contrast, the regulator is guaranteed to have an absolute maximum variance of ±2.5%, and to keep that constant between 0mA and 50mA. These are maximums, and the actual value will likely be a fair amount lower. Basically, with zeners you're almost guaranteed to get variations of ±10%, whereas with actual regulators it's more like <1%. Just to add, Andy is correct that this is a voltage regulator, but the same logic applies to a shunt like the LT1389, except that it's even more accurate at a guaranteed ±0.5%.
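The zener-resistance arithmetic above is easy to reproduce (the 90 Ω dynamic resistance and 0-3 mA swing are the figures from this answer):

```python
def zener_shift(r_zener_ohms, delta_i_amps):
    """Voltage change across a zener caused by its dynamic (zener) resistance."""
    return r_zener_ohms * delta_i_amps

shift = zener_shift(90, 3e-3)  # 90 ohm slope, 0 mA -> 3 mA current swing
print(shift)                   # ~0.27 V of movement
print(100 * shift / 3.6)       # ~7.5 % of a nominal 3.6 V zener voltage
```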
H: Why is an opamp with single corner frequency more stable? I read that freq. compensation is done to adjust opamp's corner freq. to single one. A text mentions: .... Freq. compensation does that i.e reduces corners to one for stability: But why is an opamp with single corner frequency more stable than the one with two or more corner frequencies? AI: But why is an opamp with single corner frequency more stable than the one with two or more corner frequencies? An op-amp with no break (corner) frequency (ideal) produces an open-loop phase shift of exactly 180 degrees in an inverting circuit and is inherently stable with most configurations of feedback resistors/capacitors. However, you can still turn an otherwise ideal op-amp into an oscillator by externally adding two corner frequencies. Externally generated corner frequencies behave exactly the same as internal ones; they each shift phase by 90 degrees and reduce amplitude with frequency at a higher rate. A single break frequency shifts the open-loop phase response by 90 degrees and this means it still cannot become an oscillator when conventional negative feedback is applied because, for it to oscillate, it requires another 90 degrees to turn negative feedback into positive feedback. Two break frequencies will inherently produce 180 degrees and quite possibly turn an inverting amplifier into an oscillator. I say this is "possible" and this is true (unless the open loop gain has fallen below unity at the point where the extra phase shift becomes 180 degrees). So you need an open-loop gain greater than unity and an added phase shift of 180 degrees to make an op-amp into an oscillator. I read that freq. compensation is done to adjust opamp's corner freq. to single one. 
Frequency compensation can turn a previously unstable op-amp into a stable op-amp by smothering the open-loop Bode plot with a corner frequency that is significantly lower than the original corner frequency, in an attempt to ensure that open-loop gain drops below unity before the 2nd corner frequency is reached. There will still be a 2nd corner frequency (and a third) but they will occur at open-loop gains less than unity. Hence they can't turn an amplifier into an oscillator.
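The phase argument can be checked numerically: each real pole contributes at most 90° of lag, so one pole alone can never supply the 180° needed to turn negative feedback into positive feedback, while two poles together approach exactly that. The corner frequencies below are arbitrary examples:

```python
import math

def pole_phase_deg(f, f_corner):
    """Phase lag (degrees) contributed by a single real pole at f_corner."""
    return math.degrees(math.atan(f / f_corner))

f = 1e6  # evaluate well above both corners
one_pole = pole_phase_deg(f, 10)
two_poles = pole_phase_deg(f, 10) + pole_phase_deg(f, 1e3)
print(one_pole)   # just under 90 degrees: still only negative feedback
print(two_poles)  # just under 180 degrees: feedback can turn positive
```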
H: Pre-amplifier biased to one-half of power source There is a schematic of a pre-amplifier that uses one-half-of-supply output biasing (the T1 emitter sits at one-half of V1). How was the resistive network calculated to set the Q-point at one-half of the output? Original link: http://www.zen22142.zen.co.uk/Circuits/Audio/lvpreamp.htm AI: For \$I_{C2} = 500 \mu A\$ and \$I_{C1} = 1mA, \: \beta_1 = \beta_2 = 100\$. And if we assume \$ V_{E1} = 0.5\cdot V_{CC}\$ we already know that \$R_{1} = \frac{V_{CC} - 0.5 V_{CC} - V_{BE1}}{I_{C2} + I_{B1}} = \frac{12V - 6V - 0.6V}{500 \mu A + 10 \mu A } \approx 10 \textrm{k}\Omega \$ $$R_{4} = \frac{0.5V}{505 \mu A} \approx 1\textrm{k}\Omega$$ $$R_2+R_3 \approx \frac{0.5V_{CC}}{I_{C1}} = 6\textrm{k}\Omega$$ The \$T_2\$ base voltage is around \$V_{E2}+V_{BE2} = 0.5V+0.6V = 1.1V\$ So the voltage drop across \$R_3\$ should be larger than or equal to this value (1.1V). \$R_3 = \frac{1.5V}{1mA} = 1.5 \textrm{k}\Omega \$ And \$ R_2 = 6 \textrm{k}\Omega - 1.5 \textrm{k}\Omega = 4.5 \textrm{k}\Omega \approx 4.7 \textrm{k}\Omega\$ (nearest standard value). And finally \$R_5+R_7 = \frac{1.5V - 1.1V}{5 \mu A} = 80\textrm{k}\Omega \approx 82\textrm{k}\Omega\$ And we're done. Now we can check the calculations by doing a DC analysis. simulate this circuit – Schematic created using CircuitLab First KVL: $$I_{E2}\cdot R_4 + V_{BE2}+\frac{I_{C2}}{\beta} \cdot (R_5+R_7) = (I_{E1} -\frac{I_{C2}}{\beta}) \cdot R_3 $$ And another KVL equation is: $$V_{CC} - (I_{C2} + \frac{I_{C1}}{\beta}) \cdot R_1 - V_{BE1} =I_{E1} \cdot R_2 +(I_{E1} -\frac{I_{C2}}{\beta}) \cdot R_3$$ So, if we solve these two equations we can find the quiescent point.
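As a quick sanity check, the first-pass arithmetic above reproduces in a few lines (this only re-runs the approximations before rounding to standard values, not the closing KVL solution):

```python
# Operating assumptions from the answer: Ic1 = 1 mA, Ic2 = 500 uA, beta = 100
vcc, vbe = 12.0, 0.6
ic1, ic2, beta = 1e-3, 500e-6, 100.0

ib1 = ic1 / beta                             # 10 uA base current of T1
ie2 = ic2 + ic2 / beta                       # 505 uA emitter current of T2
r1 = (vcc - 0.5 * vcc - vbe) / (ic2 + ib1)   # 5.4 V / 510 uA ~ 10.6k -> 10k
r4 = 0.5 / ie2                               # ~990 ohm -> 1k
r23 = 0.5 * vcc / ic1                        # 6k total for R2 + R3
print(round(r1), round(r4), round(r23))
```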
H: High impedance in RTL Verilog I am designing a shift register. It has a control signal called RD which is asynchronous (so I can't use it inside the procedural block). The point is that the output takes the value of the register if RD==1, or else it is high impedance; I am not sure how to assign the high-impedance value because the width n is a parameter, so I can't hard-code the number of bits. inout [n-1:0] Data; input RD; reg [n-1:0] register; //my register Example: Data = (RD==1'b1)? [n-1:0] register: 'z ; It's giving me an error. How can I define that if RD is 1, then I need to see what is there inside the register, and if it's 0 then it should be high impedance? AI: Unless you are using SystemVerilog, you can't declare a constant like that. Instead use the replication operator. {(WIDTH){1'bz}} is a WIDTH-bit wide constant of all z's. Just replace the width with however wide you need (it can be a parameter). Furthermore, it should be register[n-1:0] not [n-1:0]register. The following should work: assign Data = (RD == 1'b1) ? register : {(n){1'bz}};
H: Question about the RMS value of the real input power I have the following circuit: simulate this circuit – Schematic created using CircuitLab I know that the relation between the input voltage and the input current is given by: $$\begin{cases} \text{V}_{\space\text{in}}\left(t\right)=\text{I}_{\space\text{in}}\left(t\right)\cdot\text{R}_1+\text{I}_{\space\text{in}}'\left(t\right)\cdot\text{L}_1+\text{V}_{\space\text{R}_2\space||\space\text{L}_2}\left(t\right)\\ \\ \text{I}_{\space\text{in}}'\left(t\right)=\text{V}_{\space\text{R}_2\space||\space\text{L}_2}'\left(t\right)\cdot\frac{1}{\text{R}_2}+\text{V}_{\space\text{R}_2\space||\space\text{L}_2}\left(t\right)\cdot\frac{1}{\text{L}_2} \end{cases}\tag1 $$ Now, when I measure the real input power, I will get the RMS value of the real input power, and my question is: can I state the following: $$\text{P}_{\space\text{in RMS}}=\text{P}_{\space\text{R}_1\space\text{RMS}}+\text{P}_{\space\text{R}_2\space\text{RMS}}=\text{I}_{\space\text{in RMS}}^2\cdot\left(\text{R}_1+\text{R}_2\right)\tag2$$ Where: $$\text{I}_{\space\text{in RMS}}^2=\lim_{\text{n}\to\infty}\sqrt{\frac{1}{\text{n}}\int_0^\text{n}\left(\text{I}_{\space\text{in}}^2\left(t\right)\right)^2\space\text{d}t}=\lim_{\text{n}\to\infty}\sqrt{\frac{1}{\text{n}}\int_0^\text{n}\text{I}_{\space\text{in}}^4\left(t\right)\space\text{d}t}\tag3$$ AI: No you can't state what you have stated because \$I_{in}\$ is not exclusively shared by both resistors R1 and R2. Some of \$I_{in}\$ flows through L2. when I measure the real input power, I will get the RMS value of the real input power When you use a wattmeter you measure average power. RMS power is a confusion.
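To make the last point concrete: a wattmeter reports average power, and for a sinusoidal current in a resistor that average is I_rms²·R; "RMS power" is not a meaningful quantity. A quick sketch with made-up numbers:

```python
import math

def average_power(i_peak, r):
    """Average power dissipated by a sinusoidal current in a resistor."""
    i_rms = i_peak / math.sqrt(2)  # RMS applies to the current, not the power
    return i_rms ** 2 * r

p = average_power(2.0, 10.0)  # 2 A peak through 10 ohms
print(p)                      # ~20 W: what a wattmeter would read
```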
H: Pole/Zero plot is the same for passive and active (w/ gain) bandpass filter, why? I have designed an active inverting bandpass filter, and when comparing its pole-zero plot to that of a passive bandpass filter with the same cutoff frequencies, the pole-zero plot is exactly the same, even though the Bode plots for the two are different. I will run through my calculations and results below. With a gain of 2.5, a lower cutoff of 75Hz, and an upper cutoff of 31kHz, I found the component values to be: \$R_1 = 2122\Omega\$, \$R_2 = 5305\Omega\$, \$C_1 = 1\mu\text{F}\$, \$C_2 = 0.967\text{nF}\$. From these I derived the transfer function: $$H(s) = -\frac{R_2C_1s}{(1+C_1R_1s)(1+C_2R_2s)}$$ $$=> H(s) = -\frac{5.305\times 10^{-3}s}{1.088\times 10^{-8}s^2 + 2.127\times 10^{-3}s + 1}$$ Using Matlab, I got the following result (Bode and Pole Zero Map) To compare how the system responded relative to a passive bandpass filter, seen below, I derived its transfer function and substituted the same values used for the other filter, thus giving the same cut-off frequencies. $$H(s) = \frac{R_2C_2s}{(1+R_2C_2s)(1+R_1C_1s+ \frac{R_2}{R_1}R_1C_1s)}$$ $$=> H(s) = \frac{5.13\times 10^{-6}s}{3.81\times 10^{-8}s^2 + 7.432\times 10^{-3}s + 1}$$ When processing this in MATLAB, I get the following plots: What I didn't expect or understand is the pole-zero response. What do the poles and zeros represent in this system? Also, shouldn't the op-amp introduce its own poles and zeros, since it is adding gain to the system? I tried reading about the pole-zero responses of low-pass and high-pass filters on Google, and they look nothing like my results! Added to that, I was originally trying to understand at what point in the system the gain is applied; from looking at this response, is it correct to assume the system is LowPass>Gain>HighPass?
AI: To compare how the system responded relative to a passive bandpass filter, I derived its transfer function and substituted the same values used for the other filter, thus giving the same cut-off frequencies. Wrong! You can't do this to compare your filters because now there is an interaction between components that you would not get with a virtual-ground inverting op-amp configuration. It's even reflected in your Bode plots - look at the mid-point frequencies - the op-amp circuit is about 10 kHz whilst the passive one is about 1 kHz. Look at the gains - they are massively different. No, you can't do this and obtain anything useful. What do the poles and zeros represent in this system, and shouldn't the op-amp introduce its own poles and zeros since it is adding gain to the system? A good designer will normally choose an op-amp so that its unwanted characteristics are avoided, so no, most op-amp circuits do not rely significantly on the op-amp's non-idealities. Pole-zero tutorial for a 2nd order system: - The top three pictures are example Bode plots of a 2nd order low pass filter. The bottom right is the traditional pole-zero diagram. The bottom left shows how the Bode plot and pole-zero diagram are interlinked in a 3D way.
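The pole locations of the active filter's H(s) quoted in the question can be checked directly from its quadratic denominator (coefficients copied from the question); they land on the intended 75 Hz and 31 kHz corners:

```python
import math

def real_pole_pair(a, b, c):
    """Real roots of a*s^2 + b*s + c (valid for an overdamped denominator)."""
    disc = math.sqrt(b * b - 4 * a * c)
    return (-b - disc) / (2 * a), (-b + disc) / (2 * a)

# Denominator of the active filter's transfer function from the question:
p_fast, p_slow = real_pole_pair(1.088e-8, 2.127e-3, 1.0)
f_low = p_slow / (-2 * math.pi)   # ~75 Hz corner
f_high = p_fast / (-2 * math.pi)  # ~31 kHz corner
print(f_low, f_high)
```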
H: LTspice reference signal for plotting Is there a way to store a signal in LTspice and then plot it on the next simulation run? I find myself either plotting two runs in an external program all too often, or even using a screen capture to compare different signals. My Tek scope does this - I can store a signal and then compare it with another signal from a different capture. Is there a way to get LTspice to do this? AI: This should work. Use this type of generator V2: - PWL means piecewise linear function. In other words it generates a signal based on data in a file but, first, you must export the data. This site says: - To export waveform data to an ASCII text file: Click to select the waveform viewer. Choose Export from the File menu. Select the traces you want exported. And... To import waveform data into LTspice IV you must attach a text file as a piecewise linear (PWL) function in a voltage or current source.
H: 433MHz receiver compatibility Currently I have a setup where a 433MHz transmitter sends a number from one Arduino and a 433MHz receiver picks it up for another Arduino. The transmitter and receiver are of the super-regenerative type, if I'm correct (picture below). My question is: can another type of 433MHz receiver pick up the same signal? A superheterodyne type? AI: The Transmitter and receiver are of the Super Regenerative type if I'm correct Nope, they're not. Super-regenerative means it is an oscillating receiver circuit. That transmitter is just a 433 MHz oscillator. Super-regenerative transmitters do not exist. That receiver could be of the super-regenerative type; the only thing I know for sure is that these receiver modules aren't very sensitive. I know, I own one and it's not that good. Much better is a superheterodyne receiver module; they have a chip and usually look something like this: There's also a smaller version which looks like: Some of these use the SYN470R chip. Note how there are far fewer passive components. This type of receiver is much better than the one in your picture. It is also only slightly more expensive and well worth those few extra cents.
H: Rx Intermodulation Implications I am looking at the product spec of the nRF52832, in particular the Radio Electrical Specifications (here, the information is on page 232 of v1.4). Within it is a section called Rx Intermodulation that shows this: IMD performance, 1 Msps (3MHz, 4MHz, and 5MHz offset) -33 dBm I have talked to a few people, but the opinions as to what this means practically vary. My question is: If I have this receiver set to listen on frequency X, and I start a transmitter on frequency Y = X + 3 (and all settings allow the receiver to decode the transmitter's messages), does that mean that intermodulation will make it possible for me to receive the transmitter's messages with the received signal strength reduced by 33dB? AI: In the datasheet there's a note regarding that section RX intermodulation The note says: Wanted signal level at PIN = -64 dBm. Two interferers with equal input power are used. The interferer closest in frequency is not modulated, the other interferer is modulated equal with the wanted signal. The input power of the interferers where the sensitivity equals BER = 0.1% is presented. So then PIMD,1M IMD performance, 1 Msps, 3rd, 4th, and 5th offset channel: -33 dBm means that the power level of both interferers can be up to -33 dBm. Since the wanted signal is at -64 dBm, those interferers can be 64 - 33 = 31 dB higher in power than the wanted signal. Under those conditions the BER (bit error rate) is 0.1% (1 bit per 1000 bits can be wrong). More accurately: the chip's manufacturer guarantees that the BER will be 0.1% or better under these conditions. It is impossible to design for an exact BER; I know, I worked on BER and Bluetooth many years ago. You cannot draw conclusions from this for your transmitting at Y = X + 3 case because in the specified case the carrier at +3 MHz is unmodulated. You ask about a modulated carrier, which makes it a different scenario. Also there is no 33 dB in this.
What is mentioned is -33 dBm so 33 dB less power than 1 milliWatt. -33 dBm is an absolute power level, the level which the interferers can be for the stated conditions for this specification point. Such a specification point might look weird, like it is some "random" condition. Well it isn't. It is usually a result of the Bluetooth specification and then translated to include antenna gain and antenna switch losses and such.
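Worth emphasising: dBm is an absolute level referenced to 1 mW, while dB is a ratio. A tiny sketch of the arithmetic used above:

```python
def dbm_to_mw(dbm):
    """Convert an absolute power level in dBm to milliwatts."""
    return 10 ** (dbm / 10)

print(dbm_to_mw(-33))  # ~0.0005 mW: the interferer level from the spec
print(dbm_to_mw(-64))  # the wanted-signal level in mW
print(-33 - (-64))     # 31: the interferers sit 31 dB above the wanted signal
```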
H: Electric wiring color codes: BROWN, BLUE and BLACK Update (31.01.2018 16:40): Thank you all for the answers. Because this is not a standard wiring plug, I need assistance wiring the WiFi adapter in the middle of the cable which connects the Energy Master Expert LCD display to the Energy Master Expert plug. Is this correct? WIFI ADAPTER LIVE IN = Energy Master Expert PLUG BROWN live wire WIFI ADAPTER LIVE OUT = Energy Master Expert LCD DISPLAY BROWN live wire WIFI ADAPTER NEUTRAL IN = Energy Master Expert PLUG BLUE neutral wire WIFI ADAPTER NEUTRAL OUT = Energy Master Expert LCD DISPLAY BLUE neutral wire WIFI ADAPTER EARTH IN = Energy Master Expert PLUG BLACK wire WIFI ADAPTER EARTH OUT = Energy Master Expert LCD DISPLAY BLACK wire Update (31.01.2018 16:36), from customer service: "The Ground Line is bridged in the connector. The meter does not require a Ground Line in the cable. Blue is the Null Line and black and brown are the Live Line. Should you have further questions, please feel free to contact us via email. Yours sincerely, ELV Elektronik AG, Technical customer service department" Which colors are the Ground Line, Null Line and Live Line? The standard says the ground line is always green-yellow. None of these wires is green-yellow. Thanks. Update: If black is Earth, then why does the on/off switch use the black wire? Because the cable is too short, less than 2 meters, I want to extend it. I also want to join the cables with a WiFi adapter: SONOFF® POW 16A 3500W DIY WIFI Wireless Long Distance APP Remote Control Switch Socket Power Monitor Current Tester For Smart Home 80-160MHz AC 90-250V Support 2G/3G/4G Network. That is why I need to know exactly which colors are Live, Null and Earth. AI: This is a combined plug/socket. The ground prongs are connected internally, no wire needed. You have to connect the blue wire to the (also internally connected) neutral screw.
The brown and black wires go to the other plug/socket screws, it's live in/live out. simulate this circuit – Schematic created using CircuitLab
H: How to build a distance measuring device that can only detect red light from a common red laser I want to make a distance measuring device using a common red laser. What receiver should I use to detect only the red laser light - one more sensitive to red light? Is an Arduino UNO capable of this? AI: Physical method A quick method would be to just use a filtering lens. You will need to buy a lens specifically made to pass light at the wavelength of the laser. A common wavelength for cheap laser diodes is 650nm. Simply filtering by red light may still lead to a noisy signal though, depending on the environment. Software method Another way to filter out other sources of light would be to have the laser send a specific signal. That way, the receiver can ignore everything else. Software-based filtering must also be processed at fast enough speeds, which can be quite fast for short distances. The signal could be encoded in a way that reduces false positives. One example would be to simply send a defined number of quick pulses at defined intervals. All other received signals can be rejected. The duration of the signal should be short enough so as not to collide with returning reflections. It is important to consider the maximum unambiguous range. This plays a role in determining the usable values for a signal's frequency and duration. The signal can only travel so far between transmission periods. The shorter the period, the shorter the possible 'certain' distance measurement. So both the desired update interval of the measurement and the expected maximum distance also play a role in what the encoded laser signal can be. Longer update periods can reduce error. Combining the physical filter with a software filter can dramatically reduce false positives, even with other distance measuring lasers in the area.
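The maximum-unambiguous-range constraint mentioned above reduces to R = c·T/2, where T is the pulse repetition period; the 1 µs period below is an assumed example:

```python
C = 299_792_458.0  # speed of light in m/s

def max_unambiguous_range(pulse_period_s):
    """Farthest distance whose echo returns before the next pulse is sent."""
    return C * pulse_period_s / 2

r = max_unambiguous_range(1e-6)
print(r)  # ~150 m for a 1 microsecond repetition period
```

Longer periods extend the unambiguous range but slow the measurement update rate, which is the trade-off the answer describes.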
H: Is a power distribution board just a board that connects all grounds and all positives? I'm thinking of designing a PCB for a quadcopter but want to directly integrate the power distribution board (power going to the ESCs that power the rotors). I would just like to confirm that it's as simple as connecting all the grounds together and then connecting all the positive connectors (as per this link)? Or is there more to it? In addition, I do need some kind of voltage regulator so that the battery can also power the Arduino running the quadcopter - can I simply use an LM7805 voltage regulator given my battery is 12V? AI: Yes, the power distribution board you linked is, electrically, just connecting the different power and ground supplies together respectively. It appears they also use it as a mechanical frame for a quadcopter. You can use the LM7805 for your Arduino. The current draw is presumably low, so the linear regulator won't be burning much power. Just note that the linear regulator is less efficient the more current you draw from its output. If your +5V supply will be powering much other than the Arduino, you should look into using a switch mode power supply (buck), as that will be more efficient at higher current draw.
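The efficiency remark about the LM7805 is easy to quantify: a linear regulator drops the full input-output differential at the load current, so its best-case efficiency is just Vout/Vin. A sketch (the 50 mA load is an assumed figure for an Arduino-class board, not from the question):

```python
def linear_reg_dissipation(v_in, v_out, i_load):
    """Power burned in a linear regulator, ignoring quiescent current."""
    return (v_in - v_out) * i_load

def linear_reg_efficiency(v_in, v_out):
    """Best-case efficiency of a linear regulator is the voltage ratio."""
    return v_out / v_in

p = linear_reg_dissipation(12.0, 5.0, 0.05)
print(p)                                 # ~0.35 W at 50 mA: easily manageable
print(linear_reg_efficiency(12.0, 5.0))  # ~0.42, independent of load current
```

At an amp or more the dissipation grows to several watts, which is when a buck converter becomes the better choice.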
H: MOSFETs: Gate-to-Source resistor and Gate resistor value calculation I want to control several MOSFETs through a microcontroller. I have a low-power application in mind, therefore power consumption should be minimized. When do I need a gate-to-source resistor, and when not? How do I calculate its value for minimum power consumption? Regarding the gate resistor: does it make a difference in terms of power consumption if I choose a 100 ohm resistor vs. a 1k ohm resistor? Added schematic (Dx pins: digital pins from the microcontroller): AI: Assumptions based on comments: The three NPN transistor symbols actually represent NMOS devices. The IO voltage used to drive the gates is 3.3V. The PNP transistor symbol actually represents a PMOS device. That is it for assumptions. The three low side switches will probably be OK. There are two things I would double-check to make sure. First, make sure the specific NMOS you use is fully turned on at 3.3V. Look for Rds(on) to be specified to an acceptably low level at Vgs of 3V or 2.7V. This should be no problem to find. For this kind of thing, look at Rds(on), not Vgs(th), because you need to know that 3.3V will fully turn on the transistor; Vgs(th) is specified at a very low current. Second, make sure that the IO signals which control the gates of these switches are at a well defined on or off voltage at any time that the load voltage is present. Sometimes IO pins toggle at reset or during boot up, looking for hardware that is not present or something like that. Sometimes they may default to inputs with weak pullups prior to when your application code takes control of the processor. I have been burned by this before. But as long as they do not get driven high unexpectedly during bootup, I don't think there is any need to add gate-to-source resistors. And anyway, the resistors would only help the weak pullup case. If the output is driven hard to a high state, a pulldown will not help you.
For the high-side switch to the regulator, you MIGHT get away with only driving the gate up to 3.3V if the highest voltage on RAW is 3.7V. However, if RAW is a Lithium Ion or Lithium polymer battery, it may be as high as 4.2V when the battery is fully charged. So in that case, the best bet is to add a gate-to-source resistor of, say, 100k, and a small NMOS (e.g., a BSS138) to pull down the gate when needed. The NMOS will be controlled by the GPIO. High means regulator on and low means regulator off. See schematic below. simulate this circuit – Schematic created using CircuitLab The BSS138 will work fine to turn on M1. PMOS M1 needs to have an acceptable Rds(on) at Vgs of 2.7V or 3V (just like the NMOS). This circuit consumes no power when the regulator is off. When the regulator is on, the only wasted power is in the 100k. You can probably go to 1M if you can't tolerate the 100k. But if the boost regulator is enabled, you will probably not be worried about 30 uA or 40 uA in the 100k resistor. It is up to you. The danger with very high resistances is that maybe the PMOS will start to turn on just a little if the NMOS drain-to-source leakage creeps up. This is more likely to be a problem at very high temperatures. Check the datasheet for the NMOS. Remember, with a 1M pullup, 1uA = 1V. The PMOS may not turn on at 1V, but it probably will turn on more than you want at 2V. One last thing. Do the NMOS devices need series gate resistors? Probably not. But for the buzzer, since you plan to PWM it, I would keep the gate resistor in there in case you determine that it is needed. If it is not needed, you can use a 0 Ohm jumper instead of a resistor, and maybe eliminate it in a future board revision.
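The standing current in the gate pull-up is easy to estimate. A sketch assuming a fully charged Li-ion at 4.2 V across the 100k resistor while the regulator is enabled (i.e., while the small NMOS holds the PMOS gate low):

```python
# Current and power in the 100k gate pull-up while the regulator is on.
# When the BSS138 pulls the PMOS gate low, essentially all of RAW
# appears across the pull-up resistor.
v_raw = 4.2       # assumed fully charged Li-ion voltage
r_pullup = 100e3  # pull-up value from the schematic

i = v_raw / r_pullup
p = v_raw * i
print(f"I = {i*1e6:.0f} uA, P = {p*1e6:.0f} uW")  # 42 uA, 176 uW
```

With a 1M pull-up both figures drop by a factor of ten, at the cost of the leakage sensitivity discussed above.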
H: Delay turn ON circuit using 555 I want to set a delay of around 5s between power supply turning ON and my circuit getting power. Here is the circuit I have come up with: Datasheet of mosfet I am using a 555 timer to set the startup delay and an N-channel MOSFET to supply power to the load. Most through-hole MOSFETs only specify a low Rds(on) at around Vgs = 10 V. Luckily I had a 12V supply readily available, so I am using 12 V to switch the 5V line as shown above. P4 is input and P3 is load. Is my circuit correct? Besides the initial turn-ON delay, I want the circuit to reset when power is disconnected (within half a minute or so; if it happens within 1-2 seconds, even better). To achieve this, I have put R7 (100K) as a discharge resistor on the capacitor, which will discharge it and get the circuit back into a working state. Is there a better way to do this? AI: The circuit looks good. If you want faster reset of the circuit, move R7 to between 12V and GND and reduce its resistance... when the 12V is turned off, C3 will discharge through D1 and the new R7.
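If the 555 is wired in the classic monostable (one-shot) configuration, the delay follows the standard t = 1.1·R·C formula. A sketch with hypothetical timing component values chosen for roughly 5 s (the actual schematic's values are not given in the question):

```python
# 555 monostable delay: t = 1.1 * R * C (standard one-shot formula;
# assumes the timer is wired in the classic monostable configuration).
r = 470e3   # hypothetical timing resistor, ohms
c = 10e-6   # hypothetical timing capacitor, farads

t = 1.1 * r * c
print(f"Delay = {t:.2f} s")  # ~5.17 s
```

Electrolytic capacitor tolerance (often -20/+80%) dominates the error here, so expect to trim R for a precise delay.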
H: How can I use two TP4056 with two li-ion batteries, but single load? I have two TP4056 modules (with protection circuit), which I am using to charge two batteries respectively, off a single power line. The modules are the ones that have charge output (B+/B-), and another regular output. I'd like to be able to use both batteries (increased capacity) to power a single load, even while charging. The batteries are rechargeable 3.7V 16340 Li-ions, same make and capacity. I'm not sure how to do this in a safe manner. I don't know if it would be safe to conjoin the outputs with this module, and I would like to avoid using diodes because of the voltage drop. The question is: How can I make this work, in order to be able to use the device even while charging, and how can I connect the two batteries safely to the device in the first place? My use case is described in the image. EDIT: Look to the comments for an answer regarding more than two cells. As the original question was for two cells, I had to choose one of two valid answers. AI: Don't. Just use one TP4056 and connect both cells in parallel after balancing them first. Don't connect batteries with more than 0.2V difference in parallel, as this can risk fire and explosions (excessive charging current from one to another). This will work because lithium cells have a wide voltage range, so when connected in parallel they will self-balance. The TP4056 module includes over-discharge protection circuitry. Power should be taken from the OUT terminals of the TP4056 module, not directly from the battery.
H: truth table for D flip flop with control variables x and y So I need to build a counter with D Flip-Flop and 2 control variables x and y. XY=00 --> 0-3-2-1-0; XY=01 --> 0-1-2-3-0; XY=10 --> 0-2-3-1-0; XY=11 --> 0-1-3-2-0. How should I make the truth table? Should I write out each state sequence for every given control combination (00, 01, 10, 11), or should I just list x and y as 0 and 1 in order? AI: You should create a K-map with control and current state as your axes. From that you can derive the next state. $$ \begin{array}{lc|cccc} \ && \rlap{AB_\text{(current state)}} \\ & & 00 & 01 & 11 & 10 \\\hline XY & 00 & \color{blue}{1}\color{red}{1} & \color{blue}{0}\color{red}{0} & \color{blue}{1}\color{red}{0} & \color{blue}{0}\color{red}{1} \\ & 01 & \color{blue}{0}\color{red}{1} & \color{blue}{1}\color{red}{0} & \color{blue}{0}\color{red}{0} & \color{blue}{1}\color{red}{1} \\ & 11 & \color{blue}{0}\color{red}{1} & \color{blue}{1}\color{red}{1} & \color{blue}{1}\color{red}{0} & \color{blue}{0}\color{red}{0} \\ & 10 & \color{blue}{1}\color{red}{0} & \color{blue}{0}\color{red}{0} & \color{blue}{0}\color{red}{1} & \color{blue}{1}\color{red}{1} \\ \end{array} \\ \begin{align} \color{blue}{a}_\text{(next state)} & = \bar{Y}\bar{A}\bar{B} + Y\bar{A}B + \bar{X}\bar{Y}AB + \cdots_\color{blue}{\text{(other blue groupings)}} \\ \color{red}{b}_\text{(next state)} & = XY\bar{A} + X\bar{Y}A + \cdots_\color{red}{\text{(other red groupings)}} \\ \end{align} $$ Note: To ensure I'm not doing your homework, the equations are intentionally incomplete and the K-map entries may or may not be accurate. If finding the groups from a double variable is challenging, split into separate K-maps (one for red entries, the other blue).
H: Should I use MBED of ARM or native platform of microcontroller suppliers? I got a STM32F429I-DISC1 development board from STMicroelectronics and I know mbed supports it. So if I want to learn about ARM and develop a commercial product in the future, should I use the mbed platform or the native platform from STMicroelectronics? AI: It depends. Do you want easy portability between ARM microcontrollers of different product families or even manufacturers? Do you wish to develop IoT applications and want simple internet connectivity? Would you like a vast selection of portable, easy to use libraries? Use mbed. Do you despise having to reinvent the wheel, but don't want the overhead of mbed? Do you dislike developing code with a web app? Is the ability to switch to a different manufacturer superfluous to you, but portability within the same product family is still required? Use the native libraries from ST (HAL or SPL). Do you want full control over every little detail of the hardware and software? Do you want to optimize your code to fit in the cheapest microcontroller, execute super fast, or use as little power as possible? Do you hate having to deal with over-abstracted, slow and inflexible libraries and would rather do it yourself? Program bare metal: include just CMSIS and register definitions, study the datasheet with care, and write your own code.
H: MFB Analog filter with Reference voltage transfer function I am looking for the transfer function of the filter below: This is a multiple feedback second order analog filter. I was already able to calculate the transfer function of the filter without Vref on the positive pin of the op-amp, but I couldn't find it with Vref. Can anyone give me the transfer function of this filter, or maybe a tip on how to calculate it? Also, how does Vref affect the offset? Is it added directly to the output? AI: A DC bias voltage Vref is applied (instead of grounding the non-inverting input node) to establish a suitable bias point at the opamp output that is NOT zero volts. This is necessary only in the case of a single supply. On the other hand, the transfer function of the whole circuit concerns the AC response only, independent of the supply voltage and the chosen DC bias point. Hence, there is no influence of Vref on the transfer function.
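Regarding the offset part of the question: Vref is not added directly to the output. A sketch by superposition, assuming the standard MFB low-pass topology (R1 from Vin, R2 as the DC feedback resistor, and both capacitors open-circuit at DC, so the inverting input and the summing node both sit at Vref):

$$V_{out,DC} = \left(1 + \frac{R_2}{R_1}\right)V_{ref} - \frac{R_2}{R_1}\,V_{in,DC}$$

That is, Vref appears at the output amplified by the non-inverting DC gain \$(1 + R_2/R_1)\$, while the signal keeps its inverting DC gain of \$-R_2/R_1\$. For a mid-supply output bias, Vref is therefore usually chosen so that \$(1 + R_2/R_1)\,V_{ref}\$ lands at half the supply voltage.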
H: How to test the output current of a LED driver without a LED lamp? I have very basic knowledge of electronics but I have to design a circuit to control this LED driver (Constant Voltage + Constant Current LED Driver). I already have a prototype and I need to test it, but I do not have a 20V LED lamp of 40W to check that the control is done correctly. That is, the output current is controlled by PWM or by 1-10V. Instead of using the LED lamp as a load, could a 10-ohm 25W resistor be used? Or, is it enough to connect the ammeter directly to the output of the LED driver? Thank you. AI: An ammeter will need a load in order to measure the current flowing through the circuit. The ammeter alone will not suffice. You also answered your own question: Instead of using the LED lamp as a load, could a 10-ohm 25W resistor be used? Seeing as you want a 40W load, then no, this will not do. You would need a resistor that can take the 40W. If you want to simulate a 40W load, then start again from the beginning. Ohm's law is your friend here. Find the resistor you require, and just make sure you have one that can take the current/power: R = V/I, and P = I^2*R = V^2/R = I*V. Use those equations to find a resistor you want to use, work out the maximum power it would be handling, then find a component to match your answers. Fit the component as a dummy load, then connect your ammeter in series with the circuit to measure the current. Or, to make it much easier on yourself, just get yourself the 40W LED lamp you require; they are pretty common.
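Working those equations through for this particular driver (20 V, 40 W):

```python
# Sizing a dummy-load resistor for a 20 V / 40 W constant-current LED driver.
v = 20.0   # driver output voltage
p = 40.0   # rated output power

i = p / v  # expected output current
r = v / i  # resistance that draws rated power at rated voltage
print(f"I = {i:.1f} A, R = {r:.1f} ohm, P = {i**2 * r:.0f} W")
# I = 2.0 A, R = 10.0 ohm, P = 40 W: a 10 ohm resistor is the right value,
# but it must be rated for (at least) 40 W, not 25 W.
```

In practice a resistor run at its full rating gets very hot, so a 50 W or larger part on a heatsink is a safer choice.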
H: Why does speed of DC motor increase when flux is reduced? I understand that from KVL, e = V − Ia·Ra, and e = k·(flux)·(speed), so speed = (constant)·(V − Ia·Ra)/flux. But physically, what causes the speed to increase? What force causes the rotor to accelerate? In fact, physically, the rotor is moving because of the field flux interacting with the armature current's flux. So physically, if I reduce one flux, the speed should reduce, since I have reduced the cause of motion, as torque also depends on flux (F = BIl). AI: The instant flux is reduced, the back EMF reduces and this causes the armature current to increase. More current means a higher driving torque and this accelerates the armature to run at a higher speed until the speed equation is in balance again. If there is a significant mechanical load this may not happen, of course.
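The steady-state balance described above can be seen numerically. A sketch with hypothetical motor constants (all values invented for illustration; the load torque is held constant while the flux is halved):

```python
# Steady-state DC motor: back-EMF e = k*phi*n, torque T = k*phi*Ia.
# At constant load torque, Ia = T_load / (k*phi) and
# n = (V - Ia*Ra) / (k*phi).
V, Ra, k, T_load = 100.0, 0.5, 1.0, 10.0  # hypothetical values

def steady_speed(phi):
    ia = T_load / (k * phi)        # armature current needed for the load torque
    return (V - ia * Ra) / (k * phi)

for phi in (1.0, 0.5):
    ia = T_load / (k * phi)
    print(f"phi={phi}: Ia={ia:.0f} A, speed={steady_speed(phi):.1f} rad/s")
# phi=1.0: Ia=10 A, speed=95.0 rad/s
# phi=0.5: Ia=20 A, speed=180.0 rad/s
```

Halving the flux doubles the armature current needed for the same torque, and because the Ia·Ra drop is small compared with V, the speed nearly doubles: the reduced back-EMF lets extra current flow, and the surplus torque during the transient is what accelerates the rotor.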
H: How to create a negative voltage supply? I am trying to incorporate a LEM LAH 100-P current sensor into a project. It requires a dual-polarity voltage supply between +/- 12 and 15 volts. What is the best way to implement it? Digi-Key has an article which references many other application notes, but I don't have knowledge of power electronics, which makes adapting them difficult. This application note seems promising, but the components used are for an output of -3.3 to -5.5 volts (part at Digi-Key and datasheet). Figure from the application note: What components could be used to create a negative supply of 12 to 15 volts? Any suggestions welcomed. AI: As has already been stated in comments to the original question, there are several possibilities. You could use, in increasing order of complexity: (1) a DC/DC power supply module (typically a plug-and-play 'block'); (2) a charge pump (essentially a controller IC with capacitors); (3) an inverting switching power supply (essentially a controller IC with an inductor); (4) an added winding on an existing AC/DC power supply. Since you ask this question, I would suggest you select the first option. It is not the cheapest, but it is the most fail-safe. For the first three options, you can easily use the selection criteria at the large component distributors (Mouser, Farnell, Digi-Key, RS) and select the appropriate input and output voltage range and output current requirement.
H: Power up sequence for AD5724 The datasheet says: POWER-UP SEQUENCE Because the DAC output voltage is controlled by the voltage monitor and control block (see Figure 42), it is important to power the DVCC pin before applying any voltage to the AVDD and AVSS pins; otherwise, the G1 and G2 transmission gates are at an undefined state. The ideal power-up sequence is in the following order: GND, SIG_GND, DAC_GND, DVCC, AVDD, AVSS, and then the digital inputs. The relative order of powering AVDD and AVSS is not important, provided that they are powered up after DVCC. I can't see any information in the document about how much time after DVCC the other rails should rise. My question is: is it OK to power all the rails together (+12V and +3.3V come from LDOs, -12V from an ICL7660A), minimizing the time when G1 and G2 are in an undefined state, or is it mandatory to delay AVDD/AVSS? In that case, what is the minimum delay needed? Figure 43 shows an external circuit to delay AVdd, but the text says: C1, R2, and R3 are the main components that dictate the delay from DVCC enable to AVDD. Adjust the values accordingly for the desired delay. Well, I really don't have a desired delay... they seem to have one! AI: The sequencing requirement is there to prevent the dreaded CMOS SCR latch-up effect, which arises from the inherent PNPN structure in the substrate (search for "CMOS latch-up" if you don't know about it). Compare the slew rate of your supply rails with the speed of the transistors: any delay of more than about 1 µs between DVCC settling and the analog rails rising should be OK.
H: Logic-Level Converter: How high is the power consumption? Regarding the following schematic for a bi-directional logic level converter, can someone explain how to calculate power consumption? Can I use pullup resistors with higher values, e.g. 1M? I need such a converter in my circuit, but the device is battery operated and should therefore consume as little power as possible. LV1 is connected to a digital pin of an Arduino Pro Mini 3.3V. LV is connected to 3.3V. HV1 is connected to a Water Flow Sensor. HV is connected to 5V. AI: The power consumption is easy to estimate: P=U²/R. As it stands currently, R3 will burn 1.1mW and R4 2.5mW. The MOSFET consumption can be neglected. But this will be consumed only when the logic level is low. If the logic level is high, this circuit consumes virtually no power at all. Therefore, you must also consider the logic level duty cycle of the lines. And if, as I suspect, this is used to translate I2C bus lines (which idle at logic level high), this simply depends on the length of the frames and the rate at which you send them. If you send one frame every other second, you can probably just ignore the power consumption of this altogether. If you keep sending frames continuously (which is bad), you may consider a 50% duty cycle (divide the power by two). So the main power consumption factor here is not the resistor value but the amount of communication needed between both devices. I would therefore really try to reduce it as much as I can, possibly even by modifying the protocol if possible. Now, if everything has been done at this level and you still need to reduce power consumption, you can indeed use higher value resistors. But the highest value you can use depends on the kind of bus (which you didn't specify) and the devices' specifications. If it is an I2C bus, you can use this application note from TI to size the resistor (but if it is something else, the information given there will still be useful).
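A quick check of those figures, assuming 10 kΩ pull-ups (the value implied by the stated 1.1 mW and 2.5 mW numbers):

```python
# P = V^2 / R for each pull-up while its line is held low.
r = 10e3  # assumed pull-up value, consistent with the quoted figures

p_lv = 3.3**2 / r  # low-voltage side pull-up (3.3 V rail)
p_hv = 5.0**2 / r  # high-voltage side pull-up (5 V rail)
print(f"LV side: {p_lv*1e3:.2f} mW, HV side: {p_hv*1e3:.2f} mW")
# LV side: 1.09 mW, HV side: 2.50 mW
```

Scaling both pull-ups from 10k to 1M would cut these figures by a factor of 100, at the cost of slower rising edges (higher RC with the bus capacitance).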
H: Implementing a voltage sensor with voltage dividing resistors I am implementing a power storage system. The voltage of the storage device must be sensed in order to direct current to and from the storage device. The storage device voltage will vary between 0 and 60 volts, and the system's current will be up to 60 A. The micro-controller operates at 3.3 V. To map 0 to 60 volts onto 0 to 3.3 volts: 60 V x 10 kΩ / (10 kΩ + 170 kΩ) = 3.33 volts. The current through the resistors is 0.333 mA. (Corrected from original 3.33) Would this work as expected? Is there something I am neglecting? AI: Your system will work but there are a few things you need to be mindful of. You are dividing the 60V to the max that the micro can handle. This means if the storage device is ever over 60V you will be presenting too high a voltage to the micro and your ADC will not be able to detect it. You would be better to use something like 10/190 so you present 3V at 60V, allowing 10% overshoot on the 60V. The numbers are also easier to work with. As WhatRoughBeast pointed out, the divider will always drain some current from the source, be that only about 333uA. You may want to consider adding some form of switching circuit so you only attach the divider to the 60V when required. If you are using 1% resistors, the possible measurement error through the divider is +-2%. That may or may not be a problem for your application; however, it is a problem if you use 10/170, since +2% will push the presented voltage over the magic 3.3V. Most micros these days have the option to use a stable internal voltage reference for the ADC instead of the rail voltage. If yours does (you did not specify the micro) you should scale the divider to use that level instead, again with that 10% overhead. That will remove errors caused by whatever the 3.3V supply is doing and get you more temperature stability.
Adding some small capacitance to the division point will also help make the system less sensitive to noise, both ambient, and on the 60V rail. There is a balance here though, do not make it too large or your sample time needs to extend to cope with the slower step response time. You should also consider splitting the top resistor to make it two in series. This will provide you with a little extra isolation from the 60V, split the power dissipated, and remove the potential of a single fault short in the resistor blowing up the micro. A little extra protection would not hurt either. Finally, this design is high impedance. That means it is affected by the input impedance of the ADC. Using a voltage follower buffer between the divider and the ADC is prudent.
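The arithmetic behind the original 10k/170k divider and the suggested 10k/190k version:

```python
# Divider output voltage and drain current for the two ratios discussed.
v_max = 60.0
r_bot = 10e3
for r_top in (170e3, 190e3):
    v_adc = v_max * r_bot / (r_bot + r_top)
    i = v_max / (r_bot + r_top)
    print(f"R_top = {r_top/1e3:.0f}k: V_adc = {v_adc:.2f} V, I = {i*1e3:.3f} mA")
# R_top = 170k: V_adc = 3.33 V, I = 0.333 mA  (no headroom above 3.3 V)
# R_top = 190k: V_adc = 3.00 V, I = 0.300 mA  (~10% overshoot margin)
```

The drain current is essentially unchanged, so the 10k/190k ratio buys the overshoot headroom for free.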
H: How does an ARM processor in thumb state execute 32-bit values? What I understand is, the ARM mode can execute 32-bit of instructions and Thumb mode can execute 16-bit of instructions. For instance, Here is the ARM instructions set: And Thumb instructions set: From these both instruction set tables, please see ADC mnemonic that describes add two 32-bit values and carry. So, 32-bit is common in both the modes. What I didn't understand is, how a thumb mode which can execute 16-bit is able to execute 32-bit value? I referred the other books (which I could) and in those books also same description is given. Please explain me this concept/correct me which I misunderstood. AI: The data bus width of the processor has nothing to do with the length of the instructions. The ARM processor can manipulate 32 bit values because it is a 32-bit processor, whatever mode it is running in (Thumb or ARM). It just means its registers are 32 bits wide. And the registers don't change when you switch mode. Now, it doesn't have any implication on the length of the instructions. The instructions could be encoded in any length. The x86, for example, uses 8-bit instructions but is also able to work on 32 bit values. For ARM, this is what changes when you switch to/from ARM and thumb modes. For example, the instruction MOV R0, R1 (copy the contents of the 32-bit R1 register to the R0 register) is encoded in the following way: E1A00001 for ARM (32 bit) 4608 for Thumb (16-bit) But the processor, in the end, will perform exactly the same operation, and it will do it on 32-bit wide data, whatever the mode. This ability to switch modes simply allows you to decide on the compromise between code density and flexibility. You can pack more instructions in a kB of code with 16-bit instructions, but the 32 bit instructions are more flexible (they offer more features and you can do more with a single instruction).
H: If a datasheet doesn't state I2C address pins have pull-up/down resistors, does that mean I must connect them? I've got a handful of DS75S+ temperature sensors. They're SO-8 and I'm going to solder them onto prototyping boards for use with a Raspberry Pi. I want to set the addresses sequentially so I have the option of using as many as I like on one Pi (up to 8, but I only have 5). The datasheet says nothing about internal pull-up/down resistors on the address inputs, so does this mean they're floating? Tying them high is trivial, tying them low might be a little messy. But I don't want to set it floating only to find later that I've caused a problem (especially as, once tested, I'll probably pot them in epoxy). The "Detailed pin description" says "7 A0 Address input pin." etc., there's no typical circuit and the block diagram has nothing of any help. I've seen how do you typically tie the address pins and WP pin of I2C device? Do these lines need a pull-up/down resistor or just tied directly (closed) which has a helpful answer that unfortunately starts "Completely dependent on the specific IC you are using, and would be listed in the data sheet. " which isn't true here AI: I had a quick check of the datasheet (No I did not read the complete sheet but looked at every instance of the text 'address'). The data sheet does not mention anything like pull-ups/downs or defaults thus you must assume the worst case: you have to tie each pin either high or low.
H: common mode choke on DC line Usually common mode chokes are used on AC lines to take care of common-mode noise. If I am using a CMC on a DC line, would the inductance degrade? AI: A common mode choke on a DC line will normally be passing some DC current through one winding and the same (but in reverse) DC current back through the other winding, thus it is allowing power through the choke with very little hindrance. Under these circumstances, the magnetic flux in the core is cancelled and therefore there can be very little saturation of the core that might lead to a reduction in inductance. It's the same scenario for AC: at any point in the alternating cycle, the current passing into one winding is the same as the current being returned by the load, and therefore it is passing back through the (reversed) second winding. This offers a very small impedance to normal power currents passing to the load. In both these cases flux largely cancels but, in the case of a common mode current, flux doesn't cancel, hence the choke "offers" a significant impedance and this attenuates the common-mode current up to the point at which saturation starts to occur. Short story: AC and DC current being passed to a load do not affect common-mode inductance.
H: p-channel MOSFET switch I want to use a MOSFET as a switch driven by my microcomputer. The original circuit using an N-channel MOSFET is on the left side. Honestly, I do not understand the choice of the IRLZ44. The circuit is designed for Arduino, which has 5V logic. Which means that for GPIO=True=5V, the MOSFET turns on and lets the current into the load. However I have two problems: I am using a Raspberry Pi, which has 3.3V logic. According to available information, 3.3V is not enough to fully turn on the MOSFET. I want my load to be connected to the ground (I had to do some voltage measurements). I know enough electronics to assume that using a P-channel MOSFET, as shown on the right side, might solve both of my problems at one stroke. For GPIO=False=0V the MOSFET will be fully on, while GPIO=True=3.3V puts -1.7V on the MOSFET gate and practically shuts it down. If that does not suffice, I could also put the GPIO into input mode and therefore let the MOSFET gate be pulled to 5V. Could you please tell me if the idea will work? And what IRLZ44-equivalent P-channel MOSFET should I use? AI: First off, the rules of the site do state not to ask for recommendations of products, so I will skip that bit. Just read the datasheets as everything will be explained in there. If there is something on a datasheet you do not understand, please post a separate question about it. Now, on to your problem. From what I think you are trying to do, you may find you are not able to switch the P-MOSFET fully, or you may have some difficulty unless you understand the datasheets properly. What may be an easier idea is to use a MOSFET pair, where you toggle an N-channel MOSFET to pull the gate of the P-channel to 0V, like so: simulate this circuit – Schematic created using CircuitLab I have used this circuit a few times with no issues. However, as always, make sure to read the datasheets to make sure your components are able to do what you want.
You don't always have to use the same components as shown in example circuits. Base your components on your own needs. Example circuits are great for learning how things work, but are not always the most practical. When it comes to designing your own circuit based off an example, you should always consider your own needs, and base your component choice off of that, rather than just use whatever the example has.
H: Convention regarding BJT collector current calculations Using the equation Ic=Is*e^(Vbe/Vt) yields impossible values for me. I am using Vbe = 0.7 V & Vt = 0.025 V. Is in the circuit is 3.4mA. I am trying to determine Ic as an initial value for analyzing the rest of the circuit, as the rest of the values are unknown. I assume I am missing a convention associated with the magnitude of Vt or Vbe. Is this the case? Example circuit: simulate this circuit – Schematic created using CircuitLab AI: You have stated Is = 3.4mA. I notice that if the transistor is saturated, then 24v is dropped across R1+R2, and the current is 3.4mA. Although Is is called the 'saturation current', it does not mean 'the current that flows in a circuit when the transistor is saturated'. It is a property of the transistor, and usually has a value down in the pA or fA. It's the current drawn by a diode junction when it's reverse biassed. Attempting to use the wrong value of Is for the wrong purpose will not help you understand a transistor circuit. In the absence of any other data (you have told us everything that's relevant, haven't you?), you have to make some judicious assumptions, and in this order. There is an emitter resistor R2. So the application is likely to be a linear circuit, not a saturating switch, which would have the emitter tied directly to ground. So as it's a linear circuit, you'd expect the collector voltage to be 'sensible', biassed to about half the available voltage swing. If we ignore the base current (a reasonable assumption), then the R1 voltage drop will be 2.5x the R2 drop. Arrange for 12v across these two resistors, and 12v across the transistor. That's about 3v on R2, for a current of 1.5mA. That current will drop about 7.5v across R1. As there's 3v on the emitter, you'll want 3.7v on the base. Any voltage close to this will result in a 'reasonable' bias point.
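The scale of the error is worth seeing numerically. A sketch comparing a realistic Is in the femtoamp range (1 fA is an assumed, typical small-signal value) against the mistaken 3.4 mA, with Vt = 0.025 V:

```python
import math

# Ic = Is * exp(Vbe / Vt) -- the Shockley / Ebers-Moll relation.
vbe, vt = 0.7, 0.025

ic_realistic = 1e-15 * math.exp(vbe / vt)  # Is = 1 fA (assumed typical value)
ic_mistaken = 3.4e-3 * math.exp(vbe / vt)  # Is wrongly taken as the circuit current

print(f"Is = 1 fA   -> Ic = {ic_realistic*1e3:.2f} mA")  # ~1.45 mA, a sensible bias
print(f"Is = 3.4 mA -> Ic = {ic_mistaken:.2e} A")        # billions of amps, impossible
```

The exponential factor e^28 is about 1.4e12, so only a femtoamp-scale Is gives milliamp-scale collector currents at Vbe = 0.7 V.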
H: Categorizing ICs as digital or analog I'm new to electronics and trying to understand the difference between analog and digital ICs. My understanding of analog and digital signals is up to scratch but I am struggling to categorize ICs into either analog or digital based on their function. After digging around in datasheets and looking around online I am becoming more confused! I'm hoping someone can point me in the right direction or re-align my thinking so I can figure out what these ICs are without having to ask about individual components. My confusion mainly comes from looking at functional block diagrams in datasheets. For example, if we look at this Half-Bridge Gate Driver, http://www.ti.com/lit/ds/symlink/ucc27211a.pdf the schematic on pg 13 shows a diagram containing logic gates (digital signals) and what seems to be amplifiers of some sort (?), a diode, Schmitt trigger buffers (?), which are analogue components, so is this IC a digital or analog IC? There are also some functional blocks, level shift and UVLO; I'm not even sure where to start with these! Also, I seem to have been convinced that transistors are analogue components, but these are then used to create a logic gate. The line between analog and digital is becoming ever more blurry, any help is greatly appreciated. Thanks in advance. AI: It's blurry because it is not black and white, or rather there is a huge grey scale between black and white that sometimes can be considered analog and other times can be considered digital depending on the application. In reality everything, except perhaps at the atomic level, is actually analog. Digital circuits are actually analog comparators, usually with some hysteresis, that flip their outputs depending on what the input voltage or current does. Once you have such a "gate" you can combine them in an infinite number of ways to produce more complex logic functions. Some devices, like simple latches etc., are purely logic gates.
Others are hybrids that contain both "digital" and analog circuits. A simple transistor can be used either as a current amplifier, or it can be used as a switch. Or more accurately, a transistor has three states: Saturated (ON), Reverse Biased (OFF), and Linear (neither on nor off). As such, which mode you are using is dictated by how it is driven and arranged. When you are using it as a switch, using the On and Off states, the transistor always has to go through that linear region during the transition. So yes, the definition is blurry. As for your indicated device, it falls into that hybrid category. It takes digital inputs and outputs analog voltages based on those inputs and what the load is doing. In the end, as Andy pointed out, it usually doesn't matter what you call it. There are some exceptions though. A good example of that is a multiplexer, a device that channels inputs to outputs based on some control signals. A digital multiplexer is very different from an ANALOG multiplexer.
H: How would you calculate the input and output impedance of this amplifier? How would you calculate the input and output impedance of this amplifier? I have done a lot of research and found so many conflicting pieces of information. Not sure what is right. C1 = 10 µF, C2 = 1000 µF R1 = 50KΩ, R2 = 7.5KΩ, R3 = 820Ω, R4 = 100Ω, R5 = R6 = 10Ω D1 = D2 = 1N4148 Q1 = 2N3904, Q2 = TIP31C, Q3 = TIP32C RL = 8 Ω or 16 Ω Supply Voltage = 12V Thanks AI: If Q1 was an NPN transistor (as I suspect it should be) the input impedance at mid-band (no appreciable extra impedance due to C1) is approximately R1 || R2 and, if you wanted to be pedantic, you would have that impedance in parallel with \$\beta\cdot\$R4, where \$\beta\$ is the gain of Q1. The output impedance (mid band, hence ignoring C2) is dependent on the current flowing through Q2 and Q3. If that current is small then the internal \$r_E\$ might be higher than R5 or R6 and will sway things. If you ignored \$r_E\$ then on positive half cycles the output impedance is R5 and on negative half-cycles it is R6. If R = R5 = R6 then the output impedance is approximately R. The above assumes that RL isn't connected. I have also assumed that the circuit is for an audio amplifier with mid-band frequency around 1 kHz.
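Plugging the given component values into those approximations (the transistor gain of 100 is an assumed value, since the 2N3904's actual beta is not given):

```python
# Mid-band input impedance ~ R1 || R2, optionally also in parallel with beta*R4.
def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

r1, r2, r4 = 50e3, 7.5e3, 100.0
beta = 100  # assumed current gain for Q1

z_in_simple = parallel(r1, r2)
z_in_with_beta = parallel(r1, r2, beta * r4)
print(f"R1 || R2 = {z_in_simple/1e3:.2f} k")                # ~6.52 k
print(f"R1 || R2 || beta*R4 = {z_in_with_beta/1e3:.2f} k")  # ~3.95 k
# Mid-band output impedance is roughly R5 (or R6) = 10 ohm per half-cycle,
# ignoring the internal r_E of the output transistors.
```

Note how much the beta·R4 term matters when R4 is small: the "pedantic" figure is close to half the simple R1 || R2 estimate.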
H: What kind of signal processing circuitry do I need to generate a line level output on an Arduino? I am trying to create an Arduino based music synthesizer. How can I safely generate line level output (+/- 2 volts centered at zero, with a frequency range from 20Hz-20KHz) from my Arduino using a minimal number of components? This is what I imagine the flow will look like, but please correct me if this is wrong. Generate a sine wave tone using a DAC (I'm doing this already using MCP4725) Level shift the signal -2.5 volts and lower gain To perform level shifting I think I need to generate a negative 5 volts to supply to a dual supply op amp, but I'm not sure if this is correct There is a lot of confusing/mixed information on line level requirements. I hooked up the output jack of my macbook pro to an oscilloscope and generated a square wave. It looks like the macbook pro puts out -2 to 2 volts, so I think this is where my target output voltage should be. Edit: My target output voltage is 1.25VRMS, since I am using a QSC PLX3602 amplifier with an input sensitivity of 1.25VRMS. Some questions: How many milliamps do I need to be able to source for line level Given that I am going to be outputting square waves (which can sometimes damage speakers), is there anything I should keep in mind? I am planning on matching my amplifiers RMS wattage rating with the speakers RMS rating. Do square waves produce higher current than RMS? Can anyone recommend a schematic or components I can use to accomplish the signal conditioning needed to do this safely/without damaging audio equipment? AI: To perform level shifting I think I need to generate a negative 5 volts to supply to a dual supply op amp, but I'm not sure if this is correct. It's much simpler than that. Just add a DC blocking capacitor in series with the output. We'll calculate the value in a moment. It looks like the macbook pro puts out -2 to 2 volts, so I think this is where my target output voltage should be. 
See Wikipedia's Line level for more on this but that will be plenty. How many milliamps do I need to be able to source for line level? Use Ohm's law. You'll need to find the input impedance of what you are driving but it's usually > 10k so current drain won't be a problem. Given that I am going to be outputting square waves (which can sometimes damage speakers), is there anything I should keep in mind? I am planning on matching my amplifier's RMS wattage rating with the speakers' RMS rating. Do square waves produce higher current than RMS? You're getting mixed up. An RMS measurement allows comparison between different waveforms. If they have the same RMS value then they will have the same heating effect or power as each other or as a DC current of the same value. The problem with square waves is that they are high in harmonic content and, theoretically, these continue up to infinity. You can get an understanding of this from the Fourier transform of a squarish wave. Figure 1. Fourier transform from time domain to frequency domain. Source: unknown to me. Can anyone recommend a schematic or components I can use to accomplish the signal conditioning needed to do this safely/without damaging audio equipment? simulate this circuit – Schematic created using CircuitLab The capacitor and amplifier input will form a high-pass filter. (Think: it blocks DC, which is 0 Hz.) The cut-off value is determined by \$ f_c = \frac {1}{2 \pi RC} \$. You can read more and find a calculator on Learning Electronics.
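The cut-off formula is easy to check numerically; a sketch assuming a 10 µF blocking capacitor and a 10 kΩ amplifier input impedance (a typical figure, not one given in the question):

```python
import math

def hp_cutoff(r_ohms, c_farads):
    """Cut-off frequency of a first-order RC high-pass filter."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

C = 10e-6   # DC-blocking capacitor
R = 10e3    # assumed input impedance of the driven amplifier

fc = hp_cutoff(R, C)
print(f"fc = {fc:.2f} Hz")  # comfortably below the 20 Hz audio limit
```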
H: How can I tell from a datasheet if a thermocouple amplifier IC will work with a grounded thermocouple? There are several different solutions available for thermocouple amplification such as the MAX31856, AD8495, and LTC2983. Some of these support grounded thermocouples (AD8495) and some don't (MAX31856). This is not explicitly stated in the datasheets. How can I tell? My best guess is that I need to look for a common-mode voltage range down to 0V. Is this correct? Below is a picture of a "grounded thermocouple" configuration. AI: Typically you look for this (MAXIM part): - Input Common-Mode Range 0.5 to 1.4 V It doesn't go down to (or below) 0 volts, hence it's no good for a ground-connected TC. The AD part says this: - Input Voltage Range −VS – 0.2 to +VS – 1.6 And this would be suitable to go 0.2 volts below the negative rail on the chip. And for the LT part it says a similar story: - Common Mode Input Range –0.05 to VDD – 0.3 So you have:
- Input Common-Mode Range
- Input Voltage Range
- Common Mode Input Range
as the main phrases to look for when choosing a chip that can handle grounded TC inputs.
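That check can be mechanized; a sketch where the limits come from the datasheet lines quoted above, and the single-supply rails (5 V supply for the AD8495, 3.3 V VDD for the LTC2983) are assumptions of mine:

```python
def supports_grounded_tc(cm_min, cm_max, v_tc=0.0):
    """A grounded TC sits at ~0 V, so 0 V must fall inside the CM range."""
    return cm_min <= v_tc <= cm_max

parts = {
    "MAX31856": (0.5, 1.4),    # Input Common-Mode Range: 0.5 to 1.4 V
    "AD8495":   (-0.2, 3.4),   # -Vs - 0.2 to +Vs - 1.6, assuming Vs = 0/5 V
    "LTC2983":  (-0.05, 3.0),  # -0.05 to VDD - 0.3, assuming VDD = 3.3 V
}
for name, (lo, hi) in parts.items():
    verdict = "OK" if supports_grounded_tc(lo, hi) else "not OK"
    print(f"{name}: grounded thermocouple {verdict}")
```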
H: BJT Constant current LED driver I need to build a constant current driver for an LED with a forward voltage of 3.4V and 350mA maximum current. The driver will be controlled by a PWM signal from a 3.3V MCU. Reading this post and doing the calculations using my system specs, I came up with the following circuit: simulate this circuit – Schematic created using CircuitLab I had to choose a 12V power supply due to the high forward voltage of the LED, which resulted in a minimum power supply voltage of 6.5V. Therefore I couldn't use a 5V power supply. However, I'm concerned about the power dissipation on the transistor. If my calculations are right, the power dissipated across it would be (neglecting base current and considering Vbe=0.7V, so the emitter sits at 3.3 V - 0.7 V = 2.6 V): $$P \approx (12\text{ V} - 3.4\text{ V} - 2.6\text{ V}) \times 0.35\text{ A} = 2.1\text{ W}$$ And the BCP56 can only dissipate a maximum of 1.35W with a properly sized mounting pad in the PCB. First of all I would like to know if my calculations are right and, in case they are, what would be a good solution. The only two options I can think of are either picking a beefier transistor that can dissipate more power or reducing the power supply voltage, although I like the idea of using a 12V power supply since it's easier to find locally. Furthermore, is a BJT a good solution for this type of driver, or is changing to a MOSFET-based driver a more suitable option? AI: I'd be using 5V and a MOSFET here to limit the current required from the GPIO and coupling it in a typical current limiter as shown below. R1 and the Vbe of Q1 roughly set the current limit. If you need it more accurate than that you either need a pot in there or a more active circuit. simulate this circuit – Schematic created using CircuitLab The gate threshold of the MOSFET needs to be under or close to 1V. Power lost to the MOSFET is about 300mW and R1 is 215mW.
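The dissipation comparison can be sketched numerically. The 2.6 V sense drop for the 12 V emitter-follower driver (3.3 V PWM minus one Vbe) and the ~0.65 V Vbe-sized drop for the 5 V MOSFET limiter are my assumptions, reconstructed from the figures in the question and answer:

```python
def pass_device_power(v_supply, v_led, v_sense, i_led):
    """Power in the pass transistor: (supply - LED drop - sense drop) * current."""
    return (v_supply - v_led - v_sense) * i_led

I_LED, V_LED = 0.35, 3.4

# 12 V BJT emitter-follower driver: emitter assumed at 3.3 V - 0.7 V = 2.6 V
p_bjt = pass_device_power(12.0, V_LED, 2.6, I_LED)

# 5 V MOSFET limiter: sense resistor drops roughly one Vbe (~0.65 V assumed)
p_fet = pass_device_power(5.0, V_LED, 0.65, I_LED)

print(f"12 V BJT driver : {p_bjt:.2f} W")  # well above the BCP56's 1.35 W limit
print(f" 5 V FET limiter: {p_fet:.2f} W")  # close to the ~300 mW quoted above
```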
H: Creating circuit diagrams from verbal descriptions. (Digital Logic) I have these two descriptions of systems: And I want to answer them correctly (comes from a book with no solutions). For #9, isn't that just going to be an AND gate? We all know the truth table for that. But #10 is a little more difficult. I'm not exactly sure how to go about that one. I've looked through my table of gates and their truth tables, but none of them seem to be what I'm looking for. Thanks for any help! Truth table for #10 (for OP to fill out). [Transistor]

A | B | Z
---+---+---
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1

EDIT: So it is just an OR gate? When no switches are on then the light is off; if either switch is on then the light is also on; when both are on then the light should be on. That's how I read it at least and it makes sense. Thanks for your help, Transistor. EDIT 2:

A | B | Z
---+---+---
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0

So it is an XOR. AI: Perhaps you live in a land of single-story houses and are not familiar with stairway light switches. The idea is that you can change the state of the light by switching either switch. Here's how they're wired. simulate this circuit – Schematic created using CircuitLab Figure 1. Stairway lighting circuit. Note that the logic can be inverted if the wires are crossed over between the switches so there are two possible solutions. Now back to your truth table ...
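The behaviour can also be sketched in a few lines of code; this assumes the non-inverted wiring, where the light is on when the two switches disagree:

```python
def stairway_light(a: int, b: int) -> int:
    """Two-way switch circuit: toggling either switch changes the light (XOR)."""
    return a ^ b  # crossing the travellers between switches gives the inverse

print(" A | B | Z")
for a in (0, 1):
    for b in (0, 1):
        print(f" {a} | {b} | {stairway_light(a, b)}")
```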
H: DC analysis of common-drain NMOS amplifier The DC analysis is regarding the amplifier calculations, but that is not relevant to the topic. The equivalent DC circuit of the amplifier is: The known values are: \$R_{g1}=300\text{ k}\Omega\$, \$R_{g2}=200\text{ k}\Omega\$, \$R_s=100\text{ k}\Omega\$, \$k_n=25\ \mu\text{A/V}^2\$, \$\lambda=0.02\text{ V}^{-1}\$, \$V_{tn}=1\text{ V}\$, \$V_{dd}=10\text{ V}\$. Now the problem is to find the bias point (drain current \$I_D\$, voltage \$V_{GS}\$ and voltage \$V_{DS}\$). First, I calculated the gate voltage as: $$V_G=\frac{R_{g2}}{R_{g1}+R_{g2}}V_{dd}=4\text{ V}$$ Then, I assumed that the transistor is operating in saturation mode, and set up these equations: $$I_D=k_n(V_{GS}-V_{tn})^2(1+\lambda V_{DS})$$ $$V_G=V_{GS}+R_s I_D$$ $$V_{dd}-V_{DS}-R_s I_D=0$$ The problem is, I cannot solve those equations, as there always seems to be one element missing. Any ideas on how to solve this? AI: For quick hand analysis, I would personally not include the impact of channel-length modulation. Knowing that $$ V_{GS} = V_G - V_S\;\;\;\&\;\;\; V_{DS} = V_{dd} - V_S $$ and that the drain current equals $$ I_D=k_n(V_{GS}-V_{tn})^2(1+\lambda V_{DS}) $$ and, since the gate current of Q1 is zero, the drain current is also $$ I_D = \dfrac{V_S}{R_S} $$ put it all together as $$ \dfrac{V_S}{R_S} = k_n(V_G - V_S-V_{tn})^2(1+\lambda (V_{dd} - V_S)) $$ and solve for \$V_S\$.
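The final equation has no tidy closed form once λ is kept, but it is easy to solve numerically; a sketch using the question's values and plain bisection (no external libraries):

```python
def residual(vs):
    """Id from the source resistor minus Id from the square law; zero at bias."""
    kn, vtn, lam = 25e-6, 1.0, 0.02
    vg, vdd, rs = 4.0, 10.0, 100e3
    return vs / rs - kn * (vg - vs - vtn) ** 2 * (1 + lam * (vdd - vs))

# Root is bracketed between Vs = 0 (square law dominates) and Vs = 3 V (Id = 0)
lo, hi = 0.0, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
vs = (lo + hi) / 2

print(f"Vs  = {vs:.3f} V")            # about 2.14 V
print(f"Id  = {vs / 100e3 * 1e6:.1f} uA")
print(f"Vgs = {4.0 - vs:.3f} V")
print(f"Vds = {10.0 - vs:.3f} V")
```

Dropping λ entirely, as the answer suggests for hand analysis, gives Vs ≈ 2.09 V, so channel-length modulation shifts the bias point only slightly.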
H: How to calculate Fundamental Input Power Factor I am really confused by the concept of fundamental input power factor. What was taught to me in class was the below formula: IPF (Input Power Factor) = IDF (Input Displacement Factor) x CDF (Current Distortion Factor). Input Displacement Factor was defined by my instructor as the cosine of the angle between the phase voltage and line current. So, I tried to apply these two things to this problem. I did the same steps as given in the solution and got Vo = 531.19V and cosu = 0.967. [Here u is the overlap angle] Now I tried applying the formula taught in the class and got the wrong answer. Steps are shown below. So, the source line current (iR) leads the line voltage (vRY) by (0.5u+30). So, it will lead the phase voltage by (0.5u+60). Please help me understand this problem. AI: The fundamental or displacement power factor is the real power / (fundamental RMS voltage X fundamental RMS current). It would also be the cos of the angle between the fundamental voltage and current waveforms if you can figure out the angle. The total power factor is the real power / (total RMS voltage X total RMS current). The total RMS current and voltage values would include the harmonics. There is a relationship between the distortion factor, fundamental power factor and total power factor. That depends on the definition of distortion factor. I believe there may be more than one definition. The total power factor is often called the true power factor, but I believe that is misleading. The displacement power factor is also related to the ratio of the DC output voltage to the peak input voltage, taking into account the voltage drop across the diodes and the input line reactors.
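One common convention writes the distortion factor as the ratio of fundamental RMS current to total RMS current; under that definition (my assumption, since the answer notes more than one exists) the relationship looks like this:

```python
import math

def total_power_factor(displacement_pf, thd_current):
    """Total PF = displacement PF x distortion factor.

    distortion factor = I1_rms / Itotal_rms = 1 / sqrt(1 + THD^2),
    where THD is harmonic RMS current over fundamental RMS current.
    """
    return displacement_pf / math.sqrt(1 + thd_current ** 2)

# Assumed example values: displacement PF of 0.95 and 30% current THD
pf = total_power_factor(0.95, 0.30)
print(f"total PF = {pf:.3f}")
```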
H: LED dimming and modulating with two PWM signals I have a need for an LED, driven by a constant current source, that can be smoothly dimmed and also semi-rapidly (say 2.5kHz) modulated for later bandpass filtering. I've designed a possible circuit, given below. The constant current source and PWM-dimming were already familiar to me but I seem to have found the switching solution through beginner's luck. The circuit seems to work well in Spice and on a breadboard. Is there a compelling reason to edit or reject this circuit for the given purpose? Also, could someone help me understand how the switching via Q2 works? Thanks AI: Your circuit works because Q2 is injecting a current that is uncontrollable by means of U2 (your error amplifier). When Q2 switches on, the emitter voltage of Q1 rises (limiting its drive current). As the feedback voltage across R4 rises due to Q2 being switched on, the differential error signal as seen by U2 becomes negative. U2 drives its output low trying to lower the voltage across R4. But, since Q1 can't sink current out of that node, U2 loses linear regulation and its output sits railed at ground. When Q2 switches off, U2 comes back into regulation. There isn't much benefit to this topology, as U2 has to recover every switching cycle. You could equivalently just chop the input reference level going to the positive input of U2. Or, you could also bypass D1 with a PNP transistor.
H: Current limiter for small 12V DC motor I have a situation where a small DC motor is moving the flap from one side to the other. The flap can be in two positions but without limit switch. Change of position lasts about 3 seconds. The problem occurs when the motor reaches its position, but still has supply. After some time it can easily break gears. So I want to limit the current for the motor in at least one direction. I have been thinking about this simple current limiter, but is it a problem when it gets reversed supply? Is it ok to solve it with one diode which will conduct most of the current in that situation? AI: Sorry it is now past 1 AM here and I am getting tired. I just wanted to give you a heads-up what the idea was. The bridge rectifier makes sure that the current through the circuit always goes the same way, no matter which way the motor turns. simulate this circuit – Schematic created using CircuitLab
H: Buck converter without load - is it dangerous in long term? As a follow-up to my previous questions (Q1 - circuit diagram is here, Q2): I disassembled the LED bulbs, cleaned and resoldered the components on the DC-DC converter boards, replaced the caps, and the chandelier assembly seems to work well. Background: Originally the LED bulb is an assembly comprising both a DC-DC converter and LEDs, and (as I now clearly see) is not serviceable. If any part explodes within the bulb, the manufacturer does not expect the bulb's chassis to explode (however, while the plastic it is made of is not flammable, gases have a hard time getting out of it). Now that I have separated the converter and the LEDs, I must ensure that neither of them fails with fire, smoke or dirt. The issue: when I remove the bulb (which now contains the LEDs only), the converter remains without a load. I was lucky to notice that in this case the voltage at its output goes to approx. 85 V, and the output capacitor, rated for 50 V, heated up and was about to explode. I have two bulbs of the same model with different converters: BP2832A-based (board revision 1.0, English, Chinese) and DU8671-based (board revision 1.1, Chinese). While the BP2832A datasheet says nothing about the nature of its output, the DU8671 has a much better datasheet, which says that the output of the circuit based on it should be 40 Vdc ~ 80 Vdc on page 5, above the circuit diagram (more or less the same reading I get without load). I suspect (maybe wrongly) that the BP2832A should have the same range, since I also measure its output to be about 85 V without load. Question: It seems that a buck converter is not designed to work without a load, right? And a 30 kOhm resistor at the output does not make a difference. I suspect that the converter tries to reach the nominal current set by the Rcs resistor within the defined range (40-80 V), and if it does not reach this current, it keeps clocking and stays at the upper limit of the voltage range. Is it OK to run the converter in this mode?
Let's say I will up-rate the output cap from 50 V to 100 V so that it withstands the maximum voltage at the output without heating and exploding, but in general, as in my case: if the bulb containing the LEDs is removed from the socket, can this no-load mode be harmful in the long term for the buck converter chip, the flyback diode and the choke? How much current is the device expected to consume in this mode? What are the pitfalls from your experience? Question 2: How is this 40 Vdc ~ 80 Vdc voltage range in the DU8671 calculated for the circuit on page 5? Is it based on the Tleb and Tdelay timings? Or Toff/Ton? P.S. The bulb's circuit, based on the DU8671, has two parallel 3.3 Ohm resistors on its CS pin, and my current measurements (120 mA) match the result of the formula on page 5. However, another bulb based on the BP2832A has the same output current (measured) with almost the same setting, yet its formula is missing the 2 in the denominator! Found the answer: the BP2832A has two formulas, one for Ipk and another for Iled, which is Ipk/2... Update: here are the results of the project. Board from the bottom, power routing. Board from the top. And within the chandelier assembly. The bulbs (with LEDs only) are still heating, but I would say they are ~80 C. The central hub heats a little, but can still be touched by hand. The design still needs to be tested for durability though. AI: Usually a DC-DC converter for an LED string will be a current regulator, not a voltage regulator. A current-regulating converter will use feedback to adjust the switching duty cycle up or down until the load current matches a setpoint. If the load (LED) is removed then the converter will measure no load current and just keep increasing the output voltage. Running a current regulator without a load typically won't damage it, but the voltage will go up until the over-voltage threshold is reached. You should be OK as long as you rate the output capacitors above the over-voltage setpoint.
Also, the over-voltage setpoint is usually adjustable on most converter chips by changing a resistor, so you could try reducing the setpoint rather than increasing the capacitor ratings.
H: Passive RC Lowpass Filter Effects on DC Connecting passive low-pass filters in series and with a load reduces the DC voltage. Maybe it is very simple, but would you explain a little bit to me about the low-pass filter's effects on DC? Filters don't reduce DC without a load, by the way. Source 12VDC (with little ripple). Load 20 Ohm. Filter section (RC lowpass passive). If I take the output of the first filter, 500 Ohm 1mF (cutoff around 0.3Hz)---Out ~7VDC--- If I take the output of the second filter, the current draw also reduces a lot. 500 Ohm 1mF ---Out ~0.20VDC--- etc. Thank you. AI: The load voltage will drop by the output_current x filter_resistor. It's just Ohm's law. So in your case, when using one filter stage, it will drop by 500mV for every 1mA of load current. And when using two filter stages, it will drop by 1V for every 1mA of load current.
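At DC this is just a resistor chain into the load, which can be checked in a few lines; a sketch with the question's values, which roughly reproduces the reported readings (~7 V and ~0.20 V) when both stages are connected:

```python
def node_voltages(v_in, series_resistors, r_load):
    """DC voltage after each series filter resistor (caps are open at DC)."""
    r_remaining = sum(series_resistors) + r_load
    voltages, v = [], v_in
    for r in series_resistors:
        r_remaining -= r
        v = v * r_remaining / (r_remaining + r)  # divider at this node
        voltages.append(v)
    return voltages

v1, v2 = node_voltages(12.0, [500.0, 500.0], 20.0)
print(f"after first stage : {v1:.2f} V")   # ~6.1 V
print(f"after second stage: {v2:.2f} V")   # ~0.24 V
```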
H: Power over Ethernet: AC? All commercial POE products I have seen feature DC current to power endpoint devices. What are the reason(s) why AC is not used for POE products? AI: AC is only 70.7% as efficient as DC at transferring power at a given peak voltage. That is, to provide the same power as 48V DC, you would need 48/0.707 ≈ 68 V peak (about 136 V peak-to-peak). Sure, you could increase the voltage, but the PoE voltage is specified at 48V to avoid electrical shock, as dry human skin begins to break down and conduct at about 48V. This is why the plain old telephone system (POTS) over copper wire was 48V as well. Another reason is that you don't want AC to couple into your receiver and negatively impact the ability to receive data. Usually DC blocking caps are used to block the DC before the transformer.
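The RMS arithmetic behind that figure, as a quick check (for a sine wave, Vrms = Vpeak/√2, so delivering the same power as 48 V DC into a resistive load needs a ~68 V peak sine):

```python
import math

V_DC = 48.0
v_peak = V_DC * math.sqrt(2)      # sine peak with the same RMS as 48 V DC
print(f"peak:         {v_peak:.1f} V")      # ~67.9 V
print(f"peak-to-peak: {2 * v_peak:.1f} V")  # ~135.8 V
```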
H: How can one battery provide multiple voltages A battery such as this is able to provide multiple different voltages based on user input. Given that the textbook battery provides a fixed voltage, how is this achieved? Are there multiple cells which are connected up together differently depending on the requested voltage or does it work in a different way? AI: The power supply can be configured to furnish different voltages by changing the circuitry inside it under software control. There are several different ways to control the voltage output of a "programmable" power supply like this; the engineering Stack Exchange guys can describe them for you. Programmable supplies can be run off either wall power or internal batteries. In the dim, dark and distant days of electrical engineering, batteries with two or three different voltage outputs were commonly used in things like portable radios. In those batteries, the different voltages were achieved by packaging inside a single container different batteries, each with a different number of cells in series, and running their outputs into a special plug connector with a different voltage available on each pin.
H: ULN2002A Not turning on? I ordered a few ULN2002As. I have it wired as follows: Pin 1 -> GPIO of Arduino Pro Mini 3.3V Pin 16 -> 300Ω resistor in series with an LED Pin 8 -> GND Pin 9 (COM) -> 5V (which is also RAW for the Arduino). Pins 1-4 were wired similarly with LEDs but I've removed 2-4 because it wasn't working and testing with 1 was simpler. When I apply 3.3V from the GPIO to Pin 1, nothing happens. When I disconnect GPIO and just apply 3.3V with a jumper, nothing happens. If I apply 5V, nothing happens. The output just swings around 300mV. Am I totally misunderstanding how this chip works? I thought applying at least Von (2V according to the datasheet) at Pin 1 should make Pin 16 turn on. AI: If you look at the datasheet, on the first page, under the heading of description, you’ll see that the ULN2002A is designed for 14-15V PMOS Logic. Looking at the functional block diagram on page 2, you’ll see that there is a 7 volt zener diode in series with the base of the first transistor. There’s no way that a 3.3V signal will bias that transistor. They do offer a 5V CMOS logic version (ULN2003A), that will probably work for you, in your intended application. Your confusion about ‘Von’ being 2V is because you were looking at the ‘test condition’ voltage and not the actual ‘parameter’, which is 13V. Look at the datasheet again. If you’re still confused, I can try to explain it in more detail. Hope that helped!
H: Differential op amp to measure negative current flow through shunt (Rails set to 0 and 5v) Note* I have done some searching and though there are related topics to this one, this one is unique and I think it warrants its own question, especially as asking additional questions on another thread is prohibited. Measure Battery Current via Shunt This is sort of similar to my problem but the real question is different. I want to measure current and change it into an analog 0-5 V signal. I already have a shunt in place. I think it is 0.5 mohm. The current will range from -50 amps to +200 amps (or even +400 amps is possible in the future). I am only interested in the reverse current. I don't care at all about the positive current. 0 amps and +200 amps can all output zero. I already have an op-amp-based circuit for other functions related to this project. All op amps on this circuit have rails set to 0 and 5v. If I was interested in the positive voltage only, I would think I could just put each side of the shunt to the inputs of a differential op amp. So assuming that would work, would it work in the reverse direction? Well of course not, the difference between the two of them is negative, and will rail at 0v. But, if I switch s+ and s- as being input into the opposite inputs of the op amp, then their difference (v2-v1) is positive (-2mV - -4mV). So in my mind I think that should work, but the real question is, both of the inputs are below the negative rail. I haven't worked with op amps with input signals beyond the rails. Is this why my prototype didn't work, or was there a bug that I didn't find? I also have some understanding that I will be measuring very small voltage drops across this thing. If I want to resolve as low as 5 amps on a 0.5 mohm shunt, that should be 5 A x 0.5 mohm = 2.5 mV. The input offset of the op amp is 200uV though, so I am assuming I should see an equivalent of a 2.3 or 2.7 mV difference going into the op amp input; am I mistaken?
If I set the gain to, say, 200, then for about 50 amps in reverse I should be seeing about 5v as the output of the op amp. 10 amps would be about 1v. For my application it doesn't really matter if the input offset is skewing it a bit. I just need some sort of understanding of the reverse current amplitude. I realize this is a bit of a book; my apologies. Update1* I have just built a differential circuit using the same op amp. I gave the rails 0 and 5v, and set the gain to 10. I then varied each of the inputs randomly between 0 and -1.4v, and took about ten measurements of the inputs and outputs, and I seem to be getting the desired results of G(V2-V1). note* I set r1=r2, and r3=r4 so that the differential op amp equation simplifies down to r2/r1(v2-v1), but anyways. It seems to be working for the voltages I tested, but the input voltages in the tests had a fairly large amplitude compared to what I will be getting off of the shunt. Any ideas? Update2* So seeing that I can get the correct output for inputs beyond the rails, that means that half of this question is answered. The question remaining is, will this work for inputs as low as -2mv? I am dubious about it. Here is one of the reasons why. As I was adjusting my negative voltages that were being fed into the op amp (I was doing this with potentiometers) I would set one to, let's say, -0.5v, and go to adjust the other one. After adjusting the other one I could come back and check the originally -0.5v node only to find that it had shifted to -0.65v. I am getting some instability. I would normally combat this by buffering both of the inputs with a voltage follower before they made it to the subtractor + and - inputs. That, however, is impossible in this case as the voltage follower would simply rail down to 0 and the negative voltages would be lost.
This begs the question, can a simple inverting op amp with the gain of -1 be used as an effective negative voltage follower and give the needed stability to the system without losing the negative input voltages? Any ideas? Update3* I will try this and see if it works for me. Inverting buffer with op-amps Why should I care if the signal is off by 3%? I don't care so much about the signal differences being exactly to scale. I just need to be able to see it going up and down. Here is a question. If I answer my question all by myself should I be deleting it or leaving it here for others to learn from? AI: This begs the question, can a simple inverting op amp with the gain of -1 be used as an effective negative voltage follower and give the needed stability to the system without losing the negative input voltages? Any ideas? Yes. simulate this circuit – Schematic created using CircuitLab I also have some understanding that I will be measuring very small voltage drops across this thing. If I want to resolve as low as 5amps on a 0.5mohm shunt, that should be 5a *0.5ohm / 1k = 2.5mV. The input offset of the op amp is 200uV though, so I am assuming I should see an equivalent of 2.3, or 2.7mv difference going into the op amp input, am I mistaken? Yes. Get a better opamp. The MCP6V81 has an input offset voltage of 9uV max.
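The scaling worked out in the question can be sanity-checked in a couple of lines (0.5 mΩ shunt, gain of 200):

```python
R_SHUNT = 0.5e-3  # ohm
GAIN = 200

def amp_output(i_reverse):
    """Ideal differential-amplifier output for a reverse current in amps."""
    return GAIN * i_reverse * R_SHUNT

for i in (5, 10, 50):
    print(f"{i:3d} A reverse -> {amp_output(i):.2f} V")
```

So 50 A full scale lands right at the 5 V rail, and 5 A of current corresponds to 0.5 V at the output, far above any error contributed by a 9 µV input offset.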
H: Do the temperature coefficients of the resistors in the LM399/LM199 "Portable Calibrator" significantly affect the output stability? I'd like to know how voltage reference stability is obtained in products like 6.5-digit multimeters that use the LM399 as a voltage reference in spite of using two gain resistors. Let's assume an ideal LM399 and ideal op-amp and ideal current source (so ignore the 200k and 5k resistors.) It would seem to me that even at 5ppm, the output would be very sensitive to temperature (relative to a real LM399.) So then how can this work? Is it the case that the resistor temperatures do not fluctuate much because they are near the temperature-controlled heater? Or are they measuring the temperature with something like an LM35 (and matched tempcos on the resistors) and calibrating in software? Even at 5ppm/C and only 0.5C change in temperature (leaving out the trimpot), I'm calculating a range of 10.170 to 10.167 (gain ranging from 1.4634 to 1.4629) which would seem horrible for such an instrument. Do resistors affect gain in the way that I think they do? I've looked at schematics for a couple of such multimeters now and they all use at least two or three discrete resistors and usually aim for a stable output voltage around +/-10V or maybe +/-12V, so is the "Portable Calibrator" a reasonable approximation of a real application? AI: Well, if the LM399 has a 0.5 ppm/'C spec over 0~75'C, that's not good enough for a 6.5-digit DMM with 1 ppm accuracy. However the LTZ1000 is 10x better at 0.05 ppm/'C. It is already thermally heated inside with thermal feedback. Normally better stability can be obtained with a double thermal-oven servo over the chip, just as done in better OCXOs. Getting laser-trimmed resistor ratios to 1 ppm is a harder task. Never assume the accuracy is the same as the resolution. Sometimes the extra resolution in the short term is what is needed. Here is a 7.5-digit meter with 50 ppm accuracy (only!)
Here's a test that compared references: Keysight 34498A ... innards of a 6.5-digit DMM. For Keysight's Truevolt 34465A DMM, the 1-year specification applies for temperatures within ±2 °C of the calibration temperature, and with self-calibration it achieves an accuracy in the 10 V range within 24 h of 10 ppm of reading and 4 ppm of range. The only other Keysight DMM with ACAL is the high-end 3458A, with its high-end price: a bestselling 8.5-digit DMM.
H: Mesh Analysis on CCT with Voltage and Current Source I'm not sure where to apply the supermesh on this problem; would it be over the top mesh and the one containing the current source? I have only tried doing the problem without a supermesh but am not able to get an answer. Without using a supermesh I used KVL in each mesh and got an answer, however this is clearly wrong since I did not take the 6mA current source into account. The trouble I am having is identifying whether or not I need to just set the mesh current in the bottom left mesh to 6mA or to create a supermesh. Assuming that the mesh with the current source has a mesh current of 6mA resulted in an answer of 3V for V0 AI: Here's the schematic, redrawn slightly to my own taste: simulate this circuit – Schematic created using CircuitLab Starting at the lower right-hand corner of each loop: $$\begin{align*} 0\:\text{V}+12\:\text{V}-R_3\left(I_1-I_2\right)-V_{I_1}-R_4\left(I_1-I_3\right) &= 0\:\text{V}\tag{$I_1$}\\\\ 0\:\text{V}+V_{I_1}-R_3\left(I_2-I_1\right)-R_1\: I_2 -R_5\left(I_2-I_3\right) &= 0\:\text{V}\tag{$I_2$}\\\\ 0\:\text{V}-R_4\left(I_3-I_1\right)-R_5\left(I_3-I_2\right)-R_2\: I_3 &= 0\:\text{V}\tag{$I_3$}\\\\ I_0=I_1-I_2&=6\:\text{mA}\tag{Known} \end{align*}$$ If you look closely, the variables you need to solve for are \$I_1\$, \$I_2\$, \$I_3\$, and \$V_{I_1}\$. You have four equations and four unknowns. No supermesh stuff, either. Once you solve for those values, just work out the current in \$R_5\$. From that, you can figure out the voltage drop magnitude and also the polarity (from the current direction.) Using Sympy, it's just:

from sympy import symbols, Eq, solve

i1, i2, i3, vi0 = symbols('i1 i2 i3 vi0')
e1 = Eq( 0 + 12 - 4E3*(i1-i2) - vi0 - 2E3*(i1-i3), 0 )
e2 = Eq( 0 + vi0 - 4E3*(i2-i1) - 8E3*i2 - 6E3*(i2-i3), 0 )
e3 = Eq( 0 - 2E3*(i3-i1) - 6E3*(i3-i2) - 8E3*i3, 0 )
e4 = Eq( i1 - i2, 6E-3 )
r = solve( [e1, e2, e3, e4], [i1, i2, i3, vi0] )
r
H: Center tap of transformer for Ethernet interface When you look at Ethernet PHY chips' application circuits, you can see differences in how the transformer's center tap is connected. Some PHYs require it to be connected to DC power, but some don't. The following circuit diagram is for the LAN8720A interface circuit. It should be connected to AVDD directly (Case A in the image). But some other PHY application circuits are different (Case B). Now I have some questions: Why do the application circuits differ depending on the PHY? What happens if I connect the center tap to power for a Case B PHY? Thank you. AI: It depends on the output type of the PHY. If it has open-drain outputs then it requires the center tap to be connected to Vcc. If it has push-pull outputs then the center tap has to float. There are probably silicon-level trade-offs like area vs. power vs. speed vs. EMC vs. manufacturing process etc. that lead to choosing open-drain or push-pull configuration. If you connect the center tap to Vcc for a PHY that does not allow that then in the best case it may just work, or it may only work for a particular speed, or it may not work, or you will blow the chip.
H: common mode choke placement in smps I am designing a high-end SMPS and I have one CMC on the input (AC line) and a second on the DC output. My question is where to place this second CMC. Usually it's best to place filters right at the source of noise. I assume that the source of CM noise is the secondary output of the transformer. Of course I cannot place it right after the transformer, so I am thinking of placing it after the diode + pi filter and Cout. Is the above better, or should I place it at the cables (DC output)? AI: Many devices have 2 external cables: a power supply and a signal. They make 2 good antennas for EMI. The purpose of a CM choke is to prevent wiring (external and internal) from acting as antennas, thus suppressing RF emissions. Therefore the logical place for CM chokes is where the wires leave your device or PCB. The entire common mode current must pass (actually: not pass) through the choke, so make sure that there is no parasitic bypass path, like a capacitance or 2 magnetically coupled loops. This risk is greater if you put the choke earlier in your circuit. Our EMC experts used self-made current loops with BNC connectors, connected to an oscilloscope or spectrum analyzer, to locate the sources of EM emissions. Then you know which loops to make smaller, or which common mode currents to suppress. A decoupling capacitor is for making RF current loops smaller. The problem with CM current in external cables - antennas - is that it flows into a very large loop, so suppressing it with chokes is the only option. The easy way out is to put chokes on the actual cables, but then the user must not substitute cables. If you see a choke near or inside the mains plug then that is for suppressing conducted emission, to satisfy legal rules about not polluting the mains. Normally radiated emission and conducted emission go hand in hand, think antenna and ground.
If 'you' only stop the CM current at the end of a cable then I think 'you' are just making a dipole antenna and fooling yourself.