H: Christmas LED lights connections
I have several Christmas lights with different AC adapters.
First case
In the label of one of them, I read:
Primary: 220-240 V ~ 50/60 Hz
Secondary: 31 V DC 3.6 W
In the label of the cable connecting this AC adapter to the LED array:
Lamp: 3.2 V
0.064 W
80 LED
I cannot tell whether this refers to a single LED, but the total number of LEDs seems to be 80.
How are they connected, then?
In the case of a series connection, each of them would have a 0.3875 V voltage drop, which is probably too small for an LED. If they were in parallel, 31 V seems too much for each of them, and the total available current, 3.6 W / 31 V = 0.116 A, would give 0.00145 A per LED. So, neither of these arrangements seems correct.
Second case
In another LED string, I have a 4.5 V battery power supply, with no labels and therefore no indication about the current and the number of LEDs. Do they have a parallel connection?
AI: The LEDs must be connected in series strings. A string of 8 LEDs would work, with a single small resistor to limit the current. Connect ten such strings in parallel, and you have 80 LEDs.
Alternatively, connect eight strings of ten LEDs, and rely on current limiting in the power supply. This will work if the LEDs in the strings are all matched.
For the 4.5V version, connect all the LEDs in parallel, with a single current limiting resistor. Again, all the LEDs need to be reasonably well matched.
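As a quick sanity check of the "ten parallel strings of eight series LEDs" hypothesis, using only the numbers from the labels above (the 10 x 8 arrangement is this answer's guess, not something printed on the unit):

```python
# Sanity check of the "ten parallel strings of eight series LEDs" hypothesis
# for the 31 V / 3.6 W / 80 LED supply (numbers from the labels above).
supply_v = 31.0
supply_w = 3.6
led_v = 3.2                  # per-lamp voltage from the cable label
n_leds = 80
leds_per_string = 8
n_strings = n_leds // leds_per_string            # 10 strings

resistor_v = supply_v - leds_per_string * led_v  # ~5.4 V left for a series resistor
total_i = supply_w / supply_v                    # ~0.116 A total
i_per_string = total_i / n_strings               # ~11.6 mA: a plausible LED current

print(n_strings, round(resistor_v, 1), round(i_per_string * 1000, 1))
```

The arithmetic works out: 8 x 3.2 V = 25.6 V of LED drop leaves about 5.4 V for the current-limiting resistor, and roughly 11.6 mA per string is a typical indicator-LED current.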
H: How can a resonator do band-pass filtering?
I've read many times some statements about resonators like this (reference):
Resonant circuits are circuits which offer a high impedance or low impedance (for parallel and series
resonance respectively) to the source at a particular frequency of operation. The frequency at which the
resonant circuit has a very high or low impedance is called its resonant frequency. The frequency selectivity property of resonant circuits is exploited in building filter circuits.
1) It's a basic concept that is written everywhere, but I cannot understand in practice how resonators may filter an input signal. Precisely, let's consider a parallel RLC circuit:
It may represent different kinds of resonators. It is simple to find that the input impedance of the parallel RLC circuit representing the resonator is the following (in terms of real and imaginary parts):
If we look at the graph, I understand the initial statement: this response is very frequency-selective.
But let's look at the RLC circuit: what does it filter? There is an input voltage signal. If I take the voltage across any one of the R, L, C components, it will be the same as the source. I'd say that this circuit, given an input voltage signal, gives an output filtered current. So, how should I read this filtered current (from a circuit point of view)?
2) What if the parallel RLC circuit is only a model (not a real circuit) of a cavity/waveguide resonator (i.e. a closed metal tube containing my electromagnetic source)? My electromagnetic source (consider a simple antenna) generates an EM field with some frequencies, and the cavity selects only specific frequencies. How may I read the filtered signal?
AI: The circuit you show isn't a useful filter. The current from the voltage source is nicely bandpass filtered, but in the real world that's not terribly useful.
A useful model for an RF filter would be something like this, with a source that has a real source impedance (represented by R1), and a load that also has some real impedance, not to mention that the inductance will come with its own bit of effective impedance. This should have a band-pass characteristic, with the center frequency of the filter determined mostly by the values of L and C and its width determined by L, C, and the resistances.
simulate this circuit – Schematic created using CircuitLab
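To get a feel for how L and C set the centre frequency while the resistance sets the bandwidth, here is a small numeric sketch with made-up component values (none of these come from the question):

```python
import math

# Hypothetical parallel-RLC values (illustrative only) showing how L and C set
# the centre frequency and the effective parallel resistance sets the bandwidth.
L = 1e-6      # 1 uH
C = 100e-12   # 100 pF
R = 5e3       # effective parallel resistance (source || load), ohms

f0 = 1 / (2 * math.pi * math.sqrt(L * C))   # centre (resonant) frequency
Q = R * math.sqrt(C / L)                     # quality factor of a parallel RLC
bw = f0 / Q                                  # -3 dB bandwidth

print(f"f0 = {f0/1e6:.2f} MHz, Q = {Q:.1f}, BW = {bw/1e3:.0f} kHz")
```

Halving R halves Q and doubles the bandwidth without moving f0, which is exactly the "width determined by L, C, and the resistances" point above.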
H: Compare output of logic gates but ignore their difference due to propagation delays
I am triggering a current source for a coil. I made a safety circuit with a bistable multivibrator which forces the trigger sent to the hardware to have some maximum duration. This is duty-cycle protection for the coil, to avoid it reaching high temperatures. So, even if the user leaves the trigger HIGH for too long, the duty-cycle protection box makes sure the hardware does not stay on for that long (I also have a second multivibrator that makes sure the user cannot turn the coil back on before a certain set time, but that's not needed here).
I would like to add a warning light to let the user know that the trigger they sent was ON for too long, and the safety box had to switch it off.
A simple way could be adding an exclusive OR gate that compares the user-trigger and the hardware-trigger, so that the output is HIGH only when the two triggers are different. This HIGH drives an LED which gives me a warning:
Problem: I am guessing the LED will be turned on even during the "propagation delay" interval shown in my first diagram, where the two triggers are different. I know it might be for a split second, but I want to add a D flip flop with a reset button to this circuit so that the LED stays on until the user notices it and pushes the reset button.
How can I only make my circuit "not care" about the difference between the triggers due to the propagation delay?
AI: You only want to give a warning if the input trigger is longer than the output trigger, so...
Use the trailing edge of the output trigger to clock a 'D' F/F whose Data input is connected to the input trigger, and take your warning from the F/F's Q output. Waveforms look like this:-
Top two traces show the input trigger shorter than the output trigger. The high-to-low clock occurs when D is low (circled), so the F/F Q output stays (or goes) low.
Bottom traces show what happens when the input pulse is longer than the output pulse. F/F's Q output (red) goes high when the high D level (circled) is clocked into it.
The 'alarm' output will remain high until the next pulse, when it will either stay high or go low again depending on the relative pulse widths. This may be good enough for your purposes. If not, then use that output to set a latch or clock another F/F which must be manually reset by the user.
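A toy software model of this arrangement (signal names and sample waveforms are illustrative only, not taken from the scope traces) shows the behaviour:

```python
# Toy simulation of the proposed warning circuit: on each falling edge of the
# output (hardware) trigger, a D flip-flop samples the input (user) trigger.
def warning_ff(user, hw):
    """Return the F/F Q output, updated on each falling edge of hw."""
    q = 0
    out = []
    for t in range(1, len(hw)):
        if hw[t-1] == 1 and hw[t] == 0:   # falling edge of the output trigger
            q = user[t]                    # clock the D input (user trigger) in
        out.append(q)
    return out

# Case 1: user pulse shorter than hardware pulse -> no warning
user1 = [1, 1, 0, 0, 0, 0]
hw1   = [1, 1, 1, 1, 0, 0]
# Case 2: user pulse longer than hardware pulse -> warning goes high
user2 = [1, 1, 1, 1, 1, 0]
hw2   = [1, 1, 1, 0, 0, 0]
print(warning_ff(user1, hw1)[-1])  # 0: no warning
print(warning_ff(user2, hw2)[-1])  # 1: warning
```

Because the sample happens only at the trailing edge of the output trigger, the propagation-delay glitch at the leading edge never reaches the warning output.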
H: Diode doesn't work in a [attempt of] power supply simulation with QUCS
As a simulation exercise, I am trying to work out a power supply, and I am currently at the point shown in the figure below.
There is an error message which I don't understand, but I realized it happens only when the diode is present in the circuit.
In fact, to understand this, I made two other attempts: a separate circuit on the right with a diode, which works fine, and a short circuit around the diode itself, shown in the next figure,
and it still works fine.
So the question is:
what is the problem with adding that diode to the circuit?
AI: There is a warning message relating to the fact that you have the secondary of Tr1 (the transformer) floating. It would be better to ground one side for the purposes of simulation.
I don't see an error message.
H: Can you get an electric shock from a UPS output? If yes then can it be fatal?
I have an 800 watt UPS, which means its output is rated at about 800 watts.
I am installing a current leakage breaker on my main supply so that if I accidentally touch anything live, the breaker shuts down the main supply.
I have also installed a UPS.
I want to know if the UPS output could be fatal to anyone. If "yes," then should I also include it in my leakage breaker circuit?
AI: Yes, an 800 Watt UPS can potentially be lethal. It is very possible that the UPS outlets already contain current leakage protection (sometimes known as GFI, GFCI or RCD). So it may not be necessary for you to add it. Also, if you do want to add it, you could possibly do it using a GFCI extension cord or make up a separate outlet box with a GFCI so you don't have to take apart the UPS (and maybe void your warranty).
I know you may not be in North America, but I am sure similar products exist in other areas.
I found the image on Leviton's website. I can't imagine they would object to me sharing it. The actual URL was from Shopify.
H: Can I connect 12 V SLA batteries of different capacities (100 Ah and 180 Ah) in parallel?
I have a 250 W solar panel connected to a 12 V 100 Ah (SLA) battery through a charge controller (12/24 V). A 1100 W inverter is also connected to the battery.
Now I want to connect a 12 V 180 Ah battery in parallel with it. Is this possible? What effect will it have on the batteries?
AI: It depends a lot on what the charger is doing. The simplest ones don't attempt to fully charge the battery and instead provide a constant voltage of 13.8 (or sometimes 13.6) volts, which will mostly but not fully recharge a lead acid battery of most any type. If your "charge controller" is that type, then putting the batteries in parallel should work. If the charge controller is a fancier one that actually monitors the rate at which the battery is charging and adjusts the charge current as it goes, then putting two dissimilar batteries in parallel could end in tears, because that kind of charger works by raising the battery to a substantially higher voltage while it's still drawing current. If one battery is fully charged but the other one is still sucking up all the current the charger can offer, then the fully charged battery will overcharge. That can be bad for the battery, but also for you if it swells up like a beachball and then pops just as you walk in to inspect it.
H: Is it dangerous/harmful to increase the voltage supplied to a 12v electromagnet?
I have a 12V-rated electromagnet which I bought online. As it isn't strong enough for my needs, I tried applying a higher voltage (~20V) which did make it stronger.
It does get a bit warmer after a while, but I wanted to know whether, apart from the additional heat, such an increase in voltage could harm the magnet. I know that when building my own electromagnet, I can increase the voltage as far as I want as long as the heat doesn't become a problem.
AI: Is it strong enough already?
If it's being used without the armature to pick up nails, then it's as strong as it's going to get. If it's being used for its intended purpose of holding a door closed, then it could potentially be made significantly stronger without increasing the drive voltage.
The magnetic field the lock generates depends on the current, and the magnetic circuit through the return plate. If the plate surface is rough, or not flat, or being held at an angle by the door attachment, then small airgaps will reduce the magnetic field and the holding force. You do not need much of an airgap to wreck the holding power, 0.1mm (thickness of a sheet of printer paper) could reduce it to 50% or worse.
If the pole faces of the magnet and return plate show machining marks or are bowed, then carefully lap the two facing surfaces until they are smooth and meet properly.
If the return plate is screwed tightly to the door, slacken the attachment screw slightly so that it can float, and rotate slightly, to mate fully with the magnet. You'll notice the return plate has only a single hole in the centre, for this very reason.
It may be that attention to these mechanical details will improve your holding force sufficiently.
Can you make it stronger with increased voltage?
If the magnetic circuit cannot be improved, then increased voltage, and therefore increased current, will improve the magnetic field strength, up to the point where the material saturates. There will be no significant improvement above that point. If you are going to increase the voltage, then plot the increase in holding force versus the increase in voltage. There is no point using it at a voltage higher than necessary.
What can go wrong with increased voltage?
At the low voltages involved, direct breakdown of the insulation is unlikely. Wire insulation needs to be strong enough to withstand handling, which means it's likely to be good for hundreds of volts. However, beware the kickback when you disconnect the magnet. An increased drive current will mean higher stored energy, which means a higher kickback voltage. This should not be a problem when used with a proper kickback suppression diode.
Heat is the main problem. Current is not, of itself, a problem.
Increase the drive voltage from 12 to 20 V and you've increased the power, and so the temperature rise, by a factor of about 2.8.
High temperature will at best reduce the life of the insulation, and at worst start a fire.
If you use it for a short while, so that its temperature does not exceed its rated temperature, and allow it to cool down again before using it again, then you can safely tolerate running it at a higher power.
How to determine its rated temperature? How to determine how long you can run it? Measure the winding resistance. Run the magnet at its rated voltage for long enough that it reaches thermal equilibrium. Measure the winding resistance again. Copper increases in resistance by about 0.4% per degree C. This gives you the temperature rise. Now do a timed run at the higher voltage, measuring the winding resistance every few minutes. Stop when the measurement shows you it has reached its previous temperature.
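A small helper for the resistance-based temperature measurement described above. The ~0.4%/°C copper coefficient is the figure used in this answer; the example resistances and the 12 V/20 V scaling are illustrative:

```python
# Estimate winding temperature rise from its resistance change, using the
# ~0.4%/degC temperature coefficient of copper (example resistances invented).
ALPHA_CU = 0.004          # fractional resistance change per degC, approximate

def temp_rise(r_cold, r_hot):
    """Temperature rise of a copper winding inferred from resistance change."""
    return (r_hot / r_cold - 1) / ALPHA_CU

# Example: coil measures 24.0 ohm cold, 27.6 ohm after running at rated voltage.
print(f"{temp_rise(24.0, 27.6):.1f} degC rise")   # 15% increase -> 37.5 degC

# Power scaling with voltage on a resistive winding: P is proportional to V^2.
ratio = (20 / 12) ** 2
print(f"power x{ratio:.2f}")                      # ~2.78, the "factor of 2.8"
```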
H: Displacement strategy for (Computer) Cache
Here, under displacement strategies, the following is written (Unfortunately, only in German available):
Laszlo Belady's method of displacing the memory area that will not be
accessed for the longest time is optimal. However, it is applicable
only when the complete program flow is known in advance (i.e., it is a
so-called offline procedure, in contrast to FIFO and LRU, which are
online procedures). However, the program flow is almost never known in
advance; therefore, the optimal procedure cannot be used in practice.
However, the optimal algorithm can serve as a comparison for other
procedures.
I wonder why the program flow has to be known in advance for this to be an optimal method. Sure, I understand that sometimes a memory area that has not been used for a while gets evicted but is needed immediately afterwards, which is not optimal. But that behaviour still seems the most reasonable and hence "optimal"? It's actually not about the wording; I really wonder why this method isn't as good as it seems.
AI: Laszlo Belady's method of displacing the memory area that will not be accessed for the longest time is optimal
For you to know when something will be accessed in the future, you need to know when your program accesses which part of memory.
The author of the Wikipedia article conflates this with knowledge of the program flow, which is not the same thing (memory access locations and patterns are one of the things that arise from the program flow).
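For reference, here is a minimal sketch of Belady's algorithm. It makes the "offline" requirement explicit: the eviction decision reads the future of the access sequence, which no online policy (FIFO, LRU) can do:

```python
# Minimal sketch of Belady's optimal (offline) replacement policy: evict the
# cached item whose next use lies farthest in the future (or never occurs).
# It needs the complete future access sequence, hence offline-only.
def belady(accesses, cache_size):
    cache, misses = set(), 0
    for i, page in enumerate(accesses):
        if page in cache:
            continue
        misses += 1
        if len(cache) >= cache_size:
            future = accesses[i + 1:]
            # The victim is the page used farthest ahead, or never again.
            victim = max(cache,
                         key=lambda p: future.index(p) if p in future else len(future))
            cache.remove(victim)
        cache.add(page)
    return misses

print(belady([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 7 misses
```

Running FIFO or LRU on the same sequence gives more misses, which is why Belady's algorithm is used as the comparison baseline mentioned in the quote.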
H: Understanding L-Pad attenuator
I'm not understanding the concept of L-Pad attenuator other than seeing it as another ordinary voltage divider to attenuate a voltage. Electronics-Tutorials: L-pad Attenuator gives the following description:
L-pad attenuators are commonly used in audio applications to reduce a
larger or more powerful signal while matching the impedance between
the source and load to provide maximum power transfer. However, if the
impedance of the source is different to the impedance of the load, the
L-pad attenuator can be made to match either impedance but not both.
I understand maximum power transfer occurs when input impedance matches output impedance (or \$ Z_{in} = Z_{out} \$), and that \$ \text{dB} = 20\log({V_{out} \over V_{in}}) \$. In terms of impedance matching, looking into the attenuator from the left, \$ Z_{in} = (Z_L || R_2)+R_1 \$, and from the right, \$ Z_{out} = (Z_s + R_1) || R_2 \$. Hence, for the two impedances to match, and given a voltage attenuation ratio, I'll have exactly the following two equations to solve for the two unknown values \$ R_1 \$ and \$ R_2 \$:
Matching input and output impedance: \$ (Z_L || R_2)+R_1 = (Z_s + R_1) || R_2 \$.
Voltage attenuation ratio: \$ V_{out} = V_{in} \big({{R_2 || Z_L} \over {({R_2 || Z_L})+Z_S+R_1}}\big) \$
Aren't these two equations all I need to match input and output impedance while achieving a specific attenuated voltage? Why are there these logarithmic equations with a 'K' value solving for \$ R_1 \$ and \$ R_2 \$? And what does it mean when they say
L-pad attenuator can be made to match either impedance but not both
AI: L-pad attenuator can be made to match either impedance but not both
What it means is that a simple L-pad made from two resistors can do one of two things: -
Provide an attenuation
Match two impedances
But you cannot simultaneously match two impedances AND have independent control of the desired attenuation level. The reason is because there are only two resistors and there are not enough variables (R1 and R2) to get both a defined attenuation AND a defined impedance matching circuit.
To get both, you need at least three resistors. This webpage gives an example and shows the two formulas that define R1 and R2: -
As you should be able to see, if you define \$R_{IN}\$ and \$R_L\$ from the outset, you get unique values for R1 and R2. Those unique values are fixed by \$R_{IN}\$ and \$R_L\$ and produce an attenuation that is fixed. In other words, it is the value of \$R_{IN}\$ and \$R_L\$ that produce R1 and R2 and that, in turn, means no control over what the attenuation is.
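For the two-resistor case, the standard minimum-loss L-pad formulas (a well-known textbook result, not taken from the linked page) show concretely how the match alone fixes R1 and R2, and therefore the attenuation:

```python
import math

# Minimum-loss L-pad matching a larger impedance r_in down to a smaller r_l.
# With only two resistors, choosing the match fixes the attenuation too,
# which is the "either impedance but not both degrees of freedom" point.
def l_pad(r_in, r_l):
    """Series R1 and shunt R2 that match r_in (> r_l) to r_l."""
    r1 = math.sqrt(r_in * (r_in - r_l))
    r2 = r_l * math.sqrt(r_in / (r_in - r_l))
    return r1, r2

r1, r2 = l_pad(75.0, 50.0)
print(f"R1 = {r1:.1f} ohm, R2 = {r2:.1f} ohm")   # the classic 75-to-50 ohm pad

# Verify the match looking into the input: R1 + (R2 || R_L) should equal R_in.
z_in = r1 + (r2 * 50.0) / (r2 + 50.0)
print(f"Z_in = {z_in:.1f} ohm")
```

Note there is no attenuation parameter anywhere: once 75 Ω and 50 Ω are chosen, R1 and R2 (and the loss) are fully determined, which is why a third resistor is needed for independent control.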
H: Series resistance on crystal
I have an STM32 chip with a 12MHz crystal connected to it (as seen in the figure below.) I've been advised to place a 220 ohms resistor 'in the OSC_OUT connection'.
I know it has to do with the termination resistance but I'd then expect it to be placed between the crystal output and the capacitor to the ground rather than in the connection between the crystal and the STM.
Could you explain to me why 220 ohms is a good value? I've found suggestions of, for instance, 33 ohms elsewhere (also in the book by Ott).
Where should it be located exactly and why?
And maybe a very dumb question, but what's the difference between the left and right output of the crystal?
AI: I'm assuming you mean placing the series resistor as shown below: -
Could you explain to me why 220 ohms is a good value? I've found
suggestions of, for instance, 33 ohms elsewhere (also in the book by
Ott). Where should it be located exactly and why?
There will be resistance in the internal output from OSC_OUT and that may be enough to initiate oscillations. But, if you don't have a series resistor, there is no guarantee that there will be enough phase shift around the loop to cause the circuit to oscillate at all let alone oscillate at the right frequency.
This type of Pierce oscillator needs a 3rd-order filtering effect to take place, and the resistor (the added one) and C26 in your original diagram add an extra phase shift of 10 to 20 degrees; enough to start oscillation.
Using a bigger resistor is usually not a problem and is often done to reduce the power dissipated inside the crystal so, you choose the series resistor based on the crystal you are using and the supply voltage of the inverting gate inside the oscillator part of your chip.
If you want a full tear-down on the components of a crystal oscillator, this web-page may be of interest to you. It takes you through the reasons why the crystal oscillates and explains why it may not oscillate at exactly the right frequency you expected it to such as in this diagram: -
Varying the loading capacitors shows how the crystal oscillation frequency may be retuned: -
And, if you change the drive resistance (the added series resistance) you also get slightly different oscillation frequencies: -
The web page also shows the effect of the gate and its propagation delays.
what's the difference between the left and right output of the
crystal?
A crystal is symmetrical and either terminal (left and right) can be swapped. Regards the chip itself OSC_IN is an input and OSC_OUT is an output.
You also need to check whether you need a 1 MΩ resistor between OSC_IN and OSC_OUT - some chips have it built in internally but many don't. It is used to bias the chip's inverting gate into the linear portion of its signal range. It's basically like applying a feedback resistor across an op-amp, i.e. it's negative feedback that sets the DC conditions correctly.
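To see numerically how the load capacitors retune the frequency, here is the standard crystal-pulling approximation with guessed motional parameters (these values are typical-looking illustrations, not from any particular crystal's datasheet):

```python
# Frequency pulling of a crystal by its load capacitance, using the standard
# approximation f_L = f_s * (1 + C1 / (2 * (C0 + CL))).
# The motional parameters below are illustrative guesses, not datasheet values.
fs = 12e6        # series-resonant frequency, Hz
C1 = 20e-15      # motional capacitance, ~20 fF
C0 = 5e-12       # shunt (holder) capacitance, ~5 pF

def f_load(cl):
    """Oscillation frequency with load capacitance cl across the crystal."""
    return fs * (1 + C1 / (2 * (C0 + cl)))

for cl in (10e-12, 20e-12, 30e-12):
    ppm = (f_load(cl) - fs) / fs * 1e6
    print(f"CL = {cl*1e12:.0f} pF -> +{ppm:.0f} ppm above series resonance")
```

Increasing CL pulls the frequency back down toward series resonance, which is the retuning effect shown on the linked web page.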
H: Electromagnetic coupling on printed circuit board traces
When I applied the following damped signal in the proximity of my product (the magnitude is around 10 kV and the frequency around 400 kHz) through a 4 meter cable, the microcontroller saw a signal that is normally at 3.3 V (HIGH level) go down to a LOW level (this happens very rarely, I would say 1 time in 100).
I think this is a problem of electromagnetic coupling, even if at this frequency I should have a trace length equal to:
$$\text{trace length} = \frac{c}{4f}$$
Where \$c\$ is the speed of light and \$f\$ is the frequency of the signal.
So the trace length would need to be 183 meters, which is clearly not the case; the trace length is only around 10 cm. Nevertheless, when I attach a probe to the affected signal, i.e. I add a 1 meter wire, it clearly works more often!
I would like to know: how can I confirm that this is electromagnetic coupling, and how can I protect my circuit from it (other than reducing the length of the trace)?
I think the problem is present on other signals, but the microcontroller is particularly sensitive to the fluctuation of this signal.
AI: I would like to know: How can I confirm that it is a electromagnetic
coupling and how can I protect my circuit from it?
At 400 kHz and with the short length (10 cm) of "antenna" you used, the coupling is not electromagnetic but just plain ordinary electric coupling (one half of EM coupling). Basically, it's capacitive coupling of the 10 kV damped waveform to the input.
Protection methods: -
rail clamping diodes
series resistors
parallel capacitors
parallel TVS diodes
a buffer circuit
are the obvious ones to try but they might also "ruin" the signal that is on that line, so you need to be a little more open about what that line is used for and what signals are on it.
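As a rough sketch of the series resistor + parallel capacitor option: together they form a first-order low-pass filter on the victim line. The component values below are illustrative only; you must check they don't filter out your wanted signal:

```python
import math

# A series resistor plus a parallel capacitor on the victim line form a
# first-order RC low-pass filter. Values here are illustrative: pick them so
# the cutoff sits well below the ~400 kHz aggressor but above the signal band.
def cutoff_hz(r, c):
    return 1 / (2 * math.pi * r * c)

r = 1e3        # series resistor, 1 kohm
c = 10e-9      # parallel capacitor, 10 nF
fc = cutoff_hz(r, c)
print(f"fc = {fc/1e3:.1f} kHz")

# First-order attenuation at the 400 kHz interference frequency.
f = 400e3
atten_db = 10 * math.log10(1 + (f / fc) ** 2)
print(f"{atten_db:.0f} dB down at 400 kHz")
```

With these example values the 400 kHz coupled transient is knocked down by roughly 28 dB while DC and slow logic levels pass unchanged.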
H: Correct way to wire a 5V relay using BC547
I am trying a relay circuit. I saw one diagram online in Google Images and another one when I actually tested. Here are the two diagrams:
Diagram 1
This one I tried, but it did not work for me, even though it's a common diagram available all over the internet.
Although it did work once for me, I'm not sure whether that was the relay or something else.
Diagram 2
This one works absolutely fine when I actually build the circuit on a breadboard. But I'm not sure if this is the correct way to connect the relay, as my emitter is connected to coil pin 1.
Which one should I refer to? In case I am making a mistake in understanding something, please guide me.
Here is my relay:
5V 7A 5-pin PCB mount
Any suggestions on the right way!
Thank you!
AI: The first diagram is correct. The second one will "sort-of" work with enough base drive because the transistor is in inverse mode. The low beta will mean that 10K is too high for it to work reliably.
If I were to guess what happened with your tests, perhaps you used the incorrect pinout for the "European" BC547, which is just the reverse of the typical US part:
Japanese parts such as the (2S)C8015 have a third standard pinout in TO-92.
H: How does signal aggregation work?
I just want to know how signal aggregation works.
For example, there is an IC which can aggregate more than two interfaces (I2C, SPI, etc.).
Thanks.
AI: Aggregations are processed signals.
An aggregator reads the raw signals and returns interesting summaries, ranging from simple analog sums to sophisticated digital functions.
H: Common Base Amplifier Confusion
In the CB configuration, a signal is applied to the emitter to create an in-phase amplification at the collector. I am confused, however, on how exactly this happens. In the picture below, if the input signal raises the emitter voltage on the positive swing, it would mean that there is a larger voltage across the emitter resistor, which by Ohm's law entails a larger current and thus a larger voltage drop at the collector resistor. For an in-phase amplification to happen, there needs to be less current at the collector, which means I am failing to understand this configuration. Can anyone please help clear the fog?
AI: jm567 - perhaps you have forgotten that the collector current depends on the base-emitter voltage. It is the well-known exponential relation Ic=f(Vbe).
When the base voltage Vb remains fixed and the emitter voltage Ve increases, the voltage difference Vb-Ve=Vbe decreases - and with it the collector current. As a consequence, the collector voltage increases.
Comment 1 (Edit): I like to take the chance for comparing the two possible explanations for controlling the collector current in a common-base stage:
(1) An externally applied RISING signal voltage at the emitter node will REDUCE the voltage Vbe and, therefore, REDUCE the emitter current according to Ie ∝ exp(Vbe/Vt). Because of Ie=Ib+Ic, both currents, Ib and Ic, will also DECREASE following the same exponential expression. This is because there is a nearly constant relationship between Ic and Ib (beta).
(2) An externally applied RISING signal voltage will cause a corresponding (small) current change delta(ib) from the emitter to the base (thereby REDUCING the positive DC base bias current +Ib, acting in the opposite direction) - however, this small base current change delta(ib) is part of a larger current change delta(ie), because delta(ie)=delta(ib)+delta(ic).
I think, it is obvious that delta(ib) is NOT the physical reason for delta(ic). All the currents change at the same time due to Vbe change in the same direction.
Comment 2: I must admit that I really cannot understand some parts of the discussion about the "common base amplifier confusion" (title).
The questioner could not see how the collector current will decrease when the voltage at the emitter node of the shown circuit increases (due to an input signal).
Is there any doubt that the transistor will allow an emitter current Ie when the device is "opened" with a voltage Vbe=0.7 V, and that this current Ie is split into Ie=Ib+Ic? Is there any doubt that this current will be smaller for Vbe=0.65 V? So - what is the problem? The voltage Vbe=Vb-Ve will be reduced when the emitter voltage Ve is slightly increased due to the input voltage Vi at the emitter node. As a consequence, all three currents (Ie, Ib, Ic) will - according to Ie=Ib+Ic - decrease. That's all!
I cannot understand why - according to a written comment below - this view should be "misleading".
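A quick numeric illustration of how strongly Ic depends on Vbe, which is the heart of the answer above (the saturation current Is below is an arbitrary illustrative value; Vt is about 25 mV at room temperature):

```python
import math

# Exponential Ic = f(Vbe) relation: raising the emitter by 50 mV with the base
# held fixed drops Vbe by 50 mV and cuts Ic by a factor of e^2 (about 7.4).
IS = 1e-14    # saturation current, A (illustrative value)
VT = 0.025    # thermal voltage, V, at room temperature

def ic(vbe):
    """Collector current from the exponential diode law (ideality 1 assumed)."""
    return IS * math.exp(vbe / VT)

ratio = ic(0.70) / ic(0.65)
print(f"Ic(0.70 V) / Ic(0.65 V) = {ratio:.1f}")
```

This is why a small signal at the emitter, with the base grounded, modulates the collector current so effectively in the common-base stage.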
H: Why would you right shift 8 bits for 8 bit register?
MCU: ATmega328P. Datasheet: https://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-7810-Automotive-Microcontrollers-ATmega328P_Datasheet.pdf (Page 143 - USART)
Note: I am beginner :)
Using the baud rate formula found in this picture, we can calculate the UBRRn value for a 9600 baud rate: UBRRn = 103.
103 is loaded into UBRR0H and UBRR0L, with UBRR0H taking the 4 most significant bits. According to the datasheet, this is the method to load 103: UBRR0H = (103 >> 8); UBRR0L = 103;
103 = 1100111
Wouldn't the first line load 1100111 and then shift it right 8 bits, pushing the entire value out of the register to give 00000000 in UBRR0H?
And wouldn't UBRR0L then get 01100111?
FROM DataSheet
AI: This is a generic method. The UBRR value is an unsigned int, which makes it 16 bits wide (2 bytes).
To break this into discrete bytes as in the example, you need to shift the bits right by 8 to get the high byte.
In your specific example it doesn't matter, because the high byte is empty anyway, but what if you need a UBRR of 3096?
Also, you might be confusing the order of operations. Your example doesn't load the register and then shift it; it takes the binary representation of 103, shifts it, and then applies the result of that to the register.
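The whole calculation can be sketched in a few lines (Python here just to show the arithmetic; a 16 MHz clock is assumed, which is what gives UBRR = 103 at 9600 baud):

```python
# Datasheet arithmetic: UBRR = F_CPU / (16 * baud) - 1, then split the 16-bit
# result into high and low register bytes exactly as the C statements
# `UBRR0H = (ubrr >> 8); UBRR0L = ubrr;` do.
F_CPU = 16_000_000     # 16 MHz clock assumed
BAUD = 9600

ubrr = F_CPU // (16 * BAUD) - 1        # 103 for 16 MHz / 9600 baud
ubrr_h = (ubrr >> 8) & 0xFF            # high byte: 0 here, non-zero for big UBRR
ubrr_l = ubrr & 0xFF                   # low byte: 103

print(ubrr, ubrr_h, ubrr_l)            # 103 0 103

# The shift matters once UBRR exceeds 255, e.g. the 3096 mentioned above:
print((3096 >> 8) & 0xFF, 3096 & 0xFF) # 12 24
```

The original 103 is never modified: the shift operates on a copy of the value, and only the 8-bit results land in the registers.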
H: Reason of using a resistor parallel with NTC at mains input
I have an SMPS circuit. There is a series-connected NTC at the neutral input. I know the NTC is used for inrush current limiting. But there is a 20 ohm 2 W resistor in parallel with this NTC, and I don't know what it is for.
AI: This may be done to get a specific value of the NTC temperature coefficient. Not all resistance values are available as standard, and if you need a really specific value of resistance it is far cheaper to change it by adding a resistor in series or parallel than to buy a specialized NTC resistor.
H: How to multiplex 8 pieces of 74141 without transistors?
I'd like to use 8 pieces of 74141 in order to drive the same amount of Nixies with an Arduino. What approaches will I need to consider?
Update: I'm able to drive a single Nixie with a 74141. The problem is that there are not enough pins on the Arduino to drive more, so I need a solution for this.
AI: The trick is that you don't multiplex the 74141 parts, you drive them directly from a dedicated driver. You need to store the 4-bit value in some type of memory device, such as a latch, flip-flop, shift register, or more complicated IC. There are a lot of ways to attack the problem, depending on what you're most comfortable with.
Use a 4-bit latch or flip-flop (7475, 7477) to store one digit at a time. Use 8x 4-bit latches, with each clock/enable pin brought out separately.
Use a 4+ bit shift register with parallel outputs (74164) with each chip chained to the next one. Uses only 2-3 Arduino pins, but you must update all digits simultaneously.
Use an I/O expander (MCP23017) and an I2C interface on the Arduino. This allows you to address each digit individually while only needing two pins on the Arduino.
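A software model of the shift-register option makes the bit layout concrete (plain Python, no GPIO; this only shows what would be clocked into the chain, it is not Arduino code):

```python
# Software model of the chained shift-register approach: pack eight BCD digits
# into one 32-bit word and clock it out MSB-first into the chain of registers,
# so each 74141 ends up holding one 4-bit nibble.
def pack_digits(digits):
    """Pack 8 digits (0-9) into a 32-bit word, first digit in the top nibble."""
    word = 0
    for d in digits:
        word = (word << 4) | (d & 0xF)
    return word

def shift_out(word, n_bits=32):
    """Bit sequence that would be clocked into the chain, MSB first."""
    return [(word >> (n_bits - 1 - i)) & 1 for i in range(n_bits)]

word = pack_digits([1, 2, 3, 4, 5, 6, 7, 8])
print(hex(word))            # 0x12345678
print(shift_out(word)[:8])  # bits for the first two nibbles
```

On real hardware each list element becomes one data-pin write followed by a clock pulse, which is exactly why only 2-3 Arduino pins are needed for all eight digits.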
H: NE555 - astable flashing alternative LED circuit
"In the circuit, the LED1 lights when the output pin has a HIGH state and LED2 lights when the output is in a LOW state." - I couldn't understand this statement in connection with the diagram:
Here I see a normal LED1 connected between output pin 3 and ground, and I can understand how, when the output is HIGH (at Vcc), that LED turns on.
What I can't understand is how LED2 turns on when the output is low: LED2's anode is connected to Vcc and its cathode to output pin 3, so for current to flow and turn it on, the output at pin 3 must be low to create a voltage difference, and hence LED2 turns on.
But then, when current flows through LED2, where does it go afterwards? Pin 3 looks like a dead end when the output there is low. Can current flow into the push-pull output stage (which inverts the output from the flip-flop) and then to ground via the discharge transistor, or am I missing something?
AI: Your circuit redrawn is circuit 1 and there are two possible states circuit 2 and circuit 3.
Circuit 2 is the HIGH state of the 555 timer. In this state the voltage across the top LED is 0 (9 V on the anode and 9 V on the cathode), which means no current flows and that LED is off. Meanwhile, the bottom LED is on because there is a positive voltage across its terminals (9 V on the anode and 0 V on the cathode). This means that current flows from the 555, through the diode, to ground.
Circuit 3 is the LOW state of the 555 timer. This state is the opposite of the previous one. Now there is a voltage drop across the top diode and none across the bottom, so the top LED is on and the bottom is off. The current flows from the 9 V source, through the top LED, and into the 555.
simulate this circuit – Schematic created using CircuitLab
The internal circuitry of the 555 output pin is essentially this:
simulate this circuit
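The question's schematic doesn't give component values, but for completeness the standard 555 astable formulas show how the timing resistors and capacitor set the blink rate (the values below are illustrative):

```python
# Standard NE555 astable timing: f = 1.44 / ((R1 + 2*R2) * C), with the output
# HIGH for a fraction (R1 + R2) / (R1 + 2*R2) of each period.
# Component values are illustrative, not from the question's schematic.
def astable(r1, r2, c):
    f = 1.44 / ((r1 + 2 * r2) * c)
    duty_high = (r1 + r2) / (r1 + 2 * r2)   # fraction of the period output is HIGH
    return f, duty_high

f, duty = astable(1e3, 100e3, 10e-6)        # 1 k, 100 k, 10 uF
print(f"f = {f:.2f} Hz, HIGH {duty*100:.0f}% of the time")
```

With R2 much larger than R1 the duty cycle approaches 50%, so the two LEDs get nearly equal on-times as they alternate.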
H: Identification of Pin1 on LAN8720A 24SQFN package
Can anybody help me identify pin 1, please? Which corner and side is pin 1 on?
I find it confusing to relate the datasheet drawing
to the actual device.
AI: It's the white dot just below the letter B.
If you scroll to the next page of the datasheet that you copied that picture from, there is a picture of chips packed in tape, and it also shows the alignment of pin 1 with respect to the SMSC logo |
H: Unknown electrical component in circuit - "Chock"
Does "chock" (in the picture) mean inductor with unknown value?
AI: "Choke" is slang often used for a large-value inductor, used to "choke" ripple voltage by raising the series impedance above the LC resonance frequency, shown by the intersection of the green arrows for two useful choke values.
There are many clues in your schematic which show it was drawn by a nice novice with English as a second language (ESL).
The choke inductance value does not imply a high current rating or even low resistance (DCR), so choose wisely.
You may find this chart very useful for reading Q, and thus the resonance amplification, for series or parallel RLC circuits.
So "CHOCK" this one up to experience. I 1st used it in 1975. |
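As a rough numeric sketch of the "raise the impedance above LC resonance" idea (all component values here are illustrative assumptions, not taken from the schematic):

```python
import math

def choke_impedance(inductance_h, freq_hz):
    """Magnitude of an ideal inductor's impedance: |Z| = 2*pi*f*L."""
    return 2 * math.pi * freq_hz * inductance_h

def lc_resonance(inductance_h, capacitance_f):
    """Resonant frequency of the LC filter; the choke only attenuates
    ripple at frequencies above this point."""
    return 1 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Assumed example: 10 mH choke with a 1000 uF filter cap, 100 Hz ripple
L, C = 10e-3, 1000e-6
print(round(lc_resonance(L, C), 1))       # resonance at about 50.3 Hz
print(round(choke_impedance(L, 100), 2))  # about 6.28 ohms at the ripple frequency
```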
H: ESP32 stalls b/c of current spike (inrush current) - how to limit inrush current?
I have a circuit with several NeoPixels and an ESP32 MCU running MicroPython. Under full load, the circuit including the ESP32 draws around 200mA. Now I'd like to add another component, which is not controlled by the ESP32 but is powered from the same +5V net. The additional component is controlled by a switch. In its normal state, it draws ~330mA. However, when I first turn it on, it causes a current spike (I assume inrush current) which peaks at ~500mA. After the initial spike it goes back down to ~330mA. This causes the ESP32 to stall and not operate normally anymore until I take the load off.
For context, I'm running the circuit with +7.5V from a power supply capable of delivering 3A. The voltage is dropped to +5V via a LV7805CV 1.5A (including heatsink and decoupling capacitors). I'm looking to replace that with an LV78S05CV, which can handle 2A.
My question is how can I mitigate the initial inrush current of the component to prevent the ESP32 from being affected and continue normal operation. I understand that the current draw is very sudden and happens when the switch is pushed before the current draw levels off at ~330mA. Do I need inrush suppression?
Would an RC circuit before the additional component like this
How to limit inrush current?
make sense?
N.B. The additional component can be turned on/off anytime and multiple times during normal operation of the ESP32 causing a spike every time that happens.
Please let me know if something isn't clear enough.
AI: I'm looking to replace that with an LV78S05CV, which can handle 2A.
Instead of replacing the original regulator, add another one to power the problematic 'component'. This keeps the surge current away from the original regulator, which then only has to handle voltage sag on its input. A 78M05 would be a good choice for the second regulator because it limits current to ~0.8 amps maximum.
If your power supply can deliver this current without sagging below 7 V the regulator powering the ESP32 should have no trouble maintaining a stable output when the 'component' is switched on.
If your power supply can't handle the surge current without dropping below 7 V then you can add a diode and capacitor to the ESP32 regulator input to hold its input voltage up during the sag. The circuit looks like this:-
simulate this circuit – Schematic created using CircuitLab
Diode D1 prevents the 'component' from drawing current out of reservoir capacitor C1 when the power supply voltage sags.
Use a high current Schottky diode for lowest voltage drop (important because the 7805 has a 'dropout' voltage of ~2 V). Using an LDO (low dropout) regulator such as the L4941 to power the ESP32 could also help, as the reservoir capacitor can then discharge a bit more before the regulator drops out of regulation. |
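To size the reservoir capacitor C1, the usual back-of-the-envelope relation is C = I·Δt/ΔV. A small sketch with assumed figures (the load current, sag duration, and droop budget below are guesses for illustration, not values from the question):

```python
def holdup_capacitance(load_current_a, sag_duration_s, allowed_droop_v):
    """Reservoir capacitance needed to ride through a supply sag: C = I*dt/dV."""
    return load_current_a * sag_duration_s / allowed_droop_v

# Assumed: 200 mA ESP32 branch, 10 ms sag, 0.5 V droop budget before the
# regulator drops out of regulation
c = holdup_capacitance(0.2, 0.010, 0.5)
print(c)  # 0.004 F -> a 4000 uF (or larger) capacitor
```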
H: Step up 3.3V to 5V and draw 500mA without a voltage collapse/break down
I would like to step up 3.3V from my power supply with the help of a dc-dc-boost converter to 5V and then I want to draw 500mA by placing a 10 Ohm resistor between the output-pin of the boost-converter and ground.
The problem: The voltage drops down to 3.3V. (With 100 Ohm not)
Is there a module somewhere which can supply such a current draw? If yes, which one (SOT packages preferred)?
I already tried those converters: MT3608, Pololu S7V8A and SX1308
AI: A generic answer:
Your module has to be able to provide at least 500mA. However, in engineering you usually have to account for a certain safety margin, so if you need exactly 500mA output, the module should be able to provide more (how much more is a different topic than your question, but still important to consider in your design).
So as a ballpark calculation I'd calculate like this:
With Safety factor i: 1.5
P_out = I_out * U_out * i = 0.5A * 5V * 1.5 = 3.75W
As your boost converter has only a certain efficiency, the input power into it has to be higher:
With an efficiency n_eff of 85% <- just a generic value as an example; you should check the data of your module
P_in = P_Out / n_eff = 3.75W / 85% = 4.41W
From this input power you can calculate what supply current is necessary for the module:
I_in = P_in / U_in = 4.41W / 3.3V = 1.34A
So if your 3.3V power supply can provide 1.34A and your module can deliver 0.75A / 3.75W, then you should not see the voltage drop with a static 10 Ohm load resistor. |
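The same ballpark calculation, wrapped up as a small script (the 85% efficiency and 1.5 safety factor are the example values from above):

```python
def boost_input_current(v_out, i_out, v_in, efficiency=0.85, safety=1.5):
    """Ballpark input current for a boost converter, following the steps above."""
    p_out = v_out * i_out * safety   # required output power incl. safety margin
    p_in = p_out / efficiency        # converter losses raise the input power
    return p_in / v_in

print(round(boost_input_current(5.0, 0.5, 3.3), 2))  # 1.34 (amps from the 3.3 V rail)
```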
H: Constant or variable current when using a solar charger?
I am looking at the CN3083 chip for charging a Li-po battery using a small solar panel (5 V, 0.15 W).
To set the charging current, the pin ISET is used with a fixed resistor to set the constant charge current (max charging current = 600 mA).
In which case would I need a BJT to control the charging current rather than a fixed resistor?
Is there an advantage in having a variable charging current rather than a fixed one?
For example, if my solar panel is outputting 4.2 V or 5 V, wouldn't it just be better to keep the charging current as high as possible (600 mA)?
Here is the schematic of the IC I am using (CN3038).
AI: As long as it can keep Vin at 4.35 V or more, the CN3038 will adjust the charging current to what the source can deliver, as described on page 7 of the data sheet.
So, you can set the charging current to the maximum your solar panel can deliver (about 30 mA) or more, and the CN3038 will do the rest.
If I understand the datasheet correctly, setting the charging current to, say, 500 mA will also work; the CN3038 will reduce the charging current to what the solar panel can deliver at about 4.4 V, so there is no real reason to make the charging current variable yourself.
The CC phase of charging will not actually show a constant current when a solar panel is connected as a power source; it will be more like "use all available current (up to a maximum) without trying to make the current constant, until Vmax has been reached".
Note that if you do set the charging current to a high value, and you exchange your solar panel for a more powerful one, you would have to check if the set charging current isn't too high for the battery used. |
H: Electrons flow in Li-ion Battery
I'm new to electricity, and I have a question about electron flow inside a Li-ion battery. A lot of articles and videos show only the Li+ ions flowing from anode to cathode through the electrolyte during discharge of the battery, but there is no info about electron flow from cathode to anode inside the battery. As I understand it, there must be a closed loop of current, but there is also the notion that the separator doesn't let electrons flow through the electrolyte. So how does charge pass through a Li-ion battery to comply with the closed-loop requirement?
AI: Always there must be a complete circuit: a closed loop of flowing charges. But no electrons flow in electrolytes. The amperes inside the battery-electrolyte are composed of flowing charged atoms; the lithium ions.
Flows of positive charge are opposite that of electrons in wires. Of course this means that, inside the battery, at one electrode surface, the +Li and the electrons are coming together and canceling out, forming neutral lithium atoms. And at the other electrode, Li atoms are donating extra electrons to the metal surface, then corroding away as +Li ions, and flowing off into the electrolyte as amperes of current.
Amperes are the coulombs of charge flowing per second. NOT just electrons flowing per second. The coulombs of charge can be ANY mobile charged particle, and conductive salts/acids/humans contain no mobile electrons to do the flowing. (Only electrons alone are flowing ...inside metal conductors. Instead make that solid, unmoving metal conductors. If your conductors are liquid metal, or are vibrating metal, then the protons within the positive metal ions are moving as well. For amperes made of pure electrons, your metal conductors must be both solid and static.)
Non-electron currents? They're extremely common. Whenever a human gets zapped by high voltage, no electrons flowed through their body. The amperes inside human tissue was made of flowing +Na, +K, -Cl, +H, OH ions only. And current in the ground (damp dirt) has no electrons, just ion-flows. And in lead-acid batteries, all the amperes within the battery acid are actually made of flowing protons! (Same thing happens inside fuel cells, where the electrolyte-layer is a proton conductor or "solid acid" material.)
If you find a textbook which claims that electrons flow through salt water or through acid, throw that book away! That single misconception is bad enough, but probably the same books has even worse mistakes. Once they get into your brain, you may never get them out again. Heh, even better, chop that book up before throwing it away, so it can never harm the mind of anyone who someday finds it in the trash... |
H: Differential (RS-422) vs current mode signaling in noisy environments
I’m building some automation equipment for my shop that will be used in extreme noise environments (plasma cutting and TIG welding nearby, for example) and I’m trying to come up with the best signaling for connecting modules that is still reasonably cost-effective. The modules will live in a couple of rack cabinets, and have dual-shielded 8P8C modular cabling connecting units within each rack and between cabinets to carry both moderate-frequency (200kHz-4MHz) and near-DC signals.
My initial direction was to use AM26C31/32 RS-422 ICs on these links, for cheap and easy differential communications.
The ICs are rated for +/-7V common mode, and the power supplies are isolated. Therefore, the racks require some mechanism to limit ground potential differences. The RS-422/485 standard recommends terminating the cable shields at both ends through a few hundred ohms to tie the systems together, but if the building ground is continuous there is now a room-sized loop and it's connected to ground through a low impedance. Some of that might be mitigated by inductors on the shield-ground tie but I’m not sure to what extent, and high-frequency high-energy noise could radiate through the stub.
Second, in the case that the line is undriven, such as when a cable is unplugged, the received voltage has an indeterminate state. Therefore, they need fail-safe biasing, which potentially connects uneven power supplies.
I am now considering using a single-ended driver for one wire of each twisted pair, the other wire being a dedicated return tied to ground at the driver but not receiver. Receiving would be done by optocoupler, resulting in full isolation. This has the advantage of ignoring the GPD, removing the need to tie the cabinets strongly to ground and so while the ground loop through the building still exists, it should be better isolated from the ground plane. The other advantage is that logic zero in this system is <5mA or so, and the undriven state of the line is 0mA for a decent noise margin without biasing. Sadly, high-speed optos are expensive.
It’s still technically differential, but the swing is half that of the RS-422 drivers and I’m not sure how much unbalanced drive will affect noise emission at the wavefront vs the opposing magnetic fields of balanced drive. Four pairs share each cable so that noise would be of concern. CAN uses unbalanced drive in voltage mode and is generally considered pretty robust, but it’s not clear how much of that is its fault-tolerant protocol and lower data rates at short range vs RS-485.
I will have to use at least one of each, since off-the-shelf stepper drives use optoisolator inputs, and there is a required RS-485 bus to serve, but RS-485 transceivers are available with features to mitigate the need for biasing networks (though not GPD). So, given the possibility of a 200A welding arc going off a few feet away, what is the most noise-immune thing I can do? Can I solve the problems with RS-422 without going to expensive fully isolated receivers? Can anyone shoot some new holes in my non-EMC-engineer-approved plan? Am I overthinking it and both systems are good enough?
Thanks for any input anyone can provide.
AI: For best possible immunity, I would recommend current loop transmission. It can be shown with some calculation that current loop transmission is superior to voltage transmission by orders of magnitude.
The reason is that an induced error current of e.g. 1mA does not have much influence on a 10mA signal in a current loop, whereas in a voltage loop with e.g. 10kOhms receiver input impedance, this error current would lead to 10V error voltage.
Of course you would still need twisting and shielding of the cables. |
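Putting numbers on that comparison (the 1 mA induced error, 10 mA loop current, and 10 kOhm input impedance are the example figures from above):

```python
i_signal = 10e-3  # current-loop signal level
i_error = 1e-3    # induced error current from nearby noise
r_input = 10e3    # receiver input impedance in the voltage-mode case

loop_error_percent = 100 * i_error / i_signal  # relative error in the loop
voltage_error = i_error * r_input              # error voltage at the receiver

print(round(loop_error_percent, 1))  # 10.0 -> a 10% disturbance of the signal
print(round(voltage_error, 1))       # 10.0 -> ten volts of error!
```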
H: Resistor power rating and circuit stability
I've read the following excerpt from a book (Passive Circuit Analysis with LTSpice by Colin May page 96) but am not sure what that means (with no relevant explanation before nor after):
On Fixed Resistors: a useful point to consider if good stability and
reliability are essential is to derate the resistor so that its
temperature is far less than the operating maximum. Thus, if the
actual power is 1/10W, instead of a 1/8W resistor, use 1/2W.
Resistance of a resistor increases with temperature. So if the temperature is far less than the operating maximum, we're lowering the specified resistance of the resistor, and as a result the resistor (at the lower temperature) will load itself with a higher current. Under such a condition (same voltage, higher current) the resistor dissipates more power, hence it makes sense to choose a resistor with a higher wattage rating (1/2W instead of 1/10W). I don't see any meaning to it other than a reminder to switch to a higher-rated resistor when the temperature goes low.
How does choosing a resistor with a higher rating (1/2W) have anything to do with the stability or reliability of a circuit? And why is it considered "derating" when you switch to a 1/2W resistor to lower the temperature?
AI: Resistance of a resistor increases with temperature.
Well, they try to make them have low or zero temperature coefficient. When thermal stability is required, you can buy resistors with very low temperature coefficients.
So if the temperature is far less than the operating maximum, we're lowering the specified resistance of the resistor, and as a result the resistor (under lower temperature) will load itself with a higher current. Under such condition (same voltage and higher current) the resistor dissipates more power - hence it makes sense to choose a resistor with higher Wattage rating (1/2W instead of 1/10W).
Well, hold on there. The resistance of a resistor will be specified at 25C (room temperature). It is NOT a given that all resistors will go up with temperature. Typically the resistance is specified to have thermal drift of less than 200 ppm per degree C. That could be plus or minus 200 ppm. 200ppm per degree C does not lead to large resistance changes. Let's go through an example.
Suppose you have a 100 Ohm resistor. Let's assume the temperature coefficient actually is + 200 ppm per degree C.
So if it is 100 Ohms at room temp (25 C) what will be the resistance change at 100C? That is an increase of 75 degrees.
75*200/1000000 = 0.015
So the resistance will increase by 100 * 0.015 = 1.5 Ohms.
The main point of that advice, about de-rating your resistors, is that a higher power resistor will have a smaller temperature rise, and therefore a smaller resistance change. That is it. No thermal runaway feedback loop is implied.
When thermal stability is important, you can choose resistors with much lower thermal coefficients. I chose 200ppm because some cheap and readily available resistors are 200 ppm or 100ppm. |
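The drift calculation above, as a reusable one-liner:

```python
def resistance_drift(r_nominal_ohm, tempco_ppm_per_c, delta_t_c):
    """Resistance change for a given temperature rise and a tempco in ppm/degC."""
    return r_nominal_ohm * tempco_ppm_per_c * 1e-6 * delta_t_c

# The worked example: 100 ohm, +200 ppm/degC, heated from 25 degC to 100 degC
print(round(resistance_drift(100, 200, 75), 3))  # 1.5 (ohms)
```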
H: Combining different DC power supplies
I intend to connect two DC power supplies in series (12V/16A and 36V/1A) to achieve 48V.
Q1: how much current can I draw from such a combination? 1A, 16A, 17A...?
Q2: I understand that grounds must be isolated on the DC side, does this apply to the AC side of the PSUs? (Earth/ground or neutral?)
AI: A1: 1A.
A2: No, regardless of what kind of AC input it has, earthed or not, the DC output on at least one supply must be floating/isolated from ground/earth.
But I would not recommend connecting these in series. If you exceed the 1A limit then the 1A supply might shut down. If one of the supplies is off, the other supply would apply voltage via load to the shut down supply. E.g. if you turn on the 36V supply first and have a high power load or short circuit, almost all of the 36V would be in parallel over the 12V supply, and it would most likely get damaged. |
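The rule of thumb in code form: series voltages add, and the current rating is set by the weakest supply in the chain.

```python
def series_supply_rating(supplies):
    """(voltage, current) rating of supplies connected in series, given as
    (volts, amps) pairs. Voltages add; current is limited by the lowest rating."""
    total_v = sum(v for v, _ in supplies)
    max_i = min(i for _, i in supplies)
    return total_v, max_i

print(series_supply_rating([(12, 16), (36, 1)]))  # (48, 1)
```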
H: Power amplifier efficiency confusion
Assuming I am considering a single-stage class A amplifier: as we know, the efficiency of the amplifier is AC power out / quiescent DC power in. But when a signal is applied, shouldn't the formula be AC power out / instantaneous DC input power at that time? Because when the signal is applied, the DC input power should be more than the quiescent DC power, so the actual (instantaneous) DC input power at that time, not the quiescent input power, should be considered as the input power. Isn't that correct?
AI: An old chestnut..
The DC power into a class A amplifier is voltage x current. Now, if the current is a DC quiescent value superimposed with an AC value (representing the signal), you might think that DC voltage x AC current would factor into things. Well, it doesn't: what you find is that the DC power in is the DC voltage in x the DC current in, where the DC current in is the average value of the current.
In other words, for a class A amplifier, no matter what signal you have as your input (within reason of course), the DC power taken by the amplifier is constant.
If you overdrive the input and get a nasty distorted waveform shape out then all bets are off. |
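A quick numerical check of the claim that the average current (and hence the DC input power) is unchanged by the signal, using assumed values of 100 mA quiescent current, a 50 mA-peak sinusoidal signal, and a 12 V supply:

```python
import math

i_quiescent = 0.100  # A, bias current with no signal applied
i_peak = 0.050       # A, peak of the superimposed AC signal
v_supply = 12.0      # V
n = 100_000          # samples over one full signal period

currents = [i_quiescent + i_peak * math.sin(2 * math.pi * k / n) for k in range(n)]
i_average = sum(currents) / n

print(round(i_average, 6))             # 0.1 -> the average equals the quiescent current
print(round(v_supply * i_average, 3))  # 1.2 -> DC power in stays Vcc * Iq
```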
H: How individual LEDs in LED strips (or matrixes or cubes) are being powered when supplied with >=12V instead of 5V?
I've been wondering about LED strips and how its individual LEDs are being powered.
My knowledge about LEDs is very basic. I know they're diodes, and I know they tend to draw more current than is good for them, hence the current needs to be limited. That happens e.g. via a constant current power supply or a resistor in front of every LED. The former, as far as I understand it, only makes sense if you know exactly how much current is to be drawn, which maps to knowing the exact number of LEDs to be driven; that is not the case for LED strips, which you can cut almost anywhere.
On top - at least that's what I'm assuming writing this post - individual LEDs need to be supplied with moderate voltages <= 5V, otherwise they might get damaged.
So, there's those WS28xx protocol driving LED strips, cubes, matrixes, .. where you're able to address single (groups of) LEDs individually.
And then there's those "5050" LEDs, which I understand as physical packages for (RGB(W)) LEDs having whatever control logic on top of it (e.g. implementing one of those WS28xx protocols).
There's those famous WS2812B LED strips with 5050 packages containing RGB LEDs where you can't only address every single RGB 5050 package, but also every single physical LED inside (=every color). Those strips can be cut after (almost) every 5050 package and are powered by 5V which makes sense to me, as you can "just" power each physical LED with those 5V (again, limiting current still necessary).
Then there's those WS28xx strips which can be powered by 12V where however only LEDs of groups of three can be individually addressed (which probably doesn't mean the three physical LEDs (=colours) in one 5050 package, but one physical LED (=colour) in three 5050 packages). Which also sounds sane to me, as 12V/3 = 4V which probably is also fine to drive individual LEDs.
Makes me wonder, though, how to cut those strips. According to my understanding I would only be able to cut them into groups of three: if the last part were a group of only two or even one 5050 package, it would receive too high a voltage of 12V/2 = 6V or even 12V/1 = 12V. Yet at least the dumb (non-WS28xx) 12V/24V RGB and white-only strips I have here can apparently be cut after each 5050 package.
Then there's the relatively new WS2815 RGB LED strips, which are also powered by 12V, can be cut after each 5050 package, and where every physical LED (as in: every colour within each 5050 package) can be individually addressed without any grouping.
Question:
How do those >= 12V LED strips - which can be cut after each LED / 5050 package - manage to drive the individual (physical) LEDs at non-damaging voltages?
Half-knowledge: I doubt every package has a DC-DC step-down converter in it - it would be too big / costly. Also, I can't imagine the remaining voltage (12V - 5V = 7V) just being dissipated as heat (that sounds like a horrible waste of energy, and I'd also wonder if those strips could even handle the heat). When looking at some (admittedly rather dodgy) datasheets of the new WS2815 strips, I can see that for some there's a mention of 2kHz PWM control as part of the 5050 package. Are those individual physical LEDs maybe PWM'ed with >=12V in a way that they don't get damaged? If not that directly (because it's still >=12V), I could imagine the PWM curve being flattened so it boils down to some reasonable voltage? This is only speculation based on half-knowledge, though.
AI: how to cut the strips then
Where the label says ;)
Standard 12V LED strips use series strings of LEDs. Since the Vf of a white LED is a little bit above 3V, they simply wire 3 LEDs in series which makes a bit more than 9V. A current setting resistor is added to each series string to soak up the extra voltage.
24V strips use the same arrangement with 6 or 7 white LEDs in series. If they use 7 LEDs instead of 6 you get a little bit more efficiency since less voltage is wasted in the resistor, at the cost of a worse current regulation since current will depend more on voltage.
So if you look at the strip, you'll see one resistor per series string of LEDs.
Now you could say... red or green LEDs use less than 3V so surely we could use more LEDs in series... I've never seen a strip that does that, presumably they use the same PCB for all colors, which means red strips waste a lot more voltage on the resistor.
Individually addressable LEDs use a different scheme.
This is the WS2811 chip which is similar to what you'll find inside a WS2812 LED. As you can see from the schematic, it has three constant current sources (one per color). So it needs a supply voltage equal to at least the highest Vf of all three LEDs, which will be the blue at around 3V, plus whatever voltage the constant current circuit inside the chip requires. And yes, excess voltage is turned into heat. This is a problem for green and red, which have lower Vf, thus more wasted voltage.
Note that LEDs are driven with constant current by a circuit inside the chip, that is switched on and off by PWM. LEDs are not connected to supply voltage directly by a simple FET switch, as your question implies. Basically, if the supply is 5V and the LED needs 3V to get the desired 20mA current, then the circuit in the chip will drop the extra 2V. So you get 3/5 useful power into the LED, and 2/5 wasted as heat.
The illustration above also shows a way to use a higher power supply voltage by wiring more LEDs in series. In this case, you'll no longer get individually addressable pixels since each chip controls 3 RGB LEDs, but you'll get higher efficiency, lower current, and better tolerance to voltage drop along the strip. It could also be cheaper, since it uses less chips. This is what you'll find in those 12V or 24V addressable RGB strips that can only be cut every 3 or 6 LEDs.
Another option is GS8208 ; I believe WS2815 uses the same arrangement:
This one has only one constant current source, and it wires the LEDs in series, with internal FETs that short the LEDs that are not in use. So, one chip can control one RGB LED with 12V and minimum waste of voltage.
If you want white light, then 12V GS8208 will be more efficient than 5V WS2812, but if you only want red or green or blue, it will still use the same amount of current to power a single color LED from 12V, which means all the excess voltage will be turned into heat inside the chip, and it will be less efficient than WS2812... |
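A small sketch of the efficiency argument above (assuming ~3 V per white/blue LED; real forward voltages vary by color and current):

```python
def string_efficiency(v_supply, v_led, n_series):
    """Fraction of the power that reaches the LEDs in a series string; the
    remainder is dropped as heat in the resistor / constant-current circuit."""
    v_leds = v_led * n_series
    if v_leds > v_supply:
        raise ValueError("not enough supply headroom for this string")
    return v_leds / v_supply

print(string_efficiency(5, 3, 1))   # 0.6  -> the 5 V WS2812 case: 3/5 useful, 2/5 heat
print(string_efficiency(12, 3, 3))  # 0.75 -> 12 V strip, three LEDs per string
```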
H: Ratio on Vbat measurement STM32F303
This question might possibly be answered somewhere, but as I'm googling the particular phrase in the title, I couldn't find anything. I'm fairly new to designing development boards. I use the STM32F303 and came across a certain description in the manual that I don't understand.
I want to connect my battery with the V_Bat pin of the STM32 and I don't understand what is meant by the second line "Ratio on V_Bat measurement: 2". Can please somebody explain this to me with a possible circuit example. Thank you in advance.
AI: It is described in the datasheet, in the battery voltage monitoring chapter.
The battery voltage can be higher than VDDA used for the ADC so it could be out of range, so battery voltage has a voltage divider that divides the voltage by 2 for ADC measurement.
It means that your ADC measurement is half of actual battery voltage. |
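In firmware the correction is just a multiply by the ratio. A sketch, assuming a 12-bit ADC and VDDA = 3.3 V (check your own reference voltage and resolution):

```python
def vbat_from_adc(raw, vref=3.3, resolution_bits=12, ratio=2):
    """Recover the battery voltage from an STM32 VBAT-channel reading.
    The internal divider halves VBAT, so the result is scaled back up by 2."""
    full_scale = (1 << resolution_bits) - 1
    return raw / full_scale * vref * ratio

print(round(vbat_from_adc(2048), 3))  # 3.301 -> a mid-scale reading maps to ~3.3 V
```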
H: Is there a current in \$R_1\$ in this precision rectifier if so, where does it go?
So Wikipedia on Precision Rectifier says that :
... when the input is greater than zero, \$D_1\$ is off, and \$D_2\$ is on, so the output is zero because the other end of \$R_2\$ is connected to the virtual ground and there is no current through \$R_2\$
Now when we look at \$R_1\$, it has a potential drop of \$V_{in}\$ across its ends, so a current must flow through it. But then where does it go? I cannot see a closed loop in which current can flow. It can only go through \$D_1\$, but then it cannot flow back into the op amp.
So my question is : In which path does the current flow? or If there is no current flowing through \$R_1\$ how is there a potential difference across it?
AI: But then where does it go? I cannot see a closed loop in which current
can flow. It can only go through D1 but then it cannot flow back into
op amp.
Sure it can (but not through D1). The op-amp output will be about 0.7 volts negative (in order to preserve the virtual ground at -Vin) and positive current will flow through R1, then through D2 and to the op-amp output: - |
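Putting illustrative numbers on that path (the input voltage, R1 value, and diode drop below are assumed values, not from the question):

```python
v_in = 1.0      # V, positive input
r1 = 10e3       # ohms
v_diode = 0.7   # V, forward drop of D2

# The inverting input is a virtual ground, so the full Vin appears across R1.
i_r1 = v_in / r1
# The op-amp output must sit one diode drop below ground to keep the
# virtual ground at 0 V while sinking that current through D2.
v_opamp_out = 0.0 - v_diode

print(i_r1)         # 0.0001 -> 100 uA through R1, then D2, into the op-amp output
print(v_opamp_out)  # -0.7
```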
H: Local voltage regulatıon on large board with 12V - 24V power input - is it a good idea?
I am designing a board for my Bachelors thesis. The board takes 12V - 24V as an input. The dimensions are 100mmx100mm. It will be a 4 layer board and has an onboard antenna, multiple sensors and a Raspberry Pi compute module 4 as well. I am thinking of reserving 1 layer as a power rail and 1 layer for GND. My question is:
Should I regulate the 24V voltage into 3.3V & 5V near the VCC, use wide track width and then just use low pass filters near the Vin of ICs?
or
Should I feed 24V into the power rail and regulate the voltage locally at multiple points on the board?
AI: A single regulator for 5 volts and a single regulator for 3.3 volts is all you'll need by the sounds of it. Use your power layer to route 5 volt and 3.3 volt supplies but, try and place those regulators reasonably close to where they are needed. Don't forget to calculate heat dissipation of these regulators if linear types are used. I might consider using a buck regulator to generate the 5 volt and then tee-off 5 volts with a linear regulator for 3.3 volts.
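The dissipation check mentioned above is simple arithmetic; the load currents here are assumptions for illustration:

```python
def linear_dissipation(v_in, v_out, i_load):
    """Heat dissipated in a linear regulator: P = (Vin - Vout) * Iload."""
    return (v_in - v_out) * i_load

# 24 V straight down to 5 V at an assumed 500 mA: clearly a job for a buck
print(round(linear_dissipation(24, 5, 0.5), 2))   # 9.5 (watts of heat)
# Buck to 5 V first, then a linear 5 V -> 3.3 V stage at an assumed 200 mA
print(round(linear_dissipation(5, 3.3, 0.2), 2))  # 0.34 (watts, easily manageable)
```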
The on-board antenna may require removal of any copper on all layers directly below it.
use wide track width and then just use low pass filters near the Vin
of ICs
Local decoupling capacitors to ground is probably all that is needed. Rarely, something like a ferrite bead or small value series resistor can improve things and a bit more rarely, a series inductor can help. It really depends on what the chips are and what the data sheets say about supply filtering. |
H: Trying to program an STM32 device with ST-LINK_CLI.exe
I have a project developed using STM32IDE; all is well, and I want to develop a test script to program the devices in bulk using the ST-LINK/V2. To do this I want to invoke ST-LINK_CLI.exe
I can connect to the device and erase it fine like this:
ST-LINK_CLI.exe -ME
When I take the .bin file from my project I can program it onto the device but the chip does not run. In addition if I add the -V option to verify, there is a problem:
C:\my_dir>ST-LINK_CLI.exe -P C:\Temp\my_img.bin -V
STM32 ST-LINK CLI v3.5.0.0
STM32 ST-LINK Command Line Interface
ST-LINK SN: xxxxxxxxxxxxxxxxxxxxxxxxxxxxx
ST-LINK Firmware version: V2J37S7
Connected via SWD.
SWD Frequency = 4000K.
Target voltage = 3.2 V
Connection mode: Normal
Reset mode: Hardware reset
Device ID: 0x413
Device flash Size: 512 Kbytes
Device family: STM32F405xx/F407xx/F415xx/F417xx
Loading file...
Flash Programming:
File : C:\Temp\my_img.bin
Address : 0x00000000
Memory programming...
▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 100%
Memory programming...
No OnChip verification function found. Verify after programming will be performed.
██████████████████████████████████████████████████
Reading and verifying device memory...
▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒ 0%
Programming error @: 0x00000188
What's the issue? The bin file should be a memory image, right? It does not need to be loaded at a specific address? I've searched and looked in the docs, and I can't find a reason why this would not work.
Many thanks.
AI: OK, solved. The .bin file in fact needs to be loaded at address 0x08000000; then all is well. The correct command is:
ST-LINK_CLI.exe -P C:\Temp\my_code.bin 0x08000000 -V |
H: Unsure of the role of these transistors while reverse engineering a keypad PCB
I am trying to reverse engineer this PCB from a keypad to use it with an Arduino, but my transistor based knowledge is a bit lacking.
I have gotten this far with regards to the architecture of the board; JP1 Pins 1-8 are connected to the bottom pin on TR1 to 8. All the bottom pins of TR1 to 8 are common, and go to Pin A on JP2. The middle pin on the transistor is connected to some of the switches, and the other side of the switches goes to pins 9 to 15.
I have drawn out the scenario for SW13 (In the orange box).
My issue is I don't really know where to go from here.
I can see maybe one of three scenarios;
I apply a voltage to pins A and 4, and monitor the state of pin 10
I apply a voltage to pins A and 10, and monitor the state of pin 4
I apply a voltage to pins 4 and 10, and monitor the state of pin A
I know I will have to scan it like a matrix, but I am fairly confident in how that works once I have things figured out electronically.
Or, I have things completely wrong.
For the sake of completeness, JP3 goes to a daughter board with another button and two LEDs on, and TR 9-13 seem to be LED related.(D1,D2)
The writing on the transistors appears to be 24 0E (or 0E) but I can't seem to find anything for this on google to help me find what type of transistor and what the pinout might be.
Any help on the right direction would be much appreciated.
AI: Chances are the '24' transistors are DTC114ECA prebiased transistors (10K + 10K).
'G1F' is probably BC847B NPN transistor. Unlike those of the prebiased persuasion, those will need base resistors or you'll burn out the transistor if you apply much voltage wrt ground. One of them obviously drives D2 LED and the others probably something similar off of JP3.
It's generally a good idea to put 1K in series with one side of whatever you're probing with.
That should be enough for you to figure out the schematic. |
H: Is it bad practice to always have a MOSFET on?
I am designing a circuit where I have a microcontroller driving
a four-channel logic level N MOSFET module. The simplest way for my circuit to work is to have the microcontroller run "HIGH" on the MOSFET gate for an extended period of time (1 hour on for every 10 seconds off). Assuming the heat dissipation is within specifications, is it generally "acceptable" to have a MOSFET on this long? Or would an experienced circuit designer find a way to accomplish the same thing by inverting the logic?
For reference, I am designing an LED neon sign. One of the MOSFETs is meant to "flicker" the entire light. Turning the MOSFET off briefly cuts the power supply. Turning it back on reestablishes the power. The other MOSFETs turn on/off different parts of the light. I only have N channel MOSFETs, I am trying to avoid breadboards/resistors, etc. I believe I am able to dissipate enough heat to avoid a heat sink.
Here is a circuit below. The 100 ohm resistors are just placeholders. The bottom three 3.3 V are the microcontroller I/O pins. The top 12 V is the LED power supply. I will add pull-down resistors to keep the I/O pins and MOSFET gates low.
AI: No, it is not necessarily bad practice to have a MOSFET always on. They are frequently used as power switches.
Since you only have N-channel MOSFETs, you probably want to do it something like this:
simulate this circuit – Schematic created using CircuitLab
This is called low-side switching. You have done that properly with your two elements, but you need to move your main switch to the low side as well. This way your signal inputs are ground referenced. If the sources of your MOSFETs are not at ground you won't be able to activate the gate properly, because how 'on' a MOSFET is depends on the gate to source voltage.
SIG1 turns the whole thing on and off, while SIG2 and SIG3 control the individual segments.
You also will want pull-downs on the gates so they turn off reliably.
Make sure you use logic-level MOSFETs that will be fully turned on by 3.3V (don't use the threshold voltage for this). |
H: How to build a low pass filter with limited resources?
I need to build a low pass filter, to filter out frequencies above 550 Hz. By the cutoff frequency formula fc = 1/(2πRC), RC has to be around 1/3500 s so that fc is approximately 550 Hz. I have a lot of resistors, from 220 ohms to 10k ohms, but my capacitors are ceramic ones that range from 2 pF to 100 nF. Even with the 100 nF capacitor, my resistance would have to be huge in order for the calculations to come out to approximately 1/3500 (this sentence could be wrong, I'm new to electrical engineering). Is there a way to combine the resistors and capacitors that I have in order to build a low pass filter that filters out frequencies over 550 Hz? Do I have to buy bigger capacitors?
AI: 550 Hz = about 3500 rad/s, corresponding to a time constant RC of about 289 µs.
With a 100 nF capacitor, you only need about 2890 Ω to get that time constant.
10 kΩ and 27 nF gets you there as well. |
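The arithmetic above is easy to sanity-check in a few lines (the helper name is just for illustration):

```python
import math

def lowpass_r(fc_hz, c_farads):
    """Resistor giving cutoff fc for a first-order RC low-pass: fc = 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * fc_hz * c_farads)

fc = 550.0       # desired cutoff frequency, Hz
c = 100e-9       # the largest capacitor on hand, 100 nF
r = lowpass_r(fc, c)
tau = r * c      # time constant in seconds

print(f"R = {r:.0f} ohm, tau = {tau * 1e6:.0f} us")
```

So the 100 nF part needs only a resistance near the top of the 220 Ω to 10 kΩ stock; no bigger capacitors are required.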
H: A computer architecture joke
Background
Some months ago, I watched a Youtube video posted by the channel Numberphile. The video was on famous world mathematician Terence Tao, and in the video, he seemed to be struggling with speech. I don't know if he has any speech impediments or a stutter, but there were several comments under the video stating that it was because his brain moves too fast for his mouth to catch up. However this one in particular caught my eye:
His brain 240 FPS, his mouth 60 hz
to which I responded:
FPS and hz aren’t comparable mate
Now, it is my understanding from the computer architecture courses I took at university that hertz (cycles per second) and fps (frames rendered per second) are not directly comparable units. In fact, for the joke to have been more effective, in my opinion the units should have been reversed (although exaggerated) to be:
His Brain 240 ghz, his mouth 60fps
as I assume the initial joke was implying that Tao's brain is a fast CPU while his mouth is a slow renderer. I got some serious insults and backlash my way as a result of making this comment, so much that I contemplated the validity of my university computer science education.
So if there's any experts out there on this topic, can you please confirm or deny if the initial joke was well stated, and makes sense from a computer architecture point of view, and if I am in fact the idiot on youtube. The link to the video is: https://www.youtube.com/watch?v=MXJ-zpJeY3E&t=84s&ab_channel=Numberphile and the joke is the second comment. Thanks, I just want to be educated lol.
AI: Hz and fps are both cyclical rates; however, Hz is a pure rate (\$\large\frac{1}{s}\$) while fps is in units of frames (\$\large\frac{f}{s}\$), i.e. a symbol rate that can be expressed as bps.
Sometimes a measure or count of (system clock) cycles is implied by hertz, especially in informal discussion such as a joke. |
H: How to prevent a Lipo/Li-Ion-Battery from catching fire after a complete self discharge with a following accidentally recharge
From the following Youtube video (at 3:15min) I got that it is dangerous to recharge a Lipo/Li-Ion-Battery after it was completely discharged (0V), because it will catch fire.
https://youtu.be/1rg3ZWxBNUE?t=194
I'm using the following circuit. It cuts off the load at 2.5 V, but then there is still self-discharge, which brings the voltage down to 0 V over time:
Is there a common or established way to prevent recharging a completely discharged battery, which is fitting to the circuit from above?
AI: When a lithium-ion battery is over-discharged, copper from the anode current collector can become dissolved and move through the cell, shorting or providing a low impedance path from anode to cathode.
If you were try to charge a battery in that condition with the normal charge current, the battery would likely overheat and enter thermal runaway.
The condition is somewhat reversible, but the charger has to be able to sense and recognize it, and enter a low-current precharge phase until the copper is re-deposited on the anode's current collector, and the cell voltage rises to a point where it's safe to increase the charge current.
Of course there will be some permanent degradation of the cell at that point.
Some systems support that type of feature, but if they do not, as in the case of yours due to your protector, they will just refuse to charge the cell. (Which is probably the safest way to go.) |
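A minimal sketch of the decision logic described above, with assumed thresholds (real chargers implement this in hardware, and the exact voltages vary by chemistry and vendor):

```python
# Thresholds below are illustrative assumptions, not values from any datasheet.
MIN_SAFE_V = 1.0        # below this, many designs simply refuse to charge
PRECHARGE_V = 3.0       # below this, use a gentle precharge current
PRECHARGE_A = 0.1       # low current while the cell (hopefully) recovers
FAST_CHARGE_A = 1.0     # normal constant-current phase

def charge_current(cell_v):
    """Pick a charge current from the measured cell voltage."""
    if cell_v < MIN_SAFE_V:
        return 0.0            # deeply discharged: safest to refuse charging
    if cell_v < PRECHARGE_V:
        return PRECHARGE_A    # precharge phase
    return FAST_CHARGE_A      # normal charging

print([charge_current(v) for v in (0.0, 2.0, 3.7)])  # [0.0, 0.1, 1.0]
```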
H: Altium - Use Net Tie on a Plane Layer
Is it possible to use a net tie on a plane layer? I know how to use a net tie to connect two different polygon pours together. But how would one go about this on a plane layer -- which is a negative layer?
AI: The best alternative I managed to come up with so far is to convert the plane layers into signal layers in the Stackup Manager then use ordinary polygon pours on the new signal layers instead of split planes. The only inconvenience this will cause is that one has to remember not to leave any areas un-poured by accident. |
H: Diagnosing water heater error - Need help with diagram
I'm diagnosing an error in a water heater. The error code indicates that the error is related to the connectors X2. Between both connectors X2 there is this Symbol "/" with the number 5. I have marked it with slight yellow. What does this symbol indicate?
AI: The slash indicates that the schematic line indicates a bus of five wires, rather than just a single wire. For the /5 example you call out, it means that all five pins on the X2 connector of item 1 are connected to the corresponding five pins of the X2 connector (item 13). If connector X2 is implicated in a troubleshooting guide, you may want to use a multimeter to ensure that all five pins of the cable still have continuity from one end to the other, and there are no short circuits between adjacent pins. This is in addition to any further measurements (e.g. of voltages on those pins) that the manual recommends.
Likewise, item 2 presumably has a single three-pin cable going to the three-pin connector X10 (item 12).
The /1 going to item 3 is a bit of an oddball - there's a single wire, and it's specifically called out as a single wire. The return for that signal might be chassis ground (given the dotted lines between that wire and ground on/near the X7 connector). |
H: If you have a MOSFET in series with a thyristor, can turning the MOSFET off cause the thyristor to turn off?
This is a purely theoretical question. The values in the circuit below do not matter, and I understand the circuit is completely useless.
Consider if SCR1 and M1 are both turned on so that current is flowing through the resistor. If you turn off the MOSFET, will the thyristor turn off? If so, how long will it take, assuming it has a typical turn-off time of \$ \text{T}_q\$. Thanks a lot for the help!
simulate this circuit – Schematic created using CircuitLab
AI: If turning the MOSFET off causes the current through the SCR to drop below its holding current, then yes, the SCR will turn off. The MOSFET could also be placed in parallel with the SCR; turning that MOSFET on would likewise cause the current through the SCR to drop below the holding current.
How long would it take? Maybe a couple of microseconds, depending on the specs of the SCR. [edit] According to the Motorola datasheet for the C106 SCR, the turn-off time is around 40 µs.
H: What inductors are used to reduce coil whine?
I was wondering about this after two years of heavy coil whine from a pS4Pro power supply. It looks like the worst designed PSU ever made, and I'm aware some modern ATX PSUs have reduced coil whine to close to zero by using inductors.
What kind of inductors are these and how they are placed on the board I wonder?
AI: It's probably not the type of inductor used that has reduced the whining (those haven't changed much over the last 10 years or so), but the higher switching frequencies used now, well above the audio range.
Higher frequencies in switching power supplies allow for smaller components, using less PCB space, and the trend has been in that direction, despite possible higher switching losses. Making them "silent" is a happy side effect.
Also, reducing inductor noise isn't a design criterion for all manufacturers; perhaps some do this better than others.
Apart from that, some physical form factors of inductors do whine more than others, and their mounting plays a role as well. I don't know if there are Special Super Silent™ inductors out there, though; I never noticed any advertised like that in suppliers' catalogues.
BTW if it really is driving you crazy, try applying a dab of glue or nail polish to the inductor in a few places and let dry; it tends to silence them.
Note that capacitors can also make a sound all on their own, especially ceramic ones (piezo-electric effect), so the inductor (magnetic field, magnetostriction) isn't necessarily the (only) source of the whine. You can read more about that here: What might be the cause of high pitch sound coming from a switching regulator circuit? |
H: How does the resistance-capacitance oscillator shift voltage by 180 degrees?
I'm trying to wrap my head around how the resistance-capacitance oscillator shifts voltage by 180 degrees.
As far as understand, a capacitor shifts phase for current, not voltage.
What am I missing here?
AI: You are correct that a capacitor "shifts phase for current, not voltage" or, more accurately, that the current in a capacitor is out of phase with the voltage. If the capacitor were ideal, the phase difference would be \$90^{\circ}\$.
However, the phase shift network in an RC phase shift oscillator does not consist only of capacitors, but also resistors.
If a voltage sine wave is applied to a capacitor and resistor connected in series, and an output voltage is measured across the resistor, there is a frequency-dependent phase shift between the voltage across the resistor and the voltage across both components together. That phase shift, for ideal components, lies between \$0^{\circ}\$ and \$90^{\circ}\$.
The reason for this voltage phase shift is that the current through the capacitor is out of phase with the voltage across it. This causes the current through the resistor to be out of phase with the voltage across both components. In turn, the current through the resistor is in phase with the voltage across the resistor.
In your diagram, you have 3 RC phase shifters connected in series. Typically, phase shift networks are separated by amplifiers or buffers of some sort. The reason is that if the current through the capacitor is shared between a resistor and further circuitry, that further circuitry has an effect upon how the phase shift works. The further circuitry "loads" the phase shift network.
If the phase shift networks were identical, and separated by amplifiying or buffering circuitry, at the frequency where each network shifts the phase by \$60^{\circ}\$, their combined effect will be to shift the phase by \$180^{\circ}\$, providing an appropriate phase shift for an oscillator. |
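This can be checked numerically for one ideal, buffered C-series/R-shunt stage (component values below are arbitrary assumptions): each stage shifts by \$60^{\circ}\$ at the frequency where \$\omega RC = 1/\sqrt{3}\$, so three of them give \$180^{\circ}\$.

```python
import cmath, math

r = 10e3    # 10 kohm, assumed
c = 10e-9   # 10 nF, assumed

def stage_phase_deg(w, r, c):
    """Phase of one buffered high-pass stage H = jwRC / (1 + jwRC)."""
    h = (1j * w * r * c) / (1 + 1j * w * r * c)
    return math.degrees(cmath.phase(h))

w = 1.0 / (math.sqrt(3) * r * c)   # frequency where one stage shifts 60 degrees
per_stage = stage_phase_deg(w, r, c)
total = 3 * per_stage              # three buffered stages in cascade

print(per_stage, total)            # 60.0, 180.0
```

Without buffers the stages load each other and the \$180^{\circ}\$ frequency moves, which is why the amplifier-separated case is the easy one to analyse.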
H: Why Doesn't the Current in a Transformer Diverge?
I just started learning about mutual inductance and transformers, and I haven't been able to find the answer to this question. I suspect it's really basic, and that I'm just missing some understanding.
Say I have 2 coils, A and B, both wrapped around a common volume, and the two current directions denoted -> and <-.
If I quickly increase the -> current in A, then this induces a large voltage in B, causing a <- current.
As the current in B now increases, it will again induce a voltage and -> current in A. But the current in A was already going in the -> direction, so it looks like it just got a boost which obviously violates conservation of energy. Where did I go wrong?
AI: Lenz's Law states that the direction of the electric current which is induced in a conductor by a changing magnetic field is such that the magnetic field created by the induced current opposes the initial changing magnetic field.
So, rather than causing an uncontrolled "positive feedback loop", growing without bound, the currents in the coils of a transformer form a "negative feedback loop", which creates a stable, finite output.
Edit:
Transformer action is often explained by considering components of the transformer current.
First, assume that there is no load on the secondary. It is open circuit. That makes the primary a simple inductor. If the inductor were ideal, the current through the inductor would be \$90^{\circ}\$ out of phase with the applied voltage. This current is called the excitation current, or magnetization current. This magnetization current creates flux in the core. The changing voltage causes changing current, which causes changing flux, which induces a back emf (voltage) across the primary, which exactly matches the applied voltage.
Even though, at this point, there is no load on the secondary, the secondary does enclose a changing flux. So (again ignoring resistance, leakage inductance, and other complicating factors) the changing flux that induces an emf in the primary also induces an emf in the secondary, and the ratio between the two emfs is set by the turns ratio. That is, the voltage on the secondary tracks the voltage on the primary by a constant factor.
Now, when we add a load, things get more complicated. But we will consider this magnetization current as a component of whatever current may happen to exist in the primary. It will, independent of whatever else happens, cause a flux in the core, a changing flux, that will induce a voltage in both the primary and secondary coils, and these induced voltages will be instantaneously proportional to each other.
So what happens when we add a load to the secondary? Let's assume it is a resistive load. Since there is a voltage on the secondary, the resistive load will draw an instantaneous current which is proportional to the instantaneous voltage. That current will cause a proportional replica of itself to exist in the primary, a component separate from (and out of phase with) the magnetization or excitation current. The load current in the secondary and the current it causes to exist in the primary work in opposite directions as far as creating flux goes. In combination (again assuming ideality), they do not add any flux at all to the transformer core. The flux is completely determined by the magnetization current, rather than by the load current.
This explanation may leave you with more questions, but it will perhaps allow you to focus more clearly upon what those questions are. |
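As a hedged numeric sketch of these relations (ideal transformer, made-up values; resistance, leakage inductance, and the magnetization current's phase are all ignored):

```python
# Ideal-transformer bookkeeping with assumed values.
n1, n2 = 200, 20        # primary and secondary turns
v1 = 120.0              # applied primary voltage
r_load = 6.0            # resistive load on the secondary, ohms

v2 = v1 * n2 / n1       # secondary voltage set by the turns ratio
i2 = v2 / r_load        # current drawn by the resistive load
i1_load = i2 * n2 / n1  # load component of the primary current
# The magnetization current adds to i1_load, 90 degrees out of phase,
# and is what actually sets the core flux.

print(v2, i2, i1_load)  # 12.0 2.0 0.2
```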
H: Mesh current formulas give two different results for i2. Why does the first loop provide an incorrect current value?
Mesh current formulas give two different results for i2. The formula derived from the second loop provides a current of 1 A, which is the correct answer. However, when I write my equation using the first loop I get a current of 5 A. Why does the first loop provide an incorrect current value? My work is below. I also provided a screenshot of Multisim verifying that the correct i2 value should be 1 A.
Multisim verifying i2=1A
AI: There is voltage across the current source. Therefore, your KVL equation for the first loop is not correct.
It is logical to find this voltage with the solved \$i_2\$ in your second loop,
\$V_{2A}=12 + 4\times (i_1-i_2)=12 + 4\times 1=16 \ \rm V\$ |
H: Figuring out pin 1 on this IC
I have this ic:
https://www.ti.com/product/SN74CBT3251
However, if you look at the actual packaging, there is no circle or dot with which to identify pin 1.
This is a picture of the IC:
Can someone give me a tip, on how to identify which is pin 1?
AI: I am sure the other answers are probably right. But the package has a subtle asymmetry. One side is beveled slightly. That side is the side where pin 1 will be found. See picture (annotated by me, but from datasheet linked in question). |
H: Microstrip transmission line characteristic impedance
The question is from GATE 2016-ECE-paper 2.
Here in the question, we need to comment on the characteristic impedance of a microstrip line, and we are given 4 options. The answer given is (B).
My Approach for the problem is as follows:
Since the width of the microstrip line is smaller than the grounded plate, there will be some leakage of the Electric field to the air from the microstrip line, i.e. not all the field lines are bounded between the dielectric material and the microstrip line.
Therefore the effective permittivity (\$ \epsilon_{eff} \$) of the transmission line will be less than \$ \epsilon_{r} \$ [dielectric] but more than \$ \epsilon_0 \$ [air].
The characteristic impedance of the lossless line is given by \$ Z_0 \$ = \$ \sqrt{\frac{L}{C}} \$, where L is the inductance per unit length and C is the capacitance per unit length.
Also, capacitance \$C = \frac{\epsilon_{eff}\,A}{t}\$ (parallel-plate form, with plate area \$A\$ and separation \$t\$).
Hence the characteristic impedance will be more than that of the fully dielectric-filled line, i.e. my answer comes out as option (A).
So where did I go wrong, since the answer given is option (B)?
AI: not all the field lines are bounded between the dielectric material and the microstrip line.
This is true. However, the formula \$C=\frac{\varepsilon_0\varepsilon_R A}{t}\$ applies to the case of quasi-infinite plates. That is, the case where all the field lines are strictly vertical from one plate to the other. This formula neglects fringing fields altogether.
The field lines that "escape" the dielectric into the air are part of the fringing field, not accounted for in the parallel plate capacitance formula. Thus they (and the other fringing fields that are confined to the dielectric) can only add to the capacitance, not subtract from it. |
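The upshot can be sketched numerically: because \$1 < \epsilon_{eff} < \epsilon_r\$ and \$Z_0 \propto 1/\sqrt{\epsilon_{eff}}\$ (the dielectric changes C but not L), the microstrip's impedance lands between the air-filled and fully-filled values. The numbers below are assumptions for illustration only:

```python
import math

eps_r = 4.4        # assumed substrate relative permittivity (FR-4-like)
z0_air = 120.0     # assumed impedance of the same geometry with air dielectric

def z0(eps_eff):
    # Z0 = sqrt(L/C); filling with dielectric multiplies C by eps_eff,
    # so Z0 drops by a factor of sqrt(eps_eff).
    return z0_air / math.sqrt(eps_eff)

z0_filled = z0(eps_r)            # all field lines in the dielectric
eps_eff = (eps_r + 1) / 2        # crude first-order estimate for microstrip
z0_microstrip = z0(eps_eff)

print(z0_filled, z0_microstrip, z0_air)  # the microstrip value sits in between
```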
H: High side switch with N-FET
I know nothing about MOSFET, yet, so I'm fumbling in the dark here.
What I want to achieve is a high side switch using an N-FET. All guides I've been reading say "low side switch = NPN or N-FET", "high side = PNP or P-FET". I want to control a very small 3.3 V load (50 mA) using a microcontroller (5 V), and since a BJT has about 200 mV for Vce(sat) I thought that a FET might be better to get rid of that drop.
I setup a simulation just to test the idea, and I can see 3.3V on the source pin when the gate is high. But obviously, current cannot flow in the direction I want. Is it possible to make this work with an N-FET, or do I need to bite the bullet and buy a P-FET?
EDIT: datasheet for the FET in question. ON-characteristics for Vgs(th) says "max 1.5V". 5-3.3 = 1.7V in my case, shouldn't that be enough?
AI: So if you are just turning an LED on and off with 20mA, I know I would use an NPN transistor (2N3904 or 2N2222 type). But a MOSFET will certainly do the job just as well. The BSS138 as a low side switch will work fine for an LED. I do it all the time as do many other designers.
However, if you are enabling power to other circuitry, low side switching gets very tricky, because once you do low side switching you no longer have a single ground plane. I strongly recommend you avoid low side switching if you are trying to control power to other IC's that you are communicating with.
In your case, since you have a 5V VCC available to switch a 3.3V rail, you actually can use a high-side NMOS without any special additional circuitry. But you need to find one that will fully turn on with Vgs of 1.7V. And remember, 1.7V is the nominal voltage. If your 3.3V rail is running a little high and your 5V rail is running a little low, you can find yourself with a lot less than 1.7V left to turn on the FET.
The Vgs(th) rating of the FET is not what you should be looking at when you are using the FET as a switch. You should note what is the lowest voltage at which Rds(on) is specified. In your case, you want a low Rds(on) to be listed in the datasheet when Vgs is 1.5V or even less if possible.
Purely as an example (shopping questions are off-topic here) I did find one such FET that might work for you. It is the Diodes Inc part number DMN1019USN-13. Rds(on) is specified as 41mOhms max at Vgs=1.2V. So that should work for you.
A word of caution: for MOSFET's with such low switching thresholds, you need to make sure that you have it turned off when you want it off. Any little voltage may actually cause the channel to start conducting a bit. In your case that should be no problem because when you drive the gate to ground, Vgs will be negative. But others who may be reading this may be in a slightly different situation. |
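For scale, the on-state drop at the question's 50 mA load, using the 41 mΩ figure quoted above, is tiny compared with a BJT's roughly 200 mV Vce(sat):

```python
i_load = 0.050    # load current from the question, A
rds_on = 0.041    # 41 mohm max at Vgs = 1.2 V (figure quoted above)

v_drop = i_load * rds_on        # voltage lost across the FET
p_diss = i_load**2 * rds_on     # heat dissipated in the FET

# About 2 mV and 0.1 mW, i.e. negligible drop and no heatsinking concern.
print(f"{v_drop * 1e3:.2f} mV drop, {p_diss * 1e6:.0f} uW dissipated")
```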
H: Is interleaving used in MIMO?
Correct me if I'm wrong, this is my understanding of MIMO:
Is this how MIMO is able to increase data rate?
AI: The colouring of those data bits is figurative. It's far, far more complicated than that. That diagram is what's known as a Lie to Children. It's the sort of diagram that would be shown in a presentation to politicians when trying to get them to license spectrum to your organisation.
Does each parallel data stream carry some of the data?
Yes
Is the data stream taken apart in some way before sending out on the MIMO streams and then reassembled afterwards?
Yes
Is the mean data rate on each parallel stream lower than the mean data rate on the original and recovered streams?
Yes
Does MIMO take 1 bit at a time and multiplex them strictly between the four streams?
Absolutely not.
I fell foul of a lie to children when I was a child. My shape-sorter I used when I was pre-school had nice regular polygonal prisms to bash through holes, a square, a circle and a regular triangle. Later on in my maths lessons, I insisted that a triangle had three angles of 60°. I was convinced I was right, I'd seen it in my shape sorter. I would not believe the teachers who told me that a triangle could have different angles. I even had stand up arguments with them in class, at the age of perhaps 8. It inhibited my learning geometry.
Those one-bit-per-channel pictures are the nice easy 60° angles of a kid's shape-sorter. But we have isosceles triangles, right triangles and scalene triangles, which correspond to the several layers of message encoding, packing, forward error correction, HARQ processing, and matrixing that are not shown on this diagram, and which certainly do not work one bit at a time, as the word 'interleaving' seems to imply. These work in blocks or chunks of data. That is needed to ensure correct message transmission over a medium that isn't 4 clean channels for 4 antennae at the best of times, varies, and is unreliable. With 4 antennae you might get approaching 4 streams in a lab environment, 3 is good out in the field, and you can generally expect more than 1.
Looking at your diagram, one of the lies is that the red antenna sends a signal to the red receiver and so on. That's the same sort of level of incorrect as the input stream being taken a bit at a time to each channel. Each TX antenna sends signals to all the RX antennae. The data is matrixed onto all the transmit streams, and is deconvolved at the receive end. The logically distinct channels are created mathematically from the different views of the single channel that each antenna has. See this Q&A for a bit more detail on how the RF channel is handled to create multiple logical channels. |
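That last paragraph can be illustrated with a toy, noise-free sketch using a made-up 2×2 real-valued channel matrix (real systems use complex coefficients, many subcarriers, and channel estimates from pilot symbols):

```python
# h[i][j] is the gain from TX antenna j to RX antenna i: every transmitter
# reaches every receiver, so each RX antenna sees a mixture.
h = [[0.9, 0.3],
     [0.2, 0.8]]
x = [1.0, -1.0]    # two symbols sent simultaneously on two TX antennae

y = [h[0][0] * x[0] + h[0][1] * x[1],   # what RX antenna 0 hears
     h[1][0] * x[0] + h[1][1] * x[1]]   # what RX antenna 1 hears

# Deconvolve with the 2x2 matrix inverse (a zero-forcing receiver, no noise).
det = h[0][0] * h[1][1] - h[0][1] * h[1][0]
x_hat = [( h[1][1] * y[0] - h[0][1] * y[1]) / det,
         (-h[1][0] * y[0] + h[0][0] * y[1]) / det]

print(x_hat)   # recovers the two logical streams, [1.0, -1.0]
```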
H: Capacitive dropper power supply for LED - zener/decoupling capacitor?
Let's say you have a standard capacitive dropper power supply like the following (from Wikipedia):
I understand that these work well for driving LEDs since they act like a constant current source. I was wondering if the design could be (safely) simplified further by removing the zener and decoupling capacitor - the LEDs don't care what voltage they "see" as they're current-limited anyway, and human eyes cannot detect 120Hz flicker.
I mentioned this to a friend of mine and he pointed out that the mains is subject to voltage transients from things like lightning and inductive loads switching off, and that C2 would shunt the majority of these transients away, protecting the LEDs - that seems to make sense, I suppose C2 + the input impedance of the circuit would behave like an RC filter during a transient?
If that's the case, what purpose does the zener serve if voltage regulation isn't important, and why do so many commercial designs seem to include one? Simulating a similar circuit with a 50us/1kV spike (a la IEC 61000-4-5), the LEDs only see a small increase in voltage and current - C2 does limit the transient significantly...but if C2 were gone, is a sub-millisecond overcurrent even that bad anyway?
AI: I understand that these work well for driving LEDs since they act like a constant current source. I was wondering if the design could be (safely) simplified further by removing the zener and decoupling capacitor
There are plenty of cheap LED lights just like this. They're all unusable for anything except indicator lights because they flicker at 120Hz though.
In this design R1 is not technically a protective element; it's just for inrush limiting.
R1 should be a fusible resistor, which is cheaper in mass production than a resistor and a fuse. These resistors are designed to fail open just like a fuse.
R2 should be specified to withstand the voltage, which probably means several resistors in series.
mains is subject to voltage transients from things like lightning and inductive loads switching off, and that C2 would shunt the majority of these transients away, protecting the LEDs - that seems to make sense, I suppose C2 + the input impedance of the circuit would behave like an RC filter during a transient?
If the transient is fast enough, that depends mostly on C2's ESR. LEDs can also tolerate surprisingly high non-repetitive peak currents.
If that's the case, what purpose does the zener serve if voltage regulation isn't important
For LED lights, I don't see any purpose for it.
When a component absorbs a transient, the transient energy is converted into heat. So its ability to absorb a transient's energy comes from thermal mass of the part of the component where dissipation occurs. A tiny semiconductor junction can absorb much less energy than a bulky voltage-dependent resistor, for example. Unless the zener is much bulkier than the LEDs and rated for high current, it won't have much more transient handling capability. |
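For completeness, the current such a dropper delivers is set almost entirely by C1's reactance; a rough estimate with assumed values (the Wikipedia circuit doesn't fix them):

```python
import math

v_rms = 230.0   # mains voltage, assumed
f = 50.0        # mains frequency, Hz
c1 = 330e-9     # series dropper capacitor, assumed

xc = 1.0 / (2.0 * math.pi * f * c1)  # capacitive reactance of C1
i_rms = v_rms / xc                   # LED-string drop neglected (small vs 230 V)

print(f"Xc = {xc:.0f} ohm, I = {i_rms * 1e3:.1f} mA")
```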
H: Isn't cosine an even function?
I changed the question as I was not clear.
Is the following true?
cos(+ωt-π/2) = cos(-ωt-π/2)
No, as I tried random angles for ωt as shown below
So, yes Cosine is an even function but the whole angle should be negated and not only the phase shift part right?
AI: The answer to your new question is no.
You made a sign mistake: \$-\omega t-\pi/2\$ is not the negation of \$\omega t-\pi/2\$ (that would be \$-\omega t+\pi/2\$), so the evenness of cosine does not make your two expressions equal.
The answer to the question you originally asked, which is basically:
does cos(-wt-π/2) = cos(wt+π/2)
is yes. |
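A quick numeric check of the distinction (any nonzero angle works):

```python
import math

w_t = 0.7  # an arbitrary value of omega*t, in radians

lhs = math.cos(w_t - math.pi / 2)
only_wt_negated = math.cos(-w_t - math.pi / 2)    # only omega*t negated
whole_negated = math.cos(-(w_t - math.pi / 2))    # whole argument negated

print(lhs, only_wt_negated, whole_negated)
# lhs equals whole_negated (cosine is even), but differs from only_wt_negated
```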
H: High current solid state battery disconnect
I need to design a solid state switch to disconnect a battery (28 V) from a load that can draw a maximum of 70 A in a 40 °C environment.
An SSR like the Crydom D06D100 could handle this using a very large heatsink and I suppose it would be the same for whichever big MOSFET I find.
How do the following products achieve this objective without using such a big form factor?
https://www.modernracing.net/product/mr-solid-state-battery-disconnect/
https://perfectswitch.com/dc-solid-state-relays/uni-directional-dc-relays/
https://verticalpower.com/index.php/products/pps
AI: Probably all those devices have to be mounted on a plate or heatsink. The heat generated due to conduction loss is proportional to I²R, where R is the on-resistance of the MOSFET. For large currents it is better to parallel multiple MOSFETs that have good thermal conduction to a PCB with thick copper layers.
A special charge pump is needed for driving N-MOSFETs on the high side. An example is an ideal diode controller like the LTC4358.
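To see why paralleling helps, here is the conduction-loss arithmetic with an assumed per-device on-resistance:

```python
# Assumed figure: a modern low-voltage power MOSFET with 2 mohm Rds(on).
i_total = 70.0    # worst-case load current from the question, A
rds_on = 0.002    # on-resistance per device, ohms

def loss_per_fet(n_parallel):
    """I^2*R dissipated in each of n equal-sharing parallel MOSFETs."""
    i_each = i_total / n_parallel
    return i_each**2 * rds_on

for n in (1, 2, 4):
    print(n, loss_per_fet(n), "W per device")
```

Total loss also falls as 1/n (9.8 W, 4.9 W, 2.45 W here), since the effective on-resistance is rds_on/n; this is why a few paralleled FETs on heavy copper can avoid the big SSR-style heatsink.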
H: Signed or unsigned distance in electric potential due to point charge(s)
I am reading about the electric potential:
from wikipedia:
The electric potential (...) is the amount of work energy needed to move a unit of electric charge (a Coulomb) from a reference point to the specific point in an electric field ...
Consider then an electric potential due to a point-charge \$Q\$:
I would assume that the potential should be positive whenever our (unit)test-charge and the point-charge repel each other, and negative when they attract each other. Since our test-charge is fixed to be \$1C\$, we only need to consider the charge \$Q\$, and our electric potential will be proportional to this charge.
Let's say we consider the potential \$ \varphi_0 \$ at the origin. Whether \$Q\$ lies at \$\left(1,0,0 \right)\$ or at \$\left(-1,0,0\right)\$ shouldn't matter, since we only consider attraction or repulsion. Also, I see many formulas just using \$r\$, which to me doesn't look like a sign is involved.
Now here is where my problem starts:
Consider now two point charges \$Q_1\$ at \$\left(-1,0,0 \right)\$ and \$Q_2\$ at \$\left(1,0,0 \right)\$.
Let \$Q_1 = Q_2\$
Wikipedia also explains how to calculate the potential due to multiple point-charges, namely by just adding up the individual potentials due to one point charge.
Now if we consider again the potential \$\varphi_0\$ at the origin, we would have:
$$ \varphi_0 = \frac{1}{4\pi\epsilon_0} \left(\frac{Q}{1} + \frac{Q}{1} \right) = \frac{2Q}{4\pi\epsilon_0} $$
Thus we have double the potential.
Now this is not what I expect.
Since the point-charges lie on opposite sides of the origin, I would very much expect them to cancel each other out when calculating the potential at the origin. Thus I would assume using a signed distance (\$1\$ and \$-1\$ in this case) to be the correct approach, but then the previous logic of the sign of the potential signaling repulsion or attraction would not work anymore.
Where has my reasoning gone astray? Where am I making wrong assumptions? Where am I misreading/misunderstanding the wikipedia-explanations?
AI: I would assume that the potential should be positive whenever our (unit)test-charge and the point-charge repel each other, and negative when they attract each other.
This assumption is true for a system of two charges, one of which is the positive test charge. If there are more charges in the system, you can still apply that logic, but only by taking two charges at a time, not for the system of charges all at once; for that you need to know the superposition principle.
Using superposition (in this case), you remove all charges from the system except the test charge and one charge (Q1), and calculate the potential due to that one remaining charge. You can apply the attraction/repulsion logic here because only two charges are present.
Then repeat the process with the test charge and the other charge (Q2) of the system, and so on; calculate the potential due to each charge and add them up. The net potential is the potential due to the whole system of charges.
Thus I would assume using a signed distance (1 and −1 in this case) to be the correct approach
This is wrong! The distance (r) is always taken as positive, but the sign of the charges (Q1, Q2, etc.) matters.
Since the point-charges lie on opposite sides of the origin, I would very much expect them to cancel each other out when calculating the potential at the origin
Why would you expect the potentials to cancel each other? Potentials are scalars; they simply add, as explained above, so there is no direction to cancel.
previous logic of the sign of the potential signaling repulsion or attraction would not work anymore.
As I explained earlier, it still works if you use superposition and consider two charges at a time, but not for the system of charges all taken together.
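The superposition arithmetic is easy to check numerically. A minimal Python sketch (with an illustrative 1 nC per charge, values not taken from the question) shows that the two scalar potentials add at the origin while the two vector fields cancel there:

```python
# Potentials (scalars) add; fields (vectors) from symmetric charges cancel.
import math

K = 1 / (4 * math.pi * 8.854e-12)  # Coulomb constant, N*m^2/C^2

def potential(q, r):
    # r is a distance and is always taken as positive
    return K * q / r

def field_x(q, x_charge, x_point=0.0):
    # x-component of the E-field at x_point due to a charge at x_charge
    dx = x_point - x_charge
    return K * q * dx / abs(dx) ** 3

Q = 1e-9  # 1 nC, one charge at x = -1 m, one at x = +1 m

phi = potential(Q, 1.0) + potential(Q, 1.0)   # potentials simply add
Ex = field_x(Q, -1.0) + field_x(Q, 1.0)       # fields cancel at the origin

print(round(phi, 2))  # about 17.98 V: double the single-charge potential
print(Ex)             # 0.0: no net field, yet a non-zero potential
```

This is exactly the situation in the question: a doubled potential at the origin with zero net force there.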
H: Why does the voltage drop when I connect an 8 ohms 1 watt speaker, specifically the source to the circuit is GPIO pin output voltage?
This circuit I am using to drive a speaker is acting strangely.
Case 1: When I connect the 3.3 volts source from the Nucleo board it works fine as expected (in terms of the sound.) When I check the voltage across source it is 3.1 to 3 volts. A drop of only .2 to .3 volts.
Case 2: When I connect the GPIO output pin 3.2 volts as a source to the circuit, the speaker sounds very low. When I check the voltage at the GPIO pin it is 2.1 to 2 volts. It drops nearly 1 volt.
Why is it dropping so much voltage? Because of this, the speaker does not sound as expected.
What is the issue? Can I resolve this?
Edit: The GPIO is from a BlueNRG-1 chip.
AI: You need to do something like this:
simulate this circuit – Schematic created using CircuitLab
That delivers power to the speaker as needed, but allows you to turn on the BT66 (which seems to be an FM radio receiver) with the signal from your GPIO pin.
As has already been noted, IO pins from microprocessors aren't intended to provide power. They are control or data signals, and cannot provide high current.
The above sketch assumes that the BT66 is powered from something appropriate. You didn't show the power connections for it in your sketches, and I didn't make an effort to find a datasheet or other description of it. |
H: Is it required that a capacitor is marked X2?
I want to replace a capacitor in a power supply.
The original is marked: 250V, 0.1, WIMA MP 3. I bought one that says: ERO MKT 1822 0,1 250V. What bothers me is the safety rating X. Is the original one X2? What does MP 3 mean? Do I need to buy another one that explicitly says X2 on it? I want to stay on the safe side.
AI: If the Wima MP 3 is this one (or similar): -
And it has "X2" written on it... then for the sake of peace of mind, choose a capacitor that is also rated as "X2" or better.
I don't see "X2" in the data sheet for the MKT1822 device so, dig a little deeper and find one that you are confident about. |
H: Sophisticated order to connect the poles for a jump start of the car battery
How to connect or jump start a car battery. So in what order to connect the poles and why?
I read so much (different) explanations and I would like to have a final generally accepted/recognized answer to mark this question.
This is (also) related to human safety.
AI: Positives first (any order.)
Negative pole to battery negative pole (only on one battery.)
Negative pole to chassis of the car for the remaining battery.
Disconnect the negative pole from the chassis first when you are finished.
You connect the positives first because if you connect the negatives first (the chassis on most vehicles,) then you run a risk of shorting out the batteries if you bump the chassis with the positive cable. If the chassis aren't connected to each other, then it is much safer to connect the positive cables.
You connect the final negative to the chassis because it will spark when you connect it - that closes the circuit. A spark can set off hydrogen gas that collects around batteries when they charge or discharge.
You disconnect the negative cable from the chassis for the same reason you connected to the chassis last - it will spark, and you don't want to set off any hydrogen gas that may have accumulated around the batteries during the jump start.
The electrons don't care in the slightest in which order you make the connections, or if one connection is to the chassis.
It is "only" a matter of safety. Do it right and everyone is happy. Do it wrong, and you might slag or blow up a battery.
The above applies to the usual negative ground to chassis electrical systems used in modern cars. Older vehicles might use a positive ground, in which case you need to follow a different sequence.
Check the owner's manual before you try to jump start a car - some may say to not use a jump start at all. |
H: Start an ac simulation at f=0Hz
I would like to know why I can't start an .ac LTspice simulation at f = 0 Hz. I would like to know the DC impedance (or the equivalent resistance) of my circuit, which also contains inductors and capacitors. Any workarounds?
Thanks in advance :-)
AI: I guess I've found the answer minutes after posting the question.
Although I don't know yet why LTspice does not allow a 0 Hz .ac simulation, I know how to extract the DC impedance of my circuit (or, as others may call it, the resistance). I basically run my .ac simulation from 1 Hz up to my maximum frequency; at low frequencies the magnitude settles to a constant value, which is the equivalent resistance of my system :)
Edit:
Also, after the clarifications of @ Bimpelrekkie, the .ac simulation does not run at 0Hz because "it is a simulation of a linearized model of the circuit. The model of the circuit can change with the DC ( = 0 Hz) voltages and currents of the circuit, for example, the behavior of a transistor based amplifier can change depending on the DC biasing current. The linearized model is made for that DC current. Simulating at 0 Hz does not make sense then as that would change the model."
Hope I was clear enough for other users. |
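This low-frequency plateau is easy to demonstrate outside the simulator. A small Python sketch (made-up R and L values, not from the question) shows the magnitude of a series R-L branch's impedance settling to the plain DC resistance as the frequency approaches zero:

```python
# |Z| of a series R-L branch: R dominates at low f, the inductor at high f.
import math

R = 47.0    # ohms (illustrative value)
L = 10e-3   # henries (illustrative value)

def z_mag(f_hz):
    """Impedance magnitude of R in series with L at frequency f_hz."""
    return math.hypot(R, 2 * math.pi * f_hz * L)

print(round(z_mag(1.0), 3))    # 47.0 -> already the DC resistance at 1 Hz
print(round(z_mag(10e3), 1))   # ~630 ohms -> the inductor dominates
```

Reading the 1 Hz value off an .ac sweep therefore gives the DC resistance directly.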
H: What is this symbol on the Arduino Uno schematic?
I found the symbol:
It is available on:
https://www.arduino.cc/en/uploads/Main/arduino-uno-schematic.pdf
What is it?
AI: Probably solder pad jumpers - you should be able to apply a "blob" of solder across pins 1 and 2 to produce a short circuit hence, the one you show might give you the option of adding an external capacitor to a reset pin.
But if you really want to be sure, open the EAGLE PCB file associated with this EAGLE schematic and look and see. |
H: The most efficient way to generate -5V out of 5V3 at 30mA
In my design, I need to power up a few op-amps with positive and negative rail. Taking into account amplitude in the signal path, headroom for input common-mode voltage, output voltage swing and voltage drop for LDOs on each rail I need at least 5V and -5V. Generating 5V out of 5V3 is the easy part. Generating -5V proves more difficult. Considering load current, I cannot use voltage multiplier or switch cap approach. The only possible solution I have at the moment is to use isolated DCDC converter (“dual-rail” schematic in the picture attached). Is there any other approach I could use?
AI: It's not really complicated to generate -5V out of a positive rail; as you've noted, inverting converters are a thing!
So, a non-isolated inverting converter is the way to go – these aren't inherently any less current-capable than an isolated converter, so there's something about your presumption that it'd be better to use an isolated converter that's not quite right.
Also, "a couple of opamps" doesn't sound like a lot of load at all, so probably pretty compact (and easy!) designs would do.
You can try out a lot of vendor tools for DC/DC converter design. I like TI's webench. I just hacked in Vin= 5.0 to 5.3 V, Vout = -5 V, Iout = 0.5 A, and got this within seconds (along with a few other proposed designs): |
H: Current limiter circuit is damaging MOSFETs
I have a circuit where a charged capacitor must be discharged over a load resistance. The trigger switch here is a 10A relay so, to avoid damaging it when the load resistance is too low, I decided to add a current limiter circuit between the capacitor and the load. This is the circuit (everything between the right button and the LOAD):
As you can see, the capacitor is charged with 160V, taken from a previous step-up circuit (not showed here). The RSENSE value of \$0.1 \Omega\$ should limit the output current to 7A (\$0.7V/0.1\Omega = 7A\$). Considering the 160V input, the current limiter circuit kicks in if the load resistance is inferior to: \$160V/7A \simeq 22.8 \Omega\$
I ran the simulation and everything seemed fine. Then I assembled it for real and ran some tests:
Using a \$100\Omega\$ load (maximum output current: \$160V/100\Omega = 0.16A\$, inferior to the limit) I got the following discharge curve:
All fine here. Next test I used a \$10\Omega\$ resistor. With the limiter actuating, the output voltage should be capped to \$10\Omega \times 7A = 70V\$. I got this curve:
The limit was a bit above 50V, which implies the current is being limited to 5A, but that is something I can fine-tune later by tweaking RSENSE. I repeated that test several times with the same outcome and no problems whatsoever.
Next test, I used a \$1.2\Omega\$ load, and this happened:
As you can see, everything went fine for a moment, with the output voltage capped to 7V (\$1.2\Omega \times 5A\$), then the MOSFET shorted and the rest of the capacitor charge flowed without any limit. Further testing of the MOSFET confirmed the damage.
So, what caused this to happen? I started looking for possible culprits and developed some hypotheses:
1 - Overcurrent inside the MOSFET.
That doesn't really make sense, because the current would be limited to 7A (5A in practice) no matter what, and the MOSFETs are rated for 9A of sustained current (32A pulsed). Anyway, I tried using two MOSFETs in parallel, and one of the two always ended up damaged (probably the "weakest" one).
2 - Gate to Source overvoltage.
According to the simulations, that never exceeds 10V but I decided to add that 15V zener diode for protection, anyway. Same result as before.
3 - Drain to Source overvoltage.
Those MOSFETs are rated at 200V, so that should not be a problem but as this is a parameter that stretches out when we drop the load resistance, I decided to try the following configuration using two MOSFETs in series, so each one would receive half the voltage:
The result: now I got two MOSFETs damaged at once!
So, those are the questions I'd like to ask you guys. What is going on here? Why does it fail only when the resistance is that low? What would solve that problem?
Thanks in advance.
AI: Safe operating area. Repeat after me "look at the safe operating area graph": -
According to the data sheet SOA graph for the IRF630, it would survive less than 1 ms with 5 amps and 160 volts across it. It looks like it managed to survive 3 ms, though: -
What is going on here?
It's called operating the device below its zero temperature coefficient point. The ZTC is best shown in this graph of the IRF630: -
I've taken the liberty of adding a few bits in red. In particular, I draw your attention to the red circle: the ZTC point. At gate voltages above 6.5 volts, if the device warms a little, the drain current falls. This is inherently safe. If the gate voltage is below 6.5 volts (say 5.75 volts), then as the device warms the drain current rises from 5 amps to 6 amps pretty quickly as the junction temperature hits 150 °C, and it won't stop getting warmer. If the picture above had graphs at higher junction temperatures, you would see higher drain currents still, and now you should be able to see that this spirals out of control. It can happen in less than 1 ms.
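To put numbers on it, here is a quick Python estimate (values from the question; the small drop across RSENSE is neglected) of the dissipation in the MOSFET while it limits the current into the 1.2 Ω load:

```python
# In linear-mode current limiting, whatever voltage the load doesn't take
# is dropped across the MOSFET, at the full limited current.
V_cap = 160.0   # initial capacitor voltage, V
I_lim = 5.0     # actual limited current, A
R_load = 1.2    # load resistance, ohms

V_load = I_lim * R_load      # ~6 V ends up across the load
V_ds = V_cap - V_load        # the remaining ~154 V sits on the MOSFET
P_fet = V_ds * I_lim         # instantaneous dissipation in the FET

print(V_ds)    # 154.0 V
print(P_fet)   # 770.0 W: far outside the ~1 ms SOA limit at this V/I point
```

With the 100 Ω load, by contrast, the limiter never engages and the FET only ever sees its small on-resistance drop, which is why that test survived.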
H: Which Wave is leading?
In the figure below I think that the purple wave leads the pink by 135 degrees. The reason is that at the zero crossing the purple wave has already crossed the x-axis into the positive-y region, whereas the pink is still in the negative-y region.
Right?
AI: There is more than one right answer to this question. The purple wave could be leading the blue one by 135˚ or it could be lagging behind by 225˚. The blue curve is just a standard sine wave, y = sin(x), and the purple curve could be described as either y = sin(x - 135) or y = sin(x + 225) because the two produce the same curve, due to the fact that sine has a period of 360˚. |
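The equivalence of the two descriptions can be verified numerically; this short Python check samples both expressions across a full cycle:

```python
# sin(x - 135 deg) and sin(x + 225 deg) differ in argument by exactly
# 360 deg, one full period, so they trace the same curve.
import math

max_diff = max(
    abs(math.sin(math.radians(d - 135)) - math.sin(math.radians(d + 225)))
    for d in range(0, 360, 5)
)
print(max_diff < 1e-12)  # True: the two curves coincide at every sample
```

So "leading by 135 degrees" and "lagging by 225 degrees" are two names for the same waveform.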
H: Is connecting a generator to the line input of an audio amplifier OK?
Disclaimer: I'm a computer scientist.
I have a sine wave generator (handyscope H3-5) from TiePie but its output power is insufficient for my application. The specification is here. I need at least 60W. I would like to know if it is OK to connect the generator to the line input of an audio amplifier. I only need a signal below 20kHz.
The amplifier is an SMSL SA 98E. I only have a french data sheet. The data sheet states that the input line is 400mV-2V.
I'm not sure what it means.
Also I know that the generator is 50Ohms. Do I have to care about that ?
I plan to buy an RCA<->BNC cable. These are sold for video connections, so I think they are 75 Ohm. Would that be OK?
AI: The data sheet states that the input line is 400mV-2V.
That is probably OK; I'd assume you can easily set your output to that amplitude range.
Also I know that the generator is 50Ohms. Do I have to care about that ?
Not in this case: the input impedance of an audio amplifier is very high, so the source barely has to drive any load (i.e. it doesn't have to supply much current).
And at 20 kHz you don't have to care about characteristic impedances, etc.: your cable is much, much shorter than the wavelength of a 20 kHz electromagnetic wave.
So, this will work.
Note that your generator is absolutely overkill.
A sound card would do just as well – these are audio frequencies, after all!
Also, everything but the worst sound cards have astonishingly high fidelity, so that's often an attractive choice. You'll want to be a little careful when using a sound card that it has (or you add) good anti-imaging filtering (i.e. to suppress harmonics that arise from the discrete-time sampling). |
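The "cable is much shorter than the wavelength" point is easy to quantify (free-space propagation speed assumed; a real cable propagates somewhat slower, but the conclusion is unchanged):

```python
# Wavelength at the top of the audio band: enormous compared to any cable.
c = 3.0e8   # propagation speed, m/s (free-space approximation)
f = 20e3    # highest audio frequency of interest, Hz

wavelength = c / f
print(wavelength)  # 15000.0 m, versus a bench cable of a metre or two
```

At 15 km per wavelength, impedance matching and reflections are simply irrelevant on a short interconnect.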
H: LEDs keep dying in 12v circuit with powerful electromagnet
I have a circuit like this:
12V/2A DC input. A small blue LED with a 2.2k resistor in series. A powerful electromagnet with ~40ohm resistance in parallel. It is possible the electromagnet has more components on the inside.
When I turn the switch on the LED shines for a second or so and then turns off/dies. I don't see how that's possible with a series resistance this high.
What should I do to have the LED light up at the same time as the magnet is attractive without dying?
AI: Do you have a catch diode across the electromagnet to clamp the high-voltage pulse that it will create when you remove power? If not, this is your issue. The high-voltage pulse will be at least 2x the supply voltage, and can be much higher, reverse-biasing the LED. LEDs are not designed to stand off such a voltage, and thus the LED fails.
simulate this circuit – Schematic created using CircuitLab
Based on some of the other comments and answers, just wanted to clarify that catch diode, flyback diode, and free wheeling diode are all interchangeable terms for the same thing in this case. |
H: Controlling AC Motor with Relays
How can one control on/off and direction for a single phase reversing AC motor using relays? I have wired an AC motor correctly with 5 wires: green (ground), red (live), black (neutral), blue (coil 1), and yellow (coil 2).
If AC current is passed between [red+blue] and [black+yellow] it spins clockwise.
If AC current is passed between [red+yellow] and [black+blue] it spins counterclockwise.
I understand that speed cannot be controlled with relays and do not need that. For a DC motor, I can use two relays and diodes to achieve this functionality as outlined by this question How to use a flyback diode on a +/-12V motor
If it helps, this is the motor I am using https://www.baldor.com/catalog/L3503#tab=%22drawings%22, but I think the suggestion should be generalizable (please explain if I am wrong and the suggestion is motor specific).
AI: I am not entirely sure I parsed your question correctly, but...
The circuit below uses a DPDT (double pole, double throw) relay.
If the coil in the circuit is un-energized, the red wire will be connected to blue, and the black to the yellow.
If the coil is energized, the red will be connected to yellow, and the black to the blue.
Note that if one of your relay contacts becomes "sticky", you may create a short when the relay coil changes state, so it is best to fuse both the red and black wires, or the blue and yellow wires.
simulate this circuit – Schematic created using CircuitLab |
H: Determine voltage regulators for a circuit
I'm trying to determine if the switching voltage regulators I want to use in my circuit are suitable.
I want to use the regulators to change a 50V line into a 5V line that supports motors.
The motor has a nominal current of 0.7A. Its stall current is 1.6A.
I've read that motors draw close to their stall current when starting up, although it is only for a short time.
Should I choose my regulators based on stall current or nominal current?
Thank you for reading, any input would be appreciated.
AI: If the stall current is 1.6A, then the motor can reach it, or come close to it, during operation under heavy loads. Therefore, you need to be able to deliver that peak current. If possible, go above 1.6A, say 1.8A, which gives you an extra 12% of margin. Always choose your components so that they can withstand the critical conditions that may arise during operation; otherwise some unpleasant surprises can occur.
H: Momentary switch on-off circuit
I am trying to create a circuit to allow me to switch a project on and off with the activation of a momentary switch (a momentary push button or reed switch, for example), but I'm coming up blank and need help.
I have a few parameters:
Source voltage is 5v DC
Output load is less than 1mA - It's just driving the Enable lead on a chip (Consonance CN5711)
Output voltage should be as close to 5v as possible
The circuit must be "default on" - when power is first applied, the circuit needs to be on; actuation of the momentary switch would turn the circuit off and leave it off until the next time, and repeated actuation would cycle that (I hope that makes sense)
Must draw as little power as possible.
I have a working example of what I need in front of me but the company that made it used several unmarked components that I can't identify and is not willing to provide me that information, so I have to design my own.
Let me know if there's anything else you guys need.
Thanks
AI: The following circuit is one that I modified from Mosaic Industries.
simulate this circuit – Schematic created using CircuitLab
I used a relay in the schematic for ease of testing, but a switch should work equally well.
As per your specs, it starts in the "ON" state. The voltage is near 5V when on, and near ground otherwise. The current draw is minimal. It is designed to deliver 1mA to RL. RL is necessary if your load draws less than 1mA.
There is somewhat of a trade-off between how often one can cycle toggle the circuit, and how immune it is to switch contact bounce. I cannot give an exact figure, for how long the switch needs to be closed for proper operation, but it is somewhere in the vicinity of 50mSec. That allows a toggle rate of about 10/sec. If you want more bounce immunity, C1 can be tweaked, but if tweaked too much other components may need to be tweaked as well. |
H: Holding Current and Latching Current dilemma
I have gone through lots of resources looking up for holding current and latching current. All I can get is:
Latching current is the minimum current that is required to maintain the "on" state of the SCR.
Holding current is the minimum current below which the SCR turns off.
Considering the two-transistor model of the SCR, if I feed the gate with a very small current, this current induces a regeneration of current within the SCR. Once the anode-to-cathode current reaches the latching current through this regenerative process, the gate current can be cut off. The SCR maintains its on state.
Now, in order to turn-off the SCR, I have to bring the Anode to Cathode current below holding current.
simulate this circuit – Schematic created using CircuitLab
My questions are:
Why does the SCR need such a high latching current to keep it on? My understanding of the regenerative process makes me think even a tiny amount of current would suffice to keep the SCR on.
Why are Latching current and Holding current different? Why can't SCR just switch on once A-C current reaches Latching current and turn off if the A-C current is less than latching current? Or, with respect to holding current, why can't SCR just switch on once A-C current reaches holding current and turn off if the A-C current is less than holding current?
AI: For the model you have there the holding current would be extremely small. But it is not a perfect model.
Imagine, for example, modifying the model with a resistor between the gate and the cathode to represent the leakage current, say 1 kΩ. If the current were less than about 500 µA, there would not be enough current into the base of the lower transistor to sustain regeneration, and the SCR would turn off.
Also, the gain of the transistors (hFE) varies with the current which affects the holding current. |
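The roughly 500 µA figure in that example follows from a one-line estimate (Python; the 0.5 V base-emitter turn-on threshold is an assumed round number for where regeneration dies out):

```python
# Minimum holding current set by a gate-cathode leakage resistance in the
# two-transistor SCR model: below this, the resistor steals the lower
# transistor's base current and regeneration collapses.
v_be_on = 0.5   # V, assumed point where the NPN stops conducting usefully
r_gk = 1e3      # ohms, modelled gate-cathode leakage path

i_hold_min = v_be_on / r_gk
print(i_hold_min)  # 0.0005 A, i.e. about 500 uA
```

Real devices add leakage paths and current-dependent gain on top of this, which is why holding and latching currents differ and vary with temperature.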
H: calculate force on object between capacitor plates without knowledge of area
I am given an exercise with a very simple circuit
simulate this circuit – Schematic created using CircuitLab
I am asked to find the voltage of the voltage-source V1 such that an object with a given charge \$Q_{obj}\$ and mass \$m\$ of negligible volume lying between the plates of the capacitor C1 is hovering.
Equating a force with the gravitational acceleration and the mass of said object is no problem, I can figure that out, this is not what this question is about.
To find what force is acting on the object, I figured I want to find the electric field \$E\$ between the plates of the capacitor.
Now, in all the resources I could find, the area of the capacitor plates is relevant for finding the electric field. However, I am not given any area; just the distance between the plates \$d\$ is given. I think one usually assumes an infinite area in such a case, but:
$$ \sigma = \frac{Q_{cap}}{A} $$
and
$$ E = \frac{\sigma}{\varepsilon} $$
thus in this case there should be no electric field, and hence no force acting on my object?!
How can I find the electric field between the plates of a parallel-plate-capacitor from just the voltage \$V\$ and the distance between the plates \$d\$, without the area \$A\$?
AI: In your current thought process, it looks like you're trying to find the charge on the capacitor (You denote it generically as \$Q\$, I'll denote it \$Q_{\text{cap}}\$), which is proportional to the area of the plates. However, since we know that \$Q_{\text{cap}}/C = V\$, you'll divide by the capacitance (which is itself proportional to the area), and your final result is area-independent.
A more straightforward formulation uses:
$$V = -\int \vec{E} \cdot d\vec{l}$$
Assuming the dielectic is uniform throughout the capacitor and edge effects are negligible (i.e. the particle is far from the edges), this turns into simple multiplication/division to relate the field to the capacitor voltage. You can subsequently relate the electric field and \$Q_{\text{particle}}\$ to the electrostatic force on the particle that should hover:
$$\vec{F}_{\text{coul}} = Q_\text{particle}\vec{E}$$
and of course this should be equal in magnitude and opposite to the gravitational force on the particle. |
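Putting those two relations together with the force balance gives the requested source voltage; a compact worked derivation (same symbols as the question) is:

```latex
% Uniform field between the plates (uniform dielectric, far from edges):
E = \frac{V}{d}
% Electrostatic force on the charged object:
F_{\text{coul}} = Q_{\text{obj}} E = \frac{Q_{\text{obj}} V}{d}
% Hover condition, F_coul = m g, solved for the source voltage:
V_1 = \frac{m g \, d}{Q_{\text{obj}}}
```

Note that the plate area \$A\$ never appears: it cancels out exactly as described above.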
H: What is this setup called?
What is the T1 and T2 setup called?
I'm sorry if the question is vague because that's how it was presented in the exercise
AI: Traditionally it would be called a common-source JFET driver feeding a common-base (wide-bandwidth) NPN amplifier.
This vertical configuration is also known as a cascode.
It is "open loop" and may be improved with negative feedback for better linearity.
H: Grounding one port of SPDT analog switch
We are using a micro-controller to interface RGB matrix (HUB75 connector). Matrix row can be selected using signal pins (A-E). Some RGB matrix suppliers require to use (CLK/DIN/OE) on (A/B/C) pins to serially select/shift row. Pins (D/E) are connected to ground.
When we want to use first type, R1/R2 are soldered down, while R3/R4 are removed.
When we want to use second type, R1/R2 are removed, while R3/R4 are soldered with 0 Ohm.
This requires knowing in advance which matrix type will be used before releasing the PCB (figure 1).
We would like to use analog switches so the matrix type can be selected in the field as in figure 2.
(1) Is it right to use analog switch in order to ground the two pins if connecting second type?
(2) Will the switch resistance (~6 Ohm) have effect since both sides are connected to ground. second type.
(3) Will the switch affect signals at ~20MHz?
Thanks.
AI: It is possible to use an analog switch in this way. For digital 20 MHz signals, the switch resistance and capacitance will not have any noticeable effect.
However, analog switches are more complex than actually required, and when the D/E signals are simply grounded, you do not need to connect them to anything else. The output is either the same as the input, or disabled; you can implement this logic with a three-state buffer. There are two-channel devices like the (SN)74LVC2G125; use 2× (SN)74AHCT1G125 if the control signal uses TTL voltage levels.
H: Will this op-amp circuit work to convert differential audio into single-ended input for the bluetooth transmitter module?
I'm making a Bluetooth audio transmitter based on the FSC-BT802 module that needs to take an amplified mono 3W 5V (from -10V to +10V) differential audio signal as its Line-In. So I need a differential to single-ended converter and a line-level attenuator circuit, which will then connect to both the Left and Right MIC inputs of the Bluetooth module. I need to attenuate the 5V signal down to a 500mV-1V input and convert it to single-ended. I already tried feeding the differential audio directly into the MIC pins through the 4.7uF capacitors and input voltage attenuator resistors, but I always got noise in the output sound on the receiving Bluetooth device. A single-ended input worked much better, without noise. The whole Bluetooth audio transmitter will be powered from a 3.7V Li-Po battery (3-4.2 volts). Op-amps I found: INA134, SSM2141, ADA4807-1, TL071... which one will suit my application best? Or maybe there are better op-amps?
Am I understanding it right? Any suggestions how to improve the audio input circuit? Thanks!
FSC-BT802 module Datasheet:
FSC-BT802 module Datasheet PDF
Toy sound board speaker amplifier (my differential audio 5V 3W Mono output source):
MAX98357A
And this amplifier (audio output) working schematic:
My current bluetooth module audio input schematic:
I also tried a direct differential input, but always got white noise in the background of the sound:
I even tried using just one of the differential audio input wires, connected to both the positive Left and Right input channels, leaving the other wire unconnected. The sound quality was better with less noise, but it was a bit distorted, with a little white noise remaining:
AI: Thanks for clarifying your question... if I understand you correctly, you'd like an op-amp that runs fine on a +3V power supply. I've seen this kind of minimal rail voltage in the datasheets of the TLC271 and TLC272 (classic op-amps, single-ended output, one or two units per package) and of the THS4521 (a fully differential op-amp, and a broadband model BTW).
Note that 3V is in general a pretty low supply voltage, considering what voltage a simple constant-current source needs to operate, and how complex the internals of an op-amp can be... this requirement is non-trivial to meet.
Regarding your application, I am surprised that simple resistive dividers on the inherent differential input of the BT module did not perform well :-/ What does an op-amp bring, that makes it perform better? A lower drive impedance for the audio input of the BT module?
What kind of noise did you get? Just a higher level of white noise? Or, was it distortion of some sort? Or ingress EMI? If it was anything other than pure white noise, would you care to share a schematic of your pure passive input with resistive dividers? And, how is your diff input wired to the (external) signal source? How were signal reference grounds catered for?
Other than that, I've had instances where a chip datasheet did not contain the whole truth about its audio "ports", and I had to find out the hard way that its diff input was not really a proper diff input, op-amp style - even though the pinout did resemble an op-amp, upon a first casual look. Which would be a valid motivation to handle the diff input externally (convert to single-ended).
You have interesting toys to play with :-)
EDIT: take a look at the following picture.
I've checked the datasheet of your BT module, and it's pretty clear that the signal inputs are properly balanced. I don't see a reason to force them into single-ended mode. Perhaps you just never got the balanced divider just right in the first place.
Low input impedance (differential and against the local ref.gnd) is generally a benefit in terms of noise in the input stage - though if the inputs are some CMOS/JFET, the benefit won't be very significant. I understand that the "microphone" input can be configured for a pretty low gain (-3 dB ?) and effectively turn into a line-level input. I'd expect the chip's own input noise to be minuscule with the input gain dialed all the way down.
If you're after a good bass response, try increasing the coupling capacitors in the signal path. The lower the resistance of the attenuator/divider, the larger the capacitors should be; consider the cut-off frequency of the resulting RC high-pass filter. This applies especially to the coupling capacitors at the board's input; it is less likely a problem with the coupling caps at the codec chip's input (the internal biasing resistance will likely be relatively high).
If your signal source actually has PWM output, you're possibly in for some "hard cheese" :-) A dual-trace oscilloscope would be invaluable to weed this out.
I still believe that the best op-amp in this case is no op-amp :-) especially if you're constrained by the crippling low supply voltage (down to 3V off a single LiIon cell).
Hmm. Unless the balanced input in the codec chip has really poor common mode rejection ratio, and your signal source has a nasty common mode AC component to it - in that case a fast op-amp might help, provided that you can steer in the pretty narrow corridor of valid operation between the supply rails. So if you end up in this configuration, consider attenuating the input signal a little more and increasing the gain in your codec a bit after your discrete op-amp.
All other conditions equal, an op-amp will exhibit relatively lower slew rate (and CMRR) towards the lower end of the permitted supply voltage range :-(
EDIT: In response to your question in the comments, "Also I saw some op-amps with pairs of equal resistors: 1k, 12k, 10k, 25k... which should I use? Or any will work?"
Symmetrical op-amps, such as the THS4521, have two feedback dividers in the basic topology (four resistors total). See also this nice appnote - feel free to skip all the humiliating math, look at the basic topology pictures (the first one on page 6 I guess). The THS4521 also allows you to set the desired "common mode output center potential", using a dedicated pin, independent of the inputs and outputs and feedback - the only rule is, that the center potential should be someplace roughly half-way between the power supply rails, and in the center of your codec chip's input range (individual range per input pin).
Actually a typical op-amp-based "differential amplifier" topology (for balanced to single-ended conversion) also contains two dividers and four resistors total, right? :-) Just like you have demonstrated in your first schematic here.
As for "what resistor values work best": for audio, typical values of feedback resistors in op-amp circuits are a couple dozen kiloohms. For BJT-based op-amp inputs, you'd better aim for low kiloohms of "circuit input resistance", if your signal source can drive that - it will suppress the inherent noise of any BJT inputs somewhat. Your AC coupling capacitors will get correspondingly bigger to keep the bass cut-off frequency low, if that's an issue. $$ f_c = \frac{1}{2 \cdot \pi \cdot R \cdot C} $$
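For example, solving the formula above for the coupling capacitor - a minimal sketch, where the 1 kΩ divider leg and the 20 Hz bass cut-off are just illustrative assumptions:

```python
import math

def coupling_cap(r_ohms, f_cutoff_hz):
    """Solve f_c = 1/(2*pi*R*C) for C."""
    return 1.0 / (2.0 * math.pi * r_ohms * f_cutoff_hz)

# Assumed example: 1 kOhm seen by the cap, 20 Hz bass cut-off
c = coupling_cap(1e3, 20.0)
print(f"C = {c*1e6:.1f} uF")  # -> C = 8.0 uF; pick the next standard value, e.g. 10 uF
```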
In other words, especially the op-amp's input transistors actually matter in terms of own noise, which is further affected by the signal source resistance that they see. But, this mostly matters with BJT's, having a pretty finite input impedance. With CMOS or JFET-based op-amps, going low with the impedance of your surrounding resistors won't help much. Quite the contrary, you won't hamper the input noise much if you go for high resistances (which allows you to use smaller AC coupling capacitors, may decrease power consumption etc) - but don't overstretch that too far, because the circuit may also become more sensitive to EMI ingress.
Today's op-amps have a full totem pole stage on the output (as opposed to an open collector against a pull-up resistor) i.e. they pull both up and down, and have a fairly low output impedance and a pretty good load driving capability. Some bread-and-butter BJT op-amps such as the NE5532 can allow you to drive 32-Ohm earphones directly (preferably from a low rail voltage, like +/- 5V, to keep the heat dissipation in the op-amp at bay). Note that the apparent output impedance of an op-amp is "virtually lowered" by the negative feedback loop, trying to "hammer the output voltage home" (using the op-amp's extreme open-loop gain) - but there are certainly inherent limits to the actual current source/sink capability of an op-amp output totem :-) and also limits to the totem's thermal harm capacity. Some are possibly short-circuit-tolerant (against a centered GND), some may be not.
Note that loud pink noise (rather than sharp white noise) possibly with pops / rattling / distortion mixed in, can be a symptom of RF oscillation someplace in the circuit. You cannot hear the oscillation itself, what you hear is some envelope or average in the audio band.
I wouldn't be surprised if a signal source with class D output would do something along those lines to your line-level ("microphone") input.
EDIT: so we now know exactly what your signal source is. Thanks for the datasheet of the MAX98357. Interestingly, there's a whole lot of stuff, but not a word about the PWM switching frequency, and among all the nice graphs, there's not a single oscillogram of the PWM output - probably considered the sweet secret. Well the true story to me is the chapter called "Filterless Class D Operation", and the sentence about "The device relies on the inherent inductance of the speaker coil and the natural filtering of both the speaker and the human ear to recover the audio component of the square-wave output." Well a standard speaker and the human ear might have mercy with the PWM rectangular output, but certainly not the ADC. Take a look at this image (the source page contains an excellent explanation of class D). You are feeding that rectangle into an ADC. Heh I cannot exclude the possibility that it did indeed sound better through a cheap and slow op-amp :-) But you should get a better result by dedicated filtering, even passive. I have updated my sketch:
The schematic now contains two suggestions of where to put a low-pass filter. The green version places the filter on the signal source PCB, just after the PA chip. The blue version is implemented by just adding two capacitors to the balanced ground-referenced divider just before the bluetooth ADC. Note that the "R" in this blue RC filter is not the 9k1 series resistance, but the 9k1 parallel with the 1k to GND (or so I hope). So the blue capacitors need to be calculated for combination with something just under 1k Ohms.
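A quick sanity check on the blue filter's effective resistance mentioned above - a sketch where the ~100 kHz corner frequency is my assumption (comfortably above audio, below typical class-D switching products):

```python
import math

def parallel(r1, r2):
    """Thevenin resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

r_eff = parallel(9100.0, 1000.0)    # resistance seen by the blue cap
print(f"R_eff = {r_eff:.0f} ohms")  # -> ~901 ohms, "just under 1k" as noted

# Capacitor for an assumed ~100 kHz low-pass corner:
c = 1.0 / (2.0 * math.pi * r_eff * 100e3)
print(f"C = {c*1e9:.1f} nF")        # -> ~1.8 nF
```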
If you have the possibility, you can implement both the green and blue filters, to gain a roll-off of -40 dB per decade. Or you can implement a cascaded filter in some other fashion. You should not lead the signal over a long distance wiring without filtering at the source. The raw output of the filterless class D amp is a radio frequency rectangle with pretty steep edges.
Hmm. I'm still wondering if the MAX98357 produces some half decent balanced signal, i.e. if both the outputs are switching in sync, just in opposite polarities. Because if for instance one output would always stay clamped low and the other output would PWM high, just exchanging turns upon zero crossings, the result after the filter I suggest would not necessarily be what one would expect. One look with an oscilloscope and all would be clear :-)
And in case that the MAX98357 indeed uses some counter-intuitive modulation with a strong common-mode component, again I would give another thought to your approach with an op-amp, only I'd use a THS4521, bias the output to a midpoint between the supply rails and if I understand the balanced feedback topology correctly (A = R2/R1 , because each polarity is in effect "inverting"), you can just "flip the divider" and turn the differential amp into an "active differential attenuator" :-) The attenuating divider would allow the circuit to operate cleanly with rail-to-rail input (or even beyond the power rails) and the attenuated output would have no problem to stay inside the window bounded by PSU rails. And, the output would already be truly balanced.
And in that case, I'd also certainly RC-filter the output of the MAX98357, I'd just place the capacitor differentially between the signal lines. Maybe with two smaller caps going to ground, not sure (to help the THS4521 keep pace with the slew rate of the common mode part that it would need to cancel out by principle of its operation). |
H: Could surface mount resistors cause a rise of 10°C
I'm new to electronics and designed a four-layer SMT board,
I am seeing a temperature rise of around 10°C (17°C ambient to 27°C after a few minutes) of a few components and the PCB, and I'm not sure why or if it's even a problem.
Using a thermal camera I have narrowed it to be in the area surrounding the optoisolators.
Using the probe on my multi-meter the opto's are 27°C after a few minutes.
DIR_F_L ../.. is 5 V from an Arduino 22 mA.
FLOUT ../.. are 5 V with 1k pullup 5 mA.
I believe the resistors could be a potential source of heat generation as they are 0805 1/8 W.
AI: My apologies - in my earlier comment, I used 5mA as your driving current straight from the datasheet and didn't double-check your resistor value. With a typical forward voltage for the LED of 1.2V, your 560 ohm resistor is dropping 10.8V at a current of 19.3mA. Power dissipation inside the opto is 23mW, well within limits, but your resistor is dissipating 209mW, which is closer to the 1/4W rating than is really proper. The Vishay data therefore gives a very conservative (high) temperature rise of 28C on a "normal" copper board with natural circulation. I would recommend bumping the resistor up to 1.07k to drop current to ~10mA which should be enough to pull down your output at worst case current transfer ratio and reduce the power dissipated to 107mW. |
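As a rough cross-check of those numbers - a minimal sketch, where the 12 V drive rail is my assumption, back-calculated from the 10.8 V resistor drop plus the 1.2 V LED forward voltage:

```python
def resistor_dissipation(v_supply, v_led, r_ohms):
    """Current through and power in the LED's series resistor."""
    i = (v_supply - v_led) / r_ohms
    return i, i * i * r_ohms

# Assumed 12 V drive, 1.2 V forward voltage, 560 ohm resistor (from the answer):
i, p = resistor_dissipation(12.0, 1.2, 560.0)
print(f"I = {i*1e3:.1f} mA, P = {p*1e3:.0f} mW")    # -> ~19.3 mA, ~208 mW

# With the suggested 1.07 kOhm resistor:
i2, p2 = resistor_dissipation(12.0, 1.2, 1070.0)
print(f"I = {i2*1e3:.1f} mA, P = {p2*1e3:.0f} mW")  # -> ~10.1 mA, ~109 mW
```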
H: Value of digital potentiometer when not powered
I have a circuit where, upon startup, a voltage source is selected with a current limit programmed via a resistor. However, I want a µC down the line to change this when it first powers up, to protect other circuitry. Therefore I want to use a digital potentiometer. The datasheet tells me the power-on value, but what is the resistance between the wiper and terminal A when the power is off? I could place a "startup" resistor switched by a transistor and then have the µC disconnect the resistor via the transistor, but I want to save those 2 parts if possible.
I am using this digital potentiometer: https://ww1.microchip.com/downloads/en/DeviceDoc/20001978D.pdf
This is my circuit I am about the finish up:
Edit to solve chicken and egg problems.
AI: I think the circuit has a chicken-and-egg kind of issue. The pot needs supplies to operate, but the supply also needs the pot to operate.
The digital resistor state is random when supply is less than 1.8V and it won't be within specs of the datasheet.
However if supply is below that, like 0V, the analog muxes that connect the wiper cannot turn on at all, so resistance would be quite high, as in disconnected.
And on a side note, all the pot pins must be within 0.3V of supply voltages, so with 0V supply, the protection diodes start to conduct at about 0.3V on the wiper pin and backfeed current to pot supply.
And the power switch would see the high current-limit resistance and would have quite a low current limit, so it might turn off when starting up.
H: How to make a very large lookup table in Verilog?
I have a requirement to make a verilog module that takes a Gray code integer i and returns the Gray coded integer i-1 using combinatorial logic only. When I look up examples of Gray decoding, for instance, a lot of them use sequential logic. I also need it to work on integers of size >20 bits.
The solution I came up with is to write a Python script that generates a verilog file for the appropriate bit width. I first tried using a big case statement in an always @(*) block as in this 3-bit example:
module gray_decrement(input [2:0] x, output [2:0] xm1);
always @(*) begin
case( x )
3'b001: xm1 = 3'b000;
3'b011: xm1 = 3'b001;
3'b010: xm1 = 3'b011;
3'b110: xm1 = 3'b010;
3'b111: xm1 = 3'b110;
3'b101: xm1 = 3'b111;
3'b100: xm1 = 3'b101;
default: xm1 = 0;
endcase
end
endmodule
That doesn't compile with an Error (10137): Verilog HDL Procedural Assignment error at graycode.v(1051): object "xm1" on left-hand side of assignment must have a variable data type. If I make xm1 an output reg it doesn't synthesize how I need it to. So I went back and redid it as a really big assign statement using the ? operator as in this 3-bit example:
module gray_decrement(input [2:0] x, output [2:0] xm1);
assign xm1 = x == 3'b001 ? 3'b000 :
x == 3'b011 ? 3'b001 :
x == 3'b010 ? 3'b011 :
x == 3'b110 ? 3'b010 :
x == 3'b111 ? 3'b110 :
x == 3'b101 ? 3'b111 :
x == 3'b100 ? 3'b101 :
3'd0;
endmodule
This works great and synthesizes properly, but after 10 bits (i.e. 1024 elements in my nested ? operation), Quartus errors out with parser stack overflow error.
So the question is what is the best way to extend my assign statement so that it could have ~2^20 lines in it? Is there some generate magic that I could use since the arithmetic is dirt simple? Or will I have to do some of the logic simplification in my Python preprocessor?
AI: Your first example errors because you must use a variable data type on the left hand side in a procedural block.
xm1 is the left hand side of the assignment, and so must be a variable (reg, integer, etc.), not a net (wire). Outputs default to being wire unless otherwise specified. You can fix this with an output reg [2:0] xm1 in your port declaration list.
Alternatively, given you are basically describing a ROM, you can use an initial block to set the default values for your lookup table:
localparam SIZE = 1024;
reg [2:0] rom [SIZE-1:0];
initial begin
$readmemb("rominit.txt", rom);
end
assign xm1 = rom[x];
Alternatively, given you are using Quartus, you could use a MIF file:
localparam SIZE = 1024;
(* ram_init_file = "rominit.mif" *) reg [2:0] rom [SIZE-1:0];
assign xm1 = rom[x];
Finally, and perhaps more simply, your mapping can easily be described algorithmically:
module gray_decrement #(
parameter WIDTH = 3
)(
input [WIDTH-1:0] x,
output [WIDTH-1:0] xm1
);
genvar i;
generate
// First convert to binary
wire [WIDTH-1:0] bin;
assign bin[WIDTH-1] = x[WIDTH-1];
for (i = 1; i < WIDTH; i = i + 1) begin : gray2bin
localparam idx = WIDTH-1-i;
assign bin[idx] = x[idx] ^ bin[idx+1];
end
// Then subtract one
wire [WIDTH-1:0] binm1;
assign binm1 = bin - {{(WIDTH-1){1'b0}}, 1'b1};
// Then convert back to gray code
assign xm1[WIDTH-1] = binm1[WIDTH-1];
for (i = 1; i < WIDTH; i = i + 1) begin : bin2gray
localparam idx = WIDTH-1-i;
assign xm1[idx] = binm1[idx] ^ binm1[idx+1];
end
endgenerate
endmodule
The above should optimise away to a lookup table which should decrement the gray code value.
(As a side note, your lookup table is wrong - the default value should be 4 not 0, otherwise a value of 0 maps to 0, and nothing maps to 4). |
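To verify the convert-decrement-convert algorithm (and the wrap-around default noted above), here is a quick software model - a sketch for checking the logic, not synthesizable code:

```python
def gray2bin(g):
    """Gray to binary: XOR-fold the value down (matches the generate loop)."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def bin2gray(b):
    """Binary to Gray: b XOR (b >> 1)."""
    return b ^ (b >> 1)

def gray_decrement(g, width=3):
    mask = (1 << width) - 1
    return bin2gray((gray2bin(g) - 1) & mask)

# Reproduce the 3-bit table, including the wrap-around case:
for g in range(8):
    print(f"{g:03b} -> {gray_decrement(g):03b}")
assert gray_decrement(0b001) == 0b000
assert gray_decrement(0b000) == 0b100   # gray(0) - 1 wraps to gray(7) = 4, as noted
```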
H: Resistance changes when keeping two ends of a metal bar constant but increasing the thickness of the middle?
In this imaginary scenario I am taking a metal block and measuring the resistance. If I were to add a piece of metal in the middle of the metal block (i.e. the metal now has a bulging middle but the two ends of the block are still the same dimensions), would this change the resistance?
From the resistivity equation, I don't think it will but I was wondering if anybody could verify this.
Many thanks.
AI: The answer to this question is not quite as simple as suggested in the comments above.
All materials have a bulk resistivity measured in ohm-meters. From Wikipedia, we have:
Electrical resistivity (also called specific electrical resistance or
volume resistivity) and its inverse, electrical conductivity, is a
fundamental property of a material that quantifies how strongly it
resists or conducts electric current. A low resistivity indicates a
material that readily allows electric current. Resistivity is commonly
represented by the Greek letter ρ (rho). The SI unit of electrical
resistivity is the ohm-meter (Ω⋅m). For example, if a 1 m × 1
m × 1 m solid cube of material has sheet contacts on two opposite
faces, and the resistance between these contacts is 1 Ω, then the
resistivity of the material is 1 Ω⋅m.
For example, copper has a volume resistivity of 1.68 × 10−8 Ω⋅m at 20 °C.
Note the units. If you divide volume resistivity (Ω⋅m) by the cross-sectional area of a conductor (m²) you get Ω/m, which is the resistance of that conductor per meter.
Which is to say - if you increase the cross sectional area of the middle part of your conductor, you will decrease the resistance by some amount.
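As a quick illustration of that, treating the bar as series segments - a sketch with assumed dimensions, and deliberately ignoring the streamline effect discussed below:

```python
RHO_COPPER = 1.68e-8  # ohm-metres at 20 C (from the quote above)

def bar_resistance(length_m, area_m2, rho=RHO_COPPER):
    """Lumped resistance of a uniform bar: R = rho * L / A."""
    return rho * length_m / area_m2

# A 1 m copper bar, 10 mm x 10 mm cross-section (assumed example):
r_plain = bar_resistance(1.0, 1e-4)
# Same bar with the middle third doubled in area, modelled as three
# series segments (a first-order approximation only):
r_bulged = (bar_resistance(1/3, 1e-4)
            + bar_resistance(1/3, 2e-4)
            + bar_resistance(1/3, 1e-4))
print(f"{r_plain*1e6:.0f} uOhm vs {r_bulged*1e6:.0f} uOhm")  # -> 168 uOhm vs 140 uOhm
```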
However this isn't the entire story.
For example, what is the resistance of the below conductor?
Another way of asking this is - when I apply a voltage across the ends of this conductor, where does the current flow?
The answer is that the current is not equally distributed across changes of cross sectional area (or, for that matter, at sharp bends in a conductor.) This is the "stream line" effect as discussed in the Copper Development Association publication Copper for Busbars, section 6.3.1 Jointing of Copper Busbars - Joint resistance - Streamline effect.
As with all decent engineering questions - the answer is ultimately "it depends." |
H: Beginner question about OP AMP as "DC differential amplifier". Does switching inputs between + and - make any difference?
I'm just starting to learn about op-amps used to control current through a MOSFET, and I have a few basic questions about the general concept of the schematic below, taken from this thread: op amp + mosfet = current source. Why do we need a feedback resistor?
I'm only asking about the general concept, not about the miller caps and 10k feedback resitors as discussed in that thread.
In the schematic above, is that what's called a "DC differential amplifier"? Are there other names for this type of op amp circuit?
Would it make any difference at all if I switched the inputs around, having the control voltage VC go into the - input on the op amp, and having the current sense resistor voltage drop go to the + input? As the op amp regulates its output to ensure + and - have the same voltage, my assumption is that switching the inputs around as I described should make no difference at all. Could that be right?
Could that op amp circuit (with mosfet removed) be used to "convert" a voltage controlled boost IC into a current controlled circuit by replacing the "standard" boost IC feedback divider with the output from the op amp?
Edit: Attempting to clarify this last question. The TPS63020's output is controlled by what voltage is sees on the FB pin.
I'm asking if I can remove that R1 and R2 divider and using an op amp somewhat like in the first schematic instead:
AI: is that what's called a "DC differential amplifier"?
No.
Would it make any difference at all if I switched the inputs around
Yes. The circuit would no longer work as designed.
my assumption is that switching inputs around as I described should make no difference at all. Could that be right?
No.
Could that op amp circuit (with mosfet removed) be used to "convert" a voltage controlled boost IC into a current controlled circuit by replacing the "standard" boost IC feedback divider with the output from the op amp?
I don't understand. Could you provide a schematic? |
H: Opamp Design Considerations
Op-amps are a great device for amplification. Recently I used a TL084, which is a quad-channel op-amp.
I used three of the op-amps in the device to get gains of 1, 2, and 3 respectively. I noticed that, without any connection to the input, there was a voltage of 0.4 V at the output. Please help me understand what the problem is. Can a pull-down resistor help here? And will a series resistor at the input help? Please suggest good practices for op-amp design.
AI: Can a Pull Down Resistor help here?
It might. It depends on the type of error you're seeing. Op-amp inputs are high impedance, so they tend to pick up noise or static charge if they are not connected to a fixed potential. A pulldown resistor is one way of solving this issue.
and will a series resistor at the input help?
No. As I said, op-amp inputs are high impedance. This means that there is no current flowing into them (for an ideal op-amp). Without a current flow, a series resistor at the input will do nothing.
Keep in mind that you most likely also have to deal with non-ideal behavior of real OpAmps.
Supply Rails
Your circuit doesn't show the supply voltages of the op-amp. However, keep in mind that op-amps can only generate output voltages within their supply voltages and, depending on the type, can only accept input voltages within certain limits of the supply voltages.
If you supply your op-amp with +10 V and 0 V, the output cannot reach 0 V. There are rail-to-rail op-amps which allow you to get very near their supply rails, but even these will never quite reach the supply rails at the output.
The datasheet of the TL084 gives recommended operating conditions of $$V_{CC+} = 5\,\text{V}$$ and $$V_{CC-} = -5\,\text{V}$$
In this case an output of 0 V is perfectly valid, as it is exactly in the middle of the supply rails. Just be careful when powering these op-amps asymmetrically.
Input Offset Voltage
As you probably know, an ideal op-amp has infinite gain and will amplify any voltage difference between its positive and negative inputs. When connecting both the positive and the negative input to the same potential, the op-amp's output is 0.
This is the essential behavior, that allows you to handle an OpAmp with negative feedback as if its inputs are always on the same voltage.
This, however, only holds true for ideal op-amps. Real op-amps have something called input offset voltage, caused by asymmetries in the design and manufacturing of the op-amp. The input offset voltage effectively adds to your input voltage, so the op-amp behaves as if it were seeing an input voltage higher than the actual one. When wiring op-amp circuits at high gains, the input offset voltage is amplified as well.
The TL084 states an input offset voltage of at most ±5 mV (depending on your specific type). When wiring a non-inverting amplifier with a gain of, let's say, 100, you can get an output voltage of up to ±500 mV even though the input is connected to 0 V.
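Putting numbers on that - a sketch where the resistor values are illustrative (any pair with Rf/Rg = 99 gives a non-inverting gain of 100):

```python
def output_offset(v_os, r_f, r_g):
    """Output offset of a non-inverting amp: Vos times the noise gain (1 + Rf/Rg)."""
    return v_os * (1.0 + r_f / r_g)

# TL084 worst-case Vos = 5 mV, assumed 99k/1k feedback network for a gain of 100:
v_out = output_offset(5e-3, 99e3, 1e3)
print(f"{v_out*1e3:.0f} mV at the output")  # -> 500 mV
```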
You can read more about it in this tutorial by AD |
H: Why all voltage drop across NMOS MOSFET in ngspice
I am new to ngspice simulation, and when I run a circuit in ngspice, all the voltage is dropped across a MOSFET. This is not the case when I simulate the same circuit in LTspice.
Any help would be really appreciated!
The circuit used for simulation in LTspice and ngspice engine:
Output at node 3 in LTspice:
SPICE used for ngspice simulation:
MOSFET in ngspice
V1 1 0 100V
M1 1 2 3 0 nmod W=1 L=0.5
.model nmod NMOS level=49 Version=3.1
V6 2 3 DC 0V PULSE(0 10 2NS 2NS 2NS 50NS 100NS)
R1 3 0 100
.options TEMP = 25C
.options TNOM = 25C
.ic
.tran 0.1u 1u 0 0
.control
run
plot v(3)
.endc
.end
Ngspice output at node 3:
AI: Your circuits used between the two programs are far from equivalent. For starters, you're using totally different MOSFETs in each netlist. In LTspice, you're using a VDMOS model for the Infineon BSC030N03MS. The parameters are defined in Documents\LTspiceXVII\lib\cmp\standard.mos. I copied and pasted the relevant line below:
.model BSC030N03MS VDMOS(Rg=1.5 Vto=2.52 Rd=1.65m Rs=254u Rb=834u Kp=388.3 Lambda=0.07 Cgdmin=73p Cgdmax=1.02n A=0.6 Cgs=4.17n Cjo=3.23n M=0.3 Is=25.4p VJ=0.9 N=1.11 TT=3n ksubthres=.1 mfg=Infineon Vds=30 Ron=3.8m Qg=27n)
A couple things to unpack here. First, if you look at the model parameters the threshold voltage is the Vto and has a value of 2.52V, which you should be aware of. Next, is the issue already addressed in the question's comments. VDMOS models are for discrete power transistors, which are constructed in a way where the source and body are always tied together. The nmos 3-terminal symbol you used in LTspice always copies the source node to the body node. For integrated circuit MOSFETs, which we'll discuss below, you typically want to use the nmos4 4-terminal symbol so you can independently connect the body/bulk connection.
In your ngspice netlist, you are using a Level=49 Version=3.1 MOSFET which is a BSIM3 v3.1 model. Lots of problems here. First is that these are models for integrated circuit MOSFETs, and not discrete ones. Second is that v3.1 of BSIM3 is outdated and not recommended for use according to the ngspice manual (section 11.2.10). They recommend only using v3.3.0. Third, you set the width to 1meter and length to half a meter. That's a rather large MOSFET...probably takes up more space than my desk, which means it won't operate properly. Last, you didn't specify any parameters for this MOSFET, which means it uses all default values. You need to check out the BSIM3 manual link found within the ngspice manual to see all the defaults listed, beginning in Appendix A.2. For example, the default threshold voltage is 0.7V for a BSIM3 v3.3.0 NMOS.
The last thing to address is your pulse source, which is also different between the two circuit netlists. You didn't do us any favors by hiding your PULSE definition for V2 in the LTspice schematic, but I was able to roughly figure it out. You are on two totally different time scales. The LTspice one roughly has a period 20µs while the ngspice one has a period of 100ns. This is a big difference since MOSFETs have switching characteristics depending on how fast you can charge/discharge the gate.
For your specific scenario, I recommend you make sure to use the same exact PULSE voltage source and also copy that .model BSC030N03MS VDMOS line referenced above into your ngspice netlist since ngspice also supports VDMOS models. You shouldn't be using a BSIM3 for this kind of experiment, especially that gigantic one with all default values.
You can actually run your ngspice netlist text files within LTspice. Just open them up and hit Simulate -> Run. The only thing you need to do is remove the .control / .endc section or add a * at the beginning of each of the lines for that section since LTspice doesn't recognize them. I do this a lot when comparing outputs between both programs. |
H: Why do most logic ICs (e.g. AND gate) have more than two inputs
I imagined the simplest AND gate, in the context of a small microcontroller project operating at 3.3 V, as having two input pins and one output pin. This is not the case in reality though: I didn't find one AND gate chip that wouldn't also have VCC and ground pins.
I'm trying to understand better (coming from programming side of things) and I'm assuming this is a design choice. Two questions related to this:
Can AND gates be built without VCC and GND (powered from the input pins), and does this even make sense in an Arduino-like environment?
Why are GND and VCC usually needed?
AI: Power pins are required for the gate to have gain, which is necessary for fan-out, meaning that one input can drive an output that (reliably and quickly) drives many more inputs on other gates (which may have high capacitance loading or current requirements). That's where the simplified textbook examples of gates typically fail the real-world usefulness test.
There's also no way for a gate without a power (ground) pin to source current if no inputs are high (and vice versa for sinking with all inputs high and no ground pin).
In some cases, if the power connection is missing from a CMOS gate, it can continue to function (albeit poorly) because the protection networks act as an "OR" gate: if any input is high, the Vdd node sees that input voltage minus a diode drop. If there is a bypass capacitor, it may even continue to function when all the inputs momentarily drop low.
This has fooled more than one person troubleshooting a circuit, since they might expect the gate to behave more passively without power deliberately applied. It's generally a very bad idea to do this intentionally, though there may be some cases where it is justified (and it is guaranteed to get clucks of disapproval and close inspection from 3rd parties). One of the issues is potential latchup. There are others. |
H: How do digital function generators generate precise frequencies?
I was designing an electronic circuit that requires precise frequency control when I realized how much I don't know about frequency synthesis itself.
For example, when I take my low-end 2-channel arbitrary waveform generator, I'm able to set one of the channels to a sinewave with a frequency of 6.00000000 MHz and the other one to 5.99999999 MHz. When I multiply those two signals and filter out the higher frequency what I get is a precise sine signal with a frequency of 10 millihertz. So the period difference between the two waveforms should be the astonishing 278 attoseconds. That would require a 3.6 PHz of bandwidth just to distinguish the difference over one period! And yet my AWG is able to generate any frequency from 0 to 6MHz with 10mHz precision and with the thermal and phase drift accuracy of a crystal oscillator. (I've checked with an oscilloscope and the frequency is stable up to a 6-digit precision, can't check any further).
So it happens to be beyond my understanding how this can be accomplished (especially under $60). Using frequency division/multiplication would be unthinkable at that bandwidth and any VCO I know of has far more jitter and will eventually drift too much.
Am I missing something? Does any of you engineers know what's happening inside such cheap digital generators?
AI: That sort of generator uses DDS, or Direct Digital Synthesis
It keeps track of the phase of the required output in a register, and outputs the cosine of the phase.
To use nice round numbers, let's say you have a 10 MHz clock, and want to generate 1.000000 MHz. Each 100 ns clock cycle, your phase register gets incremented by 0.1 of a cycle. If you want to generate 1.000 000 001 MHz, then each clock you'd increment by 0.100 000 000 100 of a cycle.
Frequency resolution is cheap, you just give your phase register enough LSBs. In this case, mHz resolution with a 10 MHz clock, you'd need 10 digits if the sums were done in decimal, or as more likely at least 34 bits if they were done in binary. With binary arithmetic, sometimes the master clock will be a nice 'binary' frequency to get nice Hz resolution. With cheaper generators the designers often just throw another load of LSBs at the phase register, and have very fine resolution of a nasty binary Hz fraction, which is 'fine enough'. My low cost generator has μHz!
Only the top few MSBs, typically 10 to 16 depending on the quality (cost) of the output, get to drive the phase -> cosine converter. As a result, the phase at your output is never more accurate than a few parts per thousand. However, the output phase error is non-cumulative, and over time, the waveform will keep the correct average rate of change of phase (aka frequency) with respect to the clock, and to any other channels being generated by the same clock.
So the frequencies are precise, the phases approximately so. It's this approximation to phase that means that DDS outputs have phase noise spurious outputs. These cannot be seen on an oscilloscope, but will be visible on any modest spectrum analyser. |
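To make the mechanism concrete, here is a toy software model of the phase accumulator described above - purely illustrative, not how any particular instrument implements it:

```python
import math

def dds(f_out, f_clk, phase_bits, lut_bits, n_samples):
    """Toy DDS: phase accumulator + truncated phase -> cosine lookup."""
    # Tuning word: how far the phase accumulator advances per clock
    tw = round(f_out / f_clk * 2**phase_bits)
    acc, out = 0, []
    for _ in range(n_samples):
        # Only the top lut_bits address the phase->cosine converter
        idx = acc >> (phase_bits - lut_bits)
        out.append(math.cos(2 * math.pi * idx / 2**lut_bits))
        acc = (acc + tw) % 2**phase_bits
    return out

# 10 MHz clock and a 34-bit accumulator, as in the text:
resolution = 10e6 / 2**34
print(f"frequency step = {resolution*1e3:.3f} mHz")  # -> ~0.582 mHz
samples = dds(1e6, 10e6, 34, 12, 10)  # one full cycle of 1 MHz in 10 clocks
```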
H: How to design a balun circuit for an RF application?
I am designing balun circuit for the first time.
Baluns are mainly used to convert an unbalanced signal to a balanced one or vice versa, and here also for 50 Ω to 100 Ω impedance matching.
Can somebody tell me how to design the pi-pad attenuator circuit for the 100 ohm impedance side and the selection of the AC coupling capacitor?
In my application, I am using the balun TCM1-63AX+ for the range of signals from 10 MHz to 4000 MHz to interface to a Xilinx RFSoC XCZU28DR-2E RF fabric.
Also attached reference circuit design of the evaluation platform.
Xilinx RFSoC website link.
AI: Input impedance is 50 Ohm Output impedance is 100 Ohm
For an unbalanced 50 Ω to 100 Ω impedance matcher you can use this online calculator: -
Rin is the input impedance and RL is the load impedance.
To make that circuit balanced, you need to split R1 into two resistors of equal value: - |
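If you also need resistive padding on top of the balun's own impedance transformation (my assumption - the 1:2 balun alone already handles 50 Ω to 100 Ω), the classic minimum-loss L-pad values can be computed like this:

```python
import math

def min_loss_pad(z_high, z_low):
    """Minimum-loss resistive L-pad matching z_high to z_low (z_high > z_low).
    Returns (series R on the high side, shunt R on the low side, loss in dB)."""
    r_series = z_high * math.sqrt(1.0 - z_low / z_high)
    r_shunt = z_low / math.sqrt(1.0 - z_low / z_high)
    ratio = z_high / z_low
    loss_db = 20.0 * math.log10(math.sqrt(ratio) + math.sqrt(ratio - 1.0))
    return r_series, r_shunt, loss_db

rs, rp, loss = min_loss_pad(100.0, 50.0)
print(f"Rseries = {rs:.1f} ohm, Rshunt = {rp:.1f} ohm, loss = {loss:.2f} dB")
# -> ~70.7 ohms each, ~7.66 dB
# For the balanced 100-ohm side, split Rseries into two ~35.4-ohm halves.
```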
H: Driving lamps with the fewest number of NPN switch transistors
I'm building a small system to drive some Nixie lamps, and I'm missing some of the classic ICs that could make this design significantly simpler. I ended up with some discrete NPNs to select the lamp and another line to select the digit.
Is there some other, significantly simpler way to arrange these transistors to make better use of them and achieve the same, i.e. one 5 V open drain for the lamp and one 5 V open drain for the digit?
AI: If you control the system with a microcontroller, you can avoid Q3 and Q4 by having one control line per digit. If the µC doesn't have enough outputs you can use a shift register.
You could use Mosfet / BJT arrays like this one to have fewer components but might be more expensive.
Alternatively, you can multiplex the lamps from the high side as follow:
You can multiplex at a frequency that is not visible to the eye (few kHz), although the overall brightness will be lower, you can probably drive the nixie harder as well.
Select proper Mosfet and use proper protection, especially for the high side. You can also use BJT.
Example for high side drive using BJT: (using random components)
simulate this circuit – Schematic created using CircuitLab |
H: What is the amplitude of the output voltage?
The question with its answer is as shown below:
However, I have solved it as follows:
Am I right or not?
AI: What is the amplitude of the output voltage?
Your answer is fine but you have misinterpreted what is meant by output amplitude. Whenever we talk about a signal amplitude (with no other constraints) we talk about an RMS value hence, you calculated the RMS value correctly at 20 amps but then assumed that the peak value of 28.28 volts was required. It isn't required.
An alternative way recognizes that the 200 μF capacitor has an impedance magnitude of 50 Ω when ω = 100 radians per second. This means that the magnitude of Vout/Vin is 0.7071. Input RMS is 40 x 0.7071 therefore output amplitude is 20 volts. |
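A quick numeric check of the figures above. The 50 Ω series resistance is inferred from the stated 0.7071 ratio (it is what makes |Vout/Vin| equal 1/√2 when the capacitor's impedance magnitude is also 50 Ω):

```python
import math

w = 100        # rad/s
C = 200e-6     # F
Zc = 1 / (w * C)                     # capacitor impedance magnitude: 50 ohms
R = 50.0                             # assumed series resistance (inferred from the ratio)
H = Zc / math.sqrt(R**2 + Zc**2)     # |Vout/Vin| for the RC divider
Vin_peak = 40.0
Vout_rms = (Vin_peak / math.sqrt(2)) * H
print(Zc, H, Vout_rms)               # 50.0, 0.7071..., 20.0 V RMS
```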
H: Is it possible to change the reference clock in a cpu while it is running
I understand that PLLs have a settling time but if I were to do a hardware modification that changes Vco of a reference clock to scale based on temperature of my CPU, would I be able to do so while the CPU is running or would the system lock due to PLL variance?
If so a corollary: what is the maximum frequency I can adjust the base clock of the PLL while the output frequency can remain marginally stable? Is it the Nyquist rate or the 1/(settling time)?
My understanding of PLLs is elementary please be gentle thank you.
AI: As noted in the comment, there is no general answer to this. The answer is going to be based on your specific case. Let me, however, show you how to find what you're looking for in an example.
In this case I have taken an excerpt of a datasheet for a part that I'm familiar with, the Microchip (i.e. Atmel) ATxmega128A4U MCU. Note this page from the document at:
Microchip ATxmega128A4U Datasheet
If you look at the highlighted (in yellow) specification, it says that for this part, you cannot change the input clock period by more than 10% from one clock cycle to the next. What might happen is not specified but if you are doing it anyway, be prepared for indeterminate behavior.
I do note that the chip itself has internal selections for switching between 32MHz, 2MHz, and 32kHz clocks and there are no stated restrictions on switching these in software. So I assume that the chip designers have accommodated that somehow.
I'm going to guess that the reason for the 10% limit is that the internal PLL can only track a changing clock period so quickly, and if you go beyond 10% up or down, the PLL may lose its lock.
H: Doubts in using UCC27537 Gate driver to drive IPA60R120P7XKSA1 MOSFET
I want to use UCC27537 gate driver to drive IPA60R120P7XKSA1 MOSFET but I have some doubts.
To avoid voltage oscillations at the gate, do I need to include resistors in between MOSFET and gate-driver?
According to the datasheet, the gate driver can source up to 2.5A and sink 5A, so is it the current when no resistor is connected between the MOSFET and the gate-driver or we have to limit it to the given current?
in the datasheet of MOSFET, it is given that the MOSFET gate resistance is around 8 ohms, so should I connect external resistors?
If the resistor is required then a lot of power will be lost through this resistor, is there any special resistor for this? or recommend me any if you have any idea...
please help me, I do not want to burn gate-driver and also want no oscillations.
AI: To avoid voltage oscillations at the gate, do I need to include
resistors in between MOSFET and gate-driver?
The purpose of the resistor is to share power and to reduce overshoot (a side effect of ringing). The resistor may not be necessary, or it might be crucial. Your best bet is to try different resistor values and see what works best. Often a 0 Ohm resistor works just fine.
According to the datasheet, the gate driver can source up to 2.5A and
sink 5A, so is it the current when no resistor is connected between
the MOSFET and the gate-driver or we have to limit it to the given
current?
The gate driver is essentially a current source. It will provide that much current at whatever voltage it can.
The actual current at any time is found through analysis of the circuit. The circuit shown here is a simplified version of what is happening. It will give you a good idea of what the current is. In this example the instant the gate driver voltage goes high the current in the circuit will be 1.5A. Then the current will decay as the capacitor charges up.
simulate this circuit – Schematic created using CircuitLab
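The decay mentioned above follows the usual RC law, I(t) = (V/R)·e^(-t/RC). A sketch with plausible numbers: a 12 V drive and the MOSFET's ~8 Ω internal gate resistance give the 1.5 A initial current; the 10 nF effective gate capacitance is an assumed round number, not a datasheet figure:

```python
import math

V = 12.0      # gate drive voltage (assumed)
R = 8.0       # total gate-path resistance (assumed: MOSFET internal Rg dominates)
C = 10e-9     # effective gate capacitance (assumed)

def gate_current(t):
    """Gate charging current for the simplified series-RC model."""
    return (V / R) * math.exp(-t / (R * C))

print(gate_current(0))          # 1.5 A the instant the driver output goes high
print(gate_current(5 * R * C))  # essentially zero after ~5 time constants
```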
in the datasheet of MOSFET, it is given that the MOSFET gate
resistance is around 8 ohms, so should I connect external resistors?
Test the circuit. This largely depends on the physical layout of the circuit. Long traces add inductance and can lead to overshoot. So, if you see overshoot or ringing add a resistor.
If the resistor is required then a lot of power will be lost through
this resistor, is there any special resistor for this? or recommend me
any if you have any idea...
A resistor is a resistor; it dissipates power (P = I^2R). Reducing resistance without increasing current will result in less power dissipated.
H: Can you help me understand this Node Analyzing Method in Thevenin Norton Circuits
This is the circuit, and the node method solution is:
Where did 280 come from? why is there two Vt/60 ? Please help me
AI: Where did 280 come from? why is there two Vt/60 ?
As chomeyama said, the solution is correct. The KCL equation is incorrect. The constraint equation is correct. So just ignore the equation with which you have doubts.
Here's how to obtain the correct equation and the result. Redraw the circuit as shown below:
Apply KCL at the node of \$v_\text{T}\$:
\$ -\dfrac{v_\text{T} - (-160 \, i_\Delta)}{80} - \dfrac{v_\text{T}}{80} - \dfrac{v_\text{T}}{40} + 1 = 0 \tag*{}\$
\$ \implies -\dfrac{1}{80} v_\text{T} - \dfrac{160}{80} i_\Delta - \dfrac{1}{80} v_\text{T} - \dfrac{1}{40} v_\text{T} + 1 = 0 \tag*{}\$
\$ \implies \left( -\dfrac{1}{80} - \dfrac{1}{80} - \dfrac{1}{40} \right) v_\text{T} - 2 \, i_\Delta + 1 = 0 \tag*{}\$
\$ \implies -\dfrac{1}{20} v_\text{T} - 2 \, i_\Delta = -1 \tag 1\$
Constraint equation for the dependent source: express the controlling variable (\$i_\Delta\$) in terms of the nodal voltages. Applying Ohm's law in the resistor of \$\text{40 } \Omega\$:
\$i_\Delta = \dfrac{v_\text{T}}{40} \tag 2\$
Now we have two equations ((1) and (2)) and two unknowns (\$v_\text{T}\$ and \$i_\Delta\$), so we can solve the eqs. to find the unknowns. Substituting eq. (2) in (1) yields:
\$ -\dfrac{1}{20} v_\text{T} - 2 \left( \dfrac{v_\text{T}}{40} \right) = -1 \tag*{}\$
\$ \implies -\dfrac{1}{20} v_\text{T} - \dfrac{1}{20} v_\text{T} = -1 \tag*{}\$
\$ \implies -\dfrac{1}{10} v_\text{T} = -1 \tag*{}\$
\$ \implies v_\text{T} = 10 \tag*{}\$
The following simulation proves the previous result (please ignore the voltage source in series with the resistor of \$\text{40 } \Omega\$, it has 0 V so it doesn't affect the circuit): |
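Equations (1) and (2) can also be checked numerically; a small sketch solving them as a 2x2 linear system with Cramer's rule:

```python
# Eq (1): -(1/20)*vT - 2*i_delta = -1
# Eq (2): i_delta = vT/40, rewritten as vT - 40*i_delta = 0
a11, a12, b1 = -1/20, -2.0, -1.0
a21, a22, b2 = 1.0, -40.0, 0.0

det = a11 * a22 - a12 * a21          # = 2 + 2 = 4
vT = (b1 * a22 - a12 * b2) / det
i_delta = (a11 * b2 - b1 * a21) / det
print(vT, i_delta)                    # 10.0 V and 0.25 A
```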
H: How many wiring combinations exist for an n-pin cable/connector?
This is more a pedantic question about the possible number of wiring permutations inherent to an n-pin connector and cable. Note that for permutations the order of elements matters, whereas for combinations it does not.
I occasionally have to fabricate custom cables for esoteric devices or troubleshoot them (example cable cut or missing, service manual unavailable, etc.) On a four or nine-pin connector, this isn't a problem because the number of permutations is very low, some pins are unused or are otherwise identifiable, or the connector follows some standard. But what about a 16-pin touchpad connector, where all pins are used and none have a definite, identifiable function? How many wiring permutations exist for this connector?
AI: The answer involves the vast mathematical field of combinatorics.
If we consider the permutation formula, \$_nP_k\$ or \$P(n,k)\$, which is to say "k-permutations of n", n is the "total number of elements" and k is the "sample size."
$$ _nP_k = \frac{n!}{(n-k)!} $$
This formula is specifically for permutations, where elements are not duplicated and element order matters. For our purposes, we can say n is the total number of pins, and k is the used number of pins. So calculating for ordered permutation, with no duplication of 16/16 pins:
$$ _nP_k = \frac{16!}{(16-16)!} $$
$$ _nP_k = \frac{16!}{0!} $$
$$ _nP_k = \frac{16!}{1} $$
$$ _nP_k = 16! $$
$$ _nP_k = 16\times 15\times 14 \ldots \times 3\times 2\times 1 $$
$$ _nP_k \approx 20.9\times 10^{12} $$
The factorial of 0 is 1. So there are almost 21 trillion ways in which the wires could be re-ordered in a 16-pin cable/connector.
If order does not matter (and elements are still not repeated), the count is termed a Combination. This is also known as the binomial coefficient of \$n\$, \$\binom {n}{k}\$, read "n choose k". The combination formula is:
$$ _nC_k = \frac{n!}{(n-k)!k!} $$
This formula differs from the permutation formula by the extra \$k!\$ in the denominator, which divides out the orderings of the \$k\$ chosen elements, since re-orderings no longer count as distinct.
Ex: a 16-pin connector, all pins of which are used, but the cable only has nine wires. How many unique combinations exist?
Ans: 11,440.
If we wanted to allow duplicate elements (repetition) of a permutation, where order does matter, the formula is:
$$ n^k $$
Where \$n\$ is the range of elements to choose from, and \$k\$ is how many elements are chosen. This (permutation with repetition of \$n\$ chosen elements) is not really a permutation formula at all; it is known as an "n-tuple".
Ex: How many values are possible on a digital cable, where each digit ranges from 0-2 (tri-state), and 13 wires are used?
Ans: \$ 3^{13} \$ = 1,594,323 |
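All three counts above are easy to reproduce with Python's standard library (`math.perm` and `math.comb` require Python 3.8+):

```python
import math

# Ordered, no repetition: every re-ordering of all 16 wires = 16!
perms = math.perm(16, 16)

# Unordered, no repetition: choose which 9 of 16 pins get a wire
combs = math.comb(16, 9)

# Ordered with repetition (n-tuple): 13 tri-state wires
tuples = 3 ** 13

print(perms)    # 20922789888000  (~20.9e12)
print(combs)    # 11440
print(tuples)   # 1594323
```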
H: Current limiting circuit operation
I cannot understand how this current limiting circuit works.
Image source: UNLV Physics Department, Bill O'Donnell, Current Limiter Circuit
Here is where I get confused:
When Q2 is turned on, the voltage at the base of Q1 becomes close to 0 and Q1 stops conducting current. Therefore VR1 should be 0, which will turn off Q2 and turn on Q1. What am I missing here? If we put an emitter resistor for Q2, shouldn't that solve my issue?
AI: It balances around the voltage drop across R1.
Q2 doesn't just suddenly turn fully on. As current through the load increases, the voltage drop across R1 increases...once it gets high enough (around 0.7 V) Q2 STARTS to conduct. At low enough currents, this transistor is just barely conducting current.
Once Q2 starts conducting it starts reducing the base voltage on Q1.
It stays in equilibrium. As the current goes down, so does the R1 voltage drop, so Q2 conducts less, so Q1 lets more current through. |
H: Powering externally a 3v a 2 relay module
I'm using an ESP8266 to control a dual relay module. As good practice, and for a longer life of the MCU, I wish to power the module not through its VCC and GND pins, but through its JD-Vcc pin.
I'd appreciate an explanation of whether a 5V input into JD-Vcc is valid and won't damage the module.
AI: Using 5V as a supply to power relay coils that are rated for 3V is not valid and it might damage the module.
H: Very basic circuit question
My problem is in the VA, VB parts is the previous circuit equivalent to this (OPTION A):
Or to this (OPTION B):
AI: Va and Vb are simply voltage sources of a specified value. They’re not batteries. A simulator package will include symbols for voltage sources with settable parameters, and include versions that are 1- or 2- terminals.
I don't know what tool you're using, but some simulators (Falstad, for example) show the 2-terminal voltage source as a battery graphic. As you saw, this leads to some confusion with polarity since the parameter can be any value, positive or negative.
Further, model voltage sources are ideal (infinitely low impedance, infinite current), which we know is not the case with real batteries.
As for your drawing, Option (B) is probably less confusing as the voltage source parameter will be referenced to ground. But it would be even better if you used two 1-terminal voltage sources. |
H: How to redirect electric field lines with a copper plane
I have been testing a capacitive sensing circuit in which an electric field is projected vertically to sense a finger (a grounded conductive object). On the PCB, a tx and an rx are 2 copper structures that act as the 2 plates of a variable capacitor. The nearby grounded finger redirects some of the EF to ground and lowers the capacitance between tx and rx like so:
Now, I have been testing this same circuit with a floating copper plane (notice the difference as this plane is floating, whereas the finger acts as a virtual ground) and I have been explained that doing so will ease the flow of the electric field lines between tx an rx and increase the capacitance (again, notice the difference as the grounded finger reduces the capacitance by redirecting some field lines to ground)
I have been able to get results that match this explanation. In the image below, the sensor "raw counts", which are inversely related to capacitance, drop when I lay a floating copper layer on top of the PCB:
The theoretical explanation I was given implies that the floating copper plane pressed against the pcb increases the capacitance between rx and tx as we just seen by (acting as a mirror maybe?) compressing the electric field lines and thus sort of reducing the distance between the capacitor plates (correct me if this is wrong).
My question is: Which laws of physics explain this electric field line "reflecting" phenomenon? Why am I able to reflect and condense the electric field with a copper plane?
AI: An easier way to understand this is to realize that when a copper plate is placed above the circular conductors it's essentially creating a capacitor with capacitance between the three conductors. The middle conductor will be floating at DC, but touch sensors are AC measurements so DC doesn't really matter. The diagram below shows three capacitors, but two of them are actually the same capacitance because of the outer ring.
The nice thing is that this capacitance can be estimated easily (there is some cross capacitance that is not shown, but its effects will be minimal); all you need to know is the distance between the plates, the area of the copper, and the relative permittivity of the material (or air) in between.
\$ C = \epsilon_0\epsilon_r\frac{A}{d}\$
Since there is already a small amount of capacitance between the plates (Cm) adding a copper plate above will create more capacitance and increase the current between the Tx and Rx plates. So a better way to look at it is instead of reflecting RF, you are increasing capacitance.
I haven't looked at the datasheet to know how the sensor data relates to actual capacitance; I would guess that a higher sensor number indicates lower capacitance.
(Ctx and Crx could be modeled as a two capacitors in series which is essentially one capacitor so the values could be added together) |
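As a rough sketch of how the parallel-plate estimate above scales (the plate area, gap, and relative permittivity below are illustrative, not taken from the sensor):

```python
EPS0 = 8.854e-12   # F/m, vacuum permittivity

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Ideal parallel-plate capacitance C = eps0 * eps_r * A / d
    (fringing fields ignored, so this is a lower-bound estimate)."""
    return EPS0 * eps_r * area_m2 / gap_m

# Example: 1 cm^2 plate, 0.5 mm of FR-4-like overlay (eps_r ~ 4.4)
c = plate_capacitance(1e-4, 0.5e-3, 4.4)
print(c)   # ~7.8 pF
```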
H: Encryption library for an 8051 microcontroller
I've programmed an 8051 microcontroller to send data out through an external transmitter (just a few bytes per transmission). I'm looking to encrypt this data in my microcontroller's program.
I have a 96-bit long key that I'd like to use (ideally) which is known by the transmitter's and receiver's uC.
I’m interested in knowing if there’s any simple encryption technique/library source code in C that you know of and would recommend for this purpose.
AI: You can use simple XOR encryption. To use your 96-bit key, rotate through it: XOR each data byte with the next byte of the key, and when you reach the last of the 12 key bytes, start again from the beginning. On the receiver side, just XOR with the same key again to recover the original input. Changing the XOR byte for every data byte provides some small protection against brute-forcing: an attacker looking at multiple bytes won't see the same XOR counterpart for each one, which makes it harder to crack. XOR encryption is very lightweight but it is not strong; on an 8051 it should be quick to compute and transmit.
There may be libraries with more complex encryption, although on an 8-bit MCU they will not be particularly fast. If you are just worried about casual protection of this info, a rolling XOR key should be fine on the 8051, unless someone specifically targets your system and sets out to break the encryption: with a modern GPU and captured transmissions it would be quick to brute-force every byte (assuming a byte is recognisable once decrypted) and recover your initial 96-bit key.
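A minimal sketch of the rotating-key XOR idea (shown in Python for clarity; the real 8051 implementation would be C, but the logic is identical, and the operation is symmetric so the same routine decrypts). The key value is a made-up example:

```python
KEY = bytes.fromhex("00112233445566778899aabb")  # example 96-bit (12-byte) key

def xor_rolling(data: bytes, key: bytes = KEY) -> bytes:
    """XOR each byte with the next key byte, cycling through the key.
    Applying the function twice returns the original data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = b"sensor:42"
ct = xor_rolling(msg)
assert xor_rolling(ct) == msg   # symmetric: encrypt == decrypt
```

Note the classic caveat: a repeating XOR key is vulnerable to known-plaintext attacks, so this is obfuscation rather than real cryptography.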
H: Why did cars switch from infrared key fobs to radio-wave key fobs?
Is an RFID key fob less likely to have its signal blocked? Or is it longer range? Or can it be made smaller, or cheaper?
Also, how do infrared devices generate an infrared signal? With an infrared-frequency laser? (I-aser?) I know that microwave lasers (masers) came first, and TV remote controls appeared not too long after that....
AI: RF does not require line of sight. Works better in daylight too. It is also harder to crack as it requires more complex hardware than a $5 learning IR remote.
IR devices use IR LEDs, not lasers.
TV remote controls originally used ultrasonic acoustic transmission in 1950s, and switched the ultrasonic transducers to IR LEDs and receivers in the 1970s. |
H: Is an attenuators power rating for average power or max power?
Context:
I have a transponder that is explicitly stated to output 250W peaks and has been frying some nearby components. The transponder average output is not stated but the average input is 7W and 12W max. I need to do some tests with an expensive spectrometer that has a max input of 20dBm (-10dB) so I can figure out safe ranges and shielding solutions. I've got a proper antenna and have been looking at attenuators to add to the circuit to attenuate about 40dB. Most market attenuators are around '2W' which is where I'm confused.
Questions: Is the 2W label on attenuators max or average power input? How can I find max power input? If I use an 8W attenuator, would this suffice assuming almost all input power of my transponder goes to signal (standby mode is <1W)? How would I search for companies that provide these types of attenuators?
My next course of action is to take readings at a far distance from the transponder but it would be helpful to have some close up readings at around 1m. Any help is greatly appreciated.
AI: I would expect the power rating of an attenuator to be the average power since it is based on the heat dissipation capability of the device. However, in your case, the peak power, 250 watts, is more than 30 times the average power (7 watts). Therefore, it is important to know the duration of the peak power pulse versus the thermal time constant of the attenuator. For example, if the peak power pulse lasts for 5 seconds, and the thermal time constant of the attenuator is only 1 second, then the attenuator will heat up substantially during the pulse and will probably melt, regardless of the duty cycle of the pulse. You should definitely carefully check the data sheet of the attenuator and/or contact the manufacturer to see if your application is suitable for the device. Just Google RF attenuators to get suitable manufacturers. |
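A quick dBm budget for the numbers in the question (a sketch assuming the attenuator sits directly in line with the full transmitter power, ignoring any antenna or path loss):

```python
import math

def watts_to_dbm(p_w):
    """Convert power in watts to dBm (dB relative to 1 mW)."""
    return 10 * math.log10(p_w / 1e-3)

peak_dbm = watts_to_dbm(250)   # ~54 dBm peak
avg_dbm = watts_to_dbm(7)      # ~38.5 dBm average
atten = 40                     # dB of attenuation

print(peak_dbm - atten)        # ~14 dBm peak, under the analyzer's 20 dBm limit
print(avg_dbm - atten)         # ~-1.5 dBm average
```

This shows 40 dB is enough margin at the instrument, but the attenuator itself must still survive the full 7 W average (and 250 W peaks), which is the thermal question discussed above.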
H: Power 12v DC actuator when 120v @15 amp saw is switched on
Problem: I want to use a 12v DC actuator to close a blast gate on my dust collecting system when I turn on a 120v 15 amp saw. The actuator can only run for a few seconds until the 6" stroke is reached. If there is a way to close the actuator when the saw is turned off that would be a bonus. Otherwise I will have to have 2 sets of these controls; one to open and one to close when not needed.
I originally thought I could use a current switch to sense the current at the saw switch. I couldn't find the output info of the current switch to know where to go from there. I then thought I could use a time delay interval relay, but am not finding relays where the input is 120v @15 amp and output is 12v DC. I find them the other way though. I have looked at contactors which seem to be 120v in 120v out and I would lose the time delay function. I believe there has to be a way to do this with minimal devices. I would like to do this with off the shelf devices where I don't have to built a circuit board. Lastly, I would like to do this for $50 or less or so.
AI: In order for this to work you need to be sure that your actuator has end-stop switches that will cut the power upon reaching the end of stroke, even when power to the motor is maintained. You could use a 4PDT relay with 12VDC contacts and a 120VAC coil, and wire the contacts in an H-bridge arrangement to the actuator. You can google how to wire an H-bridge switch to get an idea how this is done. You will need a separate 12VDC power supply for the actuator. You then wire the coil to the saw so that when the saw is powered the relay is also powered. When the relay is energized the actuator will move in one direction, and when the relay is off it will move in the other direction.
If your actuator does not have end-stop switches you will need a current limiting device or motor controller, such as a TR-EM-208 card. These can be adjusted to turn off the power when the actuator reaches the end of stroke and causes a spike in motor current. You would only need a SPDT relay if you used a motor controller, as you are only switching the control signal to the motor controller.
H: Is Phase shift limited between -90 to + 90?
In electrical engineering power systems I see several times a voltage or current in instantaneous form, as follows for example:
For the current:
i(t) = I cos(ωt + 3π/4)
And the voltage as follows:
v(t) = V cos(ωt)
However, it's known that the maximum phase shift is:
+90 degrees for a pure capacitor circuit where the current leads the voltage by 90 degrees phase shift.
-90 degrees for a pure inductor circuit where the current lags the voltage by 90 degrees phase shift.
So, how would the phase shift be more, since 3π/4 = 135 degrees?
AI: For an ideal first order circuit phase shift is limited to 90 degrees.
But in general phase shift is not limited to 90 degrees. Second order circuits (some combination of two capacitors or inductors) can have up to 180 degrees of shift.
An ideal circuit with just wires driving a capacitor has 90 degrees shift between current and voltage.
But that ideal circuit doesn't really exist. In any real circuit the wires have inductance, and now you have a second order circuit.
And if you increased the detail of the model even further to account for parasitic resistance/capacitance/inductance within the real parts it gets even worse. |
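To see a second-order network exceed 90 degrees, evaluate the phase of a series R-L, shunt-C low-pass, H(jω) = 1/(1 - ω²LC + jωRC), at frequencies well above resonance (component values below are illustrative):

```python
import cmath
import math

R, L, C = 10.0, 1e-3, 1e-6   # illustrative values, f0 ~ 5 kHz

def phase_deg(w):
    """Phase of Vout/Vin for a series R-L feeding a shunt C."""
    h = 1 / complex(1 - w**2 * L * C, w * R * C)
    return math.degrees(cmath.phase(h))

w0 = 1 / math.sqrt(L * C)
print(phase_deg(0.01 * w0))   # ~0 deg  well below resonance
print(phase_deg(w0))          # -90 deg exactly at resonance
print(phase_deg(100 * w0))    # approaching -180 deg far above resonance
```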
H: If an IC has an external resistor for timing, can that be replaced with an analog voltage out from another IC?
Specifically, I want to control the sample rate of an ISP1820 with an analog out from an Arduino. I know I could drop-in a digital pot but I don't have one and that seems like needless complexity if I could remove the resistor and have the Arduino look at one of the pins on the other chip and math up what the other pin should see.
Or could I put two pins of a transistor to the daughter chip and have the Arduino control the sample rate by flipping the transistor at a certain frequency? I also have a small pile of random opamps, CMOS logic chips and 555s. I have no problem with the 1820 glitching a bit so a kludgey solution would be acceptable.
AI: That is an ISD1820 you have, not an ISP1820.
Here's the schematic of a typical device made with an ISD1820:
The simplest way to change operating frequencies would be to connect two resistors to \$R_{OSC}\$. One end of each resistor to \$R_{OSC}\$, the other to a digital IO pin on the Arduino.
To select a particular frequency, you set the pin for that resistor to output and ground. You set the other output to INPUT. That leaves the digital IO pin floating so it doesn't bother the ISD1820. Do not use INPUT_PULLUP as that will activate another resistor in the Arduino.
That will give you specific resistor values for specific sampling rates on the ISD1820. You can use more pins and resistors for more sampling rates. With multiple resistors, you can connect them in various combinations to get more frequencies.
This won't get you infinitely variable frequencies like a potentiometer would, but maybe it'll do what you need.
A sketch of what I described:
simulate this circuit – Schematic created using CircuitLab
If you set Digital9 to OUTPUT and LOW, and Digital10 to INPUT, then the ISD1820 will use the 6.4kHz sampling rate (10 second recording.)
If you set Digital10 to OUTPUT and LOW, and Digital9 to INPUT, then the ISD1820 will use the 3.2kHz sampling rate (20 second recording.)
If you really want a variable sampling rate, you might try this:
simulate this circuit
Use PWM (analogWrite) on pin D9 of the Arduino. I think the ISD1820 uses the discharge current through \$R_{OSC}\$ to set the frequency. Varying the PWM duty cycle will vary the current from \$R_{OSC}\$.
You'll probably have to play with the component values and the PWM rate to get it to work. The values are a wild guess that may be nowhere near correct - or maybe spot on. Who knows. The ISD1820 datasheets are really crummy and give zero details on \$R_{OSC}\$.
Try it out. If it works, you could post your own answer with the final values and the code you used (PWM values and so on.) |
H: Pb Free devices
We usually find in datasheets of Integrated circuit in features section the following statement:
*Pb free devices,
*Halide free device,
What is the advantage for an IC to be Pb or Halide free?
AI: Lead (Pb) and Halides can be hazardous to the environment. Since electronic devices will eventually be discarded and could contaminate the environment, many countries have passed rules to prevent their use.
The European Union has some of the strictest regulations, so if a company wants to sell to this large market, they must certify that a product does not contain the substances. |
H: Working of CAN bus
I am working with CAN bus and have a microcontroller with a built-in CAN controller (MPC5606B by NXP). I then came to know that I need a separate transceiver for each CAN controller to establish communication. My questions are:
1. How do a CAN controller and transceiver communicate with each other?
2. Is it possible to establish communication without a transceiver?
Thank you
AI: CAN transceivers are "dumb"; you don't really communicate with them but rather through them. The task of the CAN transceiver is to translate the Tx and Rx signals from the controller (logic voltage levels) into a differential signal consisting of CANH and CANL.
In this case, MPC56 are 3V3 so you will need a 3V3 tolerant transceiver. I would recommend to pick a modern one like MCP2562/MCP2562FD.
No, not on a standard CAN bus; the transceiver is required to drive the differential bus lines.
H: Can I protect areas of a MCU's flash from reprogramming/reading by arbitrary code executing in that MCU?
Consider a microcontroller with self-programmable flash, for example some ATSAM MCU.
I put a bootloader on it, which has the additional capabilities of receiving programs, ex. using USB, in whatever format I use, and storing them in a given part of the flash. It then allows execution of these programs. This is public and anyone can use that feature.
In some given part of the flash, that the bootloader won't overwrite, there's data (could be the bootloader code itself) that I really don't want anyone to be able to read.
Is it possible to somehow protect that flash area from a program loaded through the bootloader that would simply try to read it and report it however it wants, e.g. over USB? Are there specific features enabling this?
For the record I'm coming from the Arduino world and new to the actual MCU world. Thanks.
AI: You need a MCU with security features for this, standard MCUs such as the SAM you mentioned do not provide these kind of security features.
If you have a custom bootloader that allows anyone to upload and execute any code/program they wish, then on a standard MCU, even if uploading is limited to the areas you allow, nothing prevents that program from doing whatever it wants: reading the whole memory and sending it out via some interface, or erasing the bootloader or the whole memory and locking you out of it.
Though, the SAM series seem to have boot loader protection fuse bits, so it will disable erasing your custom bootloader, as there is no way to disable that via software mechanisms. |