H: Can I reduce this AC motor speed?
I spent yesterday opening this fan unit and researching...
I'm looking to reduce the speed by about 25-40%, at the lowest speed setting only.
Please have a look at the photo and let me know what options I have?
FYI the resistance values are 190, 260, 310.
I would prefer not to do the light bulb trick if possible.
Please advise.
Much appreciated
Thanks guys
AI: You can get a continuously variable AC/DC 100 W fan that is silent, very powerful and efficient. It converts AC to DC to drive a 3-phase BLDC motor with high torque at low RPM, which is basically a VFD-controlled fan all in one.
Otherwise, get a 6 uF capacitor rated for a microwave oven from a commercial electrical parts store and put it in series with the low-speed switch position. 2.5 uF might reduce the power too much if you only want a 30% reduction in power.
FWIW
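To get a feel for why the capacitor value matters, you can compute the reactance each capacitor would add in series with the winding (assuming 50 Hz mains; use 60 for 60 Hz regions):

```python
import math

F_MAINS = 50  # Hz, assumed mains frequency
for c_uf in (2.5, 6.0):
    # capacitive reactance Xc = 1 / (2*pi*f*C)
    xc = 1 / (2 * math.pi * F_MAINS * c_uf * 1e-6)
    print(f"{c_uf} uF -> {xc:.0f} ohms in series with the winding")
```

More series reactance means less voltage across the winding, so the smaller 2.5 uF cap gives the larger speed reduction.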
|
H: hFE value (a.k.a β or gain factor) for transistor, at different voltage
I've been looking into the BC337 NPN transistor, and I saw in the hFE graph on page 3 of the datasheet, that the graph was at 1 V from the collector to emitter. Is there any way to find out the hFE value for a different voltage like 5 or 10 V? I've looked for a datasheet with a different voltage but have been unsuccessful.
AI: The gain at higher VCE voltages will be similar, but not identical.
You don't need to know whether it's 1% or 10% higher, because gain can vary by 300% from transistor to transistor, and also varies with collector current and temperature. You therefore need to design your circuit to cope with a large gain variation. Compared to this large variation, VCE affects gain very little.
So given all this gain variation, why do people treat it as if it's constant? If the gain varies by only a factor of 2 while the current varies from 100 nA to 100 mA (six orders of magnitude), then for an engineer, and for most use cases, that's constant.
|
H: How does a wifi antenna work?
let's consider a common wifi antenna (used in routers) like this:
I don't understand some of its basic features:
1) Is it a dipole or monopole antenna? I see only one conductor, so I'd say it's a monopole. But, where is the ground plane? As far as I'm concerned, a monopole antenna needs a second metal object in order to radiate.
Picture taken from here.
The antenna may be supplied with a coaxial RPSMA cable: where does its external conductor go? It should go in the ground plane, but I don't see it.
2) I don't even see a metal conductor, only a plastic rod. So I'd think that the inner part is a conductive monopole. But why is there such a plastic external shield?
AI: Is it a dipole or monopole antenna? I see only one conductor, so I'd say it's a monopole. But, where is the ground plane? As far as I'm concerned, a monopole antenna needs a second metal object in order to radiate.
The first image that you've attached is a rubber-ducky monopole antenna. An open-ended transmission line, when excited/driven by an AC voltage source, does act as a radiator, just not an efficient one. When you bend the two open-ended wires to a certain length, it performs better. The oscillatory voltages and currents produce oscillatory electric and magnetic fields. When the effective bent length is a half-wavelength, it performs better still: there is maximum potential difference at the edges, and current is maximum at the center.
Coming back to the Wi-Fi antenna: consider a quarter wavelength monopole antenna. One of the open ends is grounded and the other end is a quarter wavelength long. Excitation is the same, an AC voltage source. But in this structure, the maximum potential difference at the edges is half of that in the case of a half-wavelength 'dipole'. Consequently, there is not strong enough E-field as in a dipole but this is sufficient.
The first image is of a monopole antenna and its radiation pattern is shown in the following figure. It has an omnidirectional radiation pattern, resembling a figure-eight (or dumb-bell) in cross-section. Mobile phone Wi-Fi antennas, by contrast, are low-profile patch antennas embedded on a substrate and fed internally.
I don't even see a metal conductor, only a plastic rod. So I'd think that the inner part is a conductive monopole. But why is there such a plastic external shield?
Yes, the inner part is the conductor. The outer shielding may be for protection against damage; it can be thought of as equivalent to the 'radome' used with parabolic reflector structures.
|
H: Help with piezoelectric circuit
I connected three piezoelectric sensors to a rectifier which was further connected to a capacitor (100µF). I kept hitting the sensors for some time, and measured the voltage across the capacitor using a multi-meter. It displayed the voltage to be around 6 volts. Then, I used a breadboard to connect the capacitor, a resistor (100Ω ± 5%) and an LED bulb. The bulb didn't light up even for a fraction of a second. I measured the voltage across the capacitor again and found out that it had been discharged.
Any suggestions on how I can fix this?
AI: The bulb didn't light up even for a fraction of a second.
It probably did illuminate but, it would have done so for a small fraction of a second. For a capacitor, we can say this: -
$$Q = C\cdot V$$
And, if we differentiate charge (Q), we get current and rate of change of voltage: -
$$i = C\cdot\dfrac{dv}{dt}$$
So, if the LED is taking about 30 mA\$^{note1}\$, the rate at which the voltage collapses on the discharging capacitor is this: -
$$\dfrac{dv}{dt} = \dfrac{30\text{ mA}}{100\text{ μF}} = \text{300 volts per second}$$
So, assuming the LED needs at least 2 volts to operate and, the capacitor is charged up initially to 6 volts, then the LED lights for about 13 ms (give or take a few milliseconds).
You could try making the 100 Ω resistor a lot bigger (say 1 kΩ) so that there will only be about 3 mA flowing (on average) into the LED and it should still light but, it will still only light for maybe 130 ms. Turn the lights down to see it.
\$^{note1}\$ - The resistor in series with the LED is 100 Ω hence the initial current surge will be 40 mA, but I have used 30 mA as an approximate to the average level over the short time the LED illuminates. We could argue this one way or the other but, at the end of the day, the illumination duration will still be in the region of 10 to 20 ms.
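The 13 ms figure falls straight out of the constant-current approximation above; a quick sanity check of the arithmetic:

```python
C = 100e-6              # capacitor, farads
V0, V_LED = 6.0, 2.0    # initial cap voltage, LED turn-off threshold
I = 0.030               # assumed average LED current, amps

# i = C * dv/dt  =>  dv/dt = i / C, so the time to fall from V0 to V_LED:
dv_dt = I / C                    # 300 volts per second
t_on = (V0 - V_LED) / dv_dt
print(round(t_on * 1000, 1))     # illumination time in ms, ~13 ms
```

The 30 mA is itself an approximation of the average current, as the note explains, so treat the result as an order-of-magnitude estimate.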
|
H: Can this Eaton T0-2-8211/E changeover switch between multiple sources
I have this Eaton 1|0|2 style changeover switch T0-2-8211/E. Can I use this to switch between 2 different input sources?
My intention is to use it to switch between shore and generator power on boat. It's not going to be used frequently so no automation is required. I just want to be able to switch the input source to my AC distribution panel whenever I leave the dock.
The wiring diagram that came with it looks like this
So I thought I could just wire Input 1 (shore power) to 1 & 3, Input 2 (generator) to 5 & 7, then have the output to the AC distribution panel be 2 & 6.
But when I tried this I had severe voltage drop whenever I applied any load. It works fine when measured with an AC voltmeter without any current being drawn, but as soon as some load is applied the voltage drops to ~0 and gets very unstable.
What am I doing wrong, is it my wiring, or is this switch not suited for this application? I see it sometimes mentioned as a "manual motor controller" which is not what I'm using it for, but I figured it was just a switch.
AI: Eaton's diagram is far from clear. I suspect that it should be connected as shown in Figure 1.
Figure 1. Possible connection scheme.
I'd disconnect everything and buzz out the switch at each position to confirm rather than test it with shore and generator supplies!
Seems to me ... that #1 is 1 and 3 if you count from the top left and down. That way the X in the "puzzle" fits with the drawing on the left: 1 and 3 for #1, and 5 and 7 for #2?
Figure 2. My reading.
I read it as
        1-2     3-4     5-6     7-8
Posn 1  Closed  Open    Closed  Open
Posn 0  Open    Open    Open    Open
Posn 2  Open    Closed  Open    Closed
And I did check the voltage on each position of the switch, e.g on pos #0 no voltage, on #2 no voltage (generator not running) and on #1 ~240VAC.
Yes, but the voltage collapsed when you connected a load, so that sounds to me as though you're picking up stray voltage.
Bell it out for continuity.
|
H: Are BLE attribute handles constant?
Say for example you have an embedded peripheral BLE device which has a single BLE service with some characteristics and the initialisation order is always the same (service is added, then characteristic A is added, then B and so on).
In my local testing, the handles passed to the central from the peripheral always appear to be the same. For example, the service has always been 0x000c, characteristic A 0x000d, etc.
This is the case even on disconnect/reconnect as well.
Looking at the Bluetooth documentation, I found this about attribute handles:
3.2.2 Attribute handle
An attribute handle is a 16-bit value that is assigned by each server to its own
attributes to allow a client to reference those attributes. An attribute handle
shall not be reused while an ATT bearer exists between a client and its server.
Attribute handles on any given server shall have unique, non-zero values. Attri-
butes are ordered by attribute handle.
An attribute handle of value 0x0000 is reserved for future use. An attribute han-
dle of value 0xFFFF is known as the maximum attribute handle.
Note: Attributes can be added or removed while an ATT bearer is active, how-
ever, an attribute that has been removed cannot be replaced by another attri-
bute with the same handle while an ATT bearer is active.
With this in mind, does Bluetooth allow clients/central to force the server/peripheral to re-initialise their services and characteristics?
Or is it safe to assume that once my user code initialised the services and characteristics, they will always have the same handles?
AI: With this in mind, does Bluetooth allow clients/central to force the server/peripheral to re-initialise their services and characteristics?
Yes.
You can tell by the presence of the "Service Changed" characteristic in the Generic Attribute service. This is how the peripheral tells the central when to re-run discovery.
|
H: Why does the frequency of carrier signal not matter when we impose the original signal on it?
I know we use a modulator to shift the original signal up to the carrier frequency, thus reducing the size of the antenna. As I understand it, the exact frequency of the carrier signal does not matter, but I don't understand why.
AI: The frequency of the carrier does matter. You need to pick one that's suitable for your antenna, and doesn't interfere with, or get interference from, neighbouring services.
However, at the receiver, when you demodulate the modulated carrier with respect to the specific frequency you chose at the transmitter, you remove the effect of the initial choice. You could choose to transmit with 100 MHz or 500 MHz carrier, and as long as you convert that signal down with the equivalent of a 100 MHz or 500 MHz reference, the two systems will function the same.
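This carrier-independence can be sketched numerically: modulate the same tone onto two different carriers, synchronously demodulate each with its own carrier, and compare the results. The frequencies below are arbitrary scaled-down stand-ins, not real RF values:

```python
import math

FS = 20000           # sample rate, arbitrary units
F_MSG = 5            # message (baseband) frequency
N = FS               # one "second" of signal

def am_chain(f_carrier):
    """AM-modulate a test tone onto f_carrier, then synchronously
    demodulate with the *same* carrier and a crude low-pass filter."""
    msg = [math.cos(2 * math.pi * F_MSG * n / FS) for n in range(N)]
    tx = [(1 + 0.5 * msg[n]) * math.cos(2 * math.pi * f_carrier * n / FS)
          for n in range(N)]
    # Mix back down with the same carrier frequency:
    mixed = [2 * tx[n] * math.cos(2 * math.pi * f_carrier * n / FS)
             for n in range(N)]
    # Moving-average low-pass over a whole number of carrier cycles:
    w = 4 * (FS // f_carrier)
    return [sum(mixed[i:i + w]) / w for i in range(N - w)]

a = am_chain(1000)   # think "100 MHz", scaled down
b = am_chain(5000)   # think "500 MHz", scaled down
err = max(abs(x - y) for x, y in zip(a, b))
print(err)           # small: both carriers recover the same baseband
```

Both chains recover the same 1 + 0.5*message envelope; the residual difference comes only from the crude filter, not from the carrier choice.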
|
H: How does a RCBO trip from a vapourised PCB trace?
I was replacing a power board with a new (plastic insulated) one, which resulted in a trip of the 20A 30mA RCBO on the circuit instantly upon turning on one of the board outlets. The built-in 10A breaker on the power board did not trip.
Since all loads previously worked fine on the old power board, I assumed something was wrong with the new power board and begun to investigate.
I found that inside the power board, both active and neutral is routed through a PCB, and a trace ~1mm wide on the active side of the outlet had vapourised and covered the area in a thin layer of black dust.
A 1mm wide trace is clearly too narrow to carry the rated 10A of current of this board, so it doesn't surprise me that it blew up. (The rest of the trace was 5mm wide, but the 1mm part was next to a slot. So I assume it was a misalignment during manufacturing, where the slot cut into the trace more than intended)
However, how did it trip the RCBO?
As far as I know, the RCBO should only trip instantly on either ~30mA of leakage current, or >100A of overcurrent.
So I considered the options:
Ground fault in outlet: The trace that blew up was nowhere near the ground line (~10cm of separation). I also checked the continuity around the blown-up trace (active) and ground line, and it was definitely open. Neutral to ground was also open.
Short in outlet: Area around blown up trace (active) to neutral was also open. And the 10A breaker on the board did not trip.
Fault in load: The load that was connected to it previously worked on the old power board, and continues to work in a different outlet.
So right now, I'm guessing that as the 1mm trace got vapourised, it greatly lowered the ionisation potential of the air inside the board, and formed a conductive path between active and ground (via arc or corona discharge), causing a ground fault and tripping the RCBO. After the dust settled, the conductive path disappeared.
Does that even sound possible? Is there a less outlandish explanation of why the RCBO tripped?
EDIT: I've now repaired the blown-up trace with a thick solder bridge, and now all outlets on the board work as normal. Which is more evidence that the trace vapourisation was what caused the trip
Actual sequence of events for context:
Switched off all outlets on both old and new power boards, and at the wall
Replaced the old power board with the new, and plugged everything in
Switched on the outlet at the wall, and on the board slowly one-by-one (downstream devices automatically start drawing power)
RCBO trips instantly upon switching on the last and highest downstream load (~1A continuous; unknown inrush) outlet
Leaving outlets on the on position, I try to reset the RCBO, but it continues to trip
Switched off the "offending" outlet on the power board and reset the RCBO, and it now stays on
Try switching on the "offending" outlet again, and the RCBO doesn't trip, but there appears to be no power coming from the port anymore (vapourised trace is no longer conductive now)
Moved the "offending" load to a different outlet on the board, and switch it on, and this time everything works with no problems
Disconnected everything, opened up the power board, and found the vapourised trace
Cleaned the black dust off, reverse engineered the board, and started probing around
AI: A dead short could easily cause tens to hundreds of amps to flow, which would trip the breaker pretty darn quickly, probably much faster than the unknown "fuse" in the power strip. Such a current spike could also, in theory, cause induced currents on the earth line and trip the RCD portion of the breaker.
Given that the breaker wouldn't switch back on until you switched off the offending socket, there was most likely a dead short from live to neutral on the switched side, which allowed such a high current to flow.
My advice would be to return the power strip to where you bought it from, or simply dispose of it (preferably via your municipal electronics waste stream or equivalent). I would not trust such a device to not cause more problems even if it seems to be working now.
|
H: When using an open-drain output to drive an NPN transistor, can the base resistor be replaced by a pull-up?
Assuming the value of R1 is the correct value for the base resistor for the given motor L1, is it okay to omit the additional base resistor, used in push-pull applications, for an open-collector output?
To my understanding, when the open-collector output goes low, the low impedance path will not affect the transistor negatively, as there is no/minimal potential difference between the open-collector output directly connected to the base and ground, therefore turning-off the transistor.
This is my proposed solution using an N-channel MOSFET, based on the helpful answer of @Spehro Pefhany:
I'd much appreciate feedback on this proposal. Will it work with a good margin both at 5V and 12V motor (L1) voltage and over a temperature range of 10°C to about 85°C?
These are the links to the datasheets:
N-channel MOSFET:
https://assets.nexperia.com/documents/data-sheet/2N7002BK.pdf
Temperature sensor IC with open drain thermostat:
https://www.ti.com/lit/ds/symlink/tmp101.pdf
AI: Maybe, but you need to ensure that the VOL of the open-drain output is guaranteed to be low enough that the transistor is fully off under worst-case conditions, given the value of R1 you need to properly drive the base (base current typically something like the relay coil current divided by 10 or 20).
A guaranteed VOL of 300 mV or less is usually sufficient.
Otherwise, you can use a pullup resistor and a voltage divider to the base, or maybe use a MOSFET, which could allow a higher value of R1.
For example, here are the typical characteristics of the open-drain outputs on the STM8S103F3 microcontroller datasheet (that I happen to have open at the moment):
None of the guaranteed numbers are directly useful, but combining the two it looks like 3mA would be safe with a 5V supply, so a 30-60mA relay could be driven as suggested.
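As a sketch of the base-drive sizing rule mentioned above (the coil current and forced beta here are assumed example numbers, not from any datasheet):

```python
V_DRIVE = 5.0      # logic / pull-up supply, volts (assumed)
V_BE = 0.7         # typical base-emitter drop
I_COIL = 0.060     # 60 mA relay coil, an assumed example
FORCED_BETA = 15   # drive the base at coil current / 15

i_base = I_COIL / FORCED_BETA           # required base current, 4 mA
r1 = (V_DRIVE - V_BE) / i_base          # base resistor value
print(round(r1))                        # ~1075 ohms -> use 1k standard value
```

With R1 around 1 k, a 3 mA-capable open-drain output could not sink that base current reliably, which is exactly why the divider or MOSFET alternatives are suggested.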
|
H: ATXMega ADC - What is bandgap reference and temperature?
On my ATXMega128A1U, I am using ADCA to measure an input voltage (roughly ~1.70V) and I plan to use a VREF of 2.5V. I was reading the XMEGA manual, specifically for the REFCTRL register where I would set REFSEL to 010 for AREFA as seen below:
However, I am a bit confused by what the bandgap and temperature bits are for, as I am not sure whether I need to enable them or not. I have seen some other projects set these bits, but I was wondering what exactly they do. What happens if I don't set these bits at all and use the ADC?
Edit: Link to XMEGA manual http://ww1.microchip.com/downloads/en/DeviceDoc/doc8077.pdf
AI: The bandgap reference provides an internal reference voltage of 1.1 V; you can use it by setting REFSEL to 000. You don't need that; you want to use an external reference voltage, i.e. REFSEL 010 or 011. Since you're not using the bandgap, it doesn't matter whether you enable it or not. Turning it on will consume a little power, but typically an insignificant amount.
Many microcontrollers have a temperature sensor built-in. Bit 0 enables the one in your microcontroller. You can then use the ADC with a special ADC channel to measure what that sensor says. Unless you plan to use that sensor, it doesn't matter whether you turn it on. Again, turning it on will use a little power.
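As an illustrative sketch, the register value for "external reference on AREFA, bandgap and temperature sensor both off" can be composed from the bit positions given in the REFCTRL description (REFSEL in bits 6:4, BANDGAP in bit 1, TEMPREF in bit 0; verify against your device's manual):

```python
# Bit positions per the XMEGA AU ADC.REFCTRL register description.
REFSEL_AREFA = 0b010   # external reference on AREFA pin
BANDGAP_EN = 0         # not needed when using an external reference
TEMPREF_EN = 0         # only needed to read the internal temp sensor

refctrl = (REFSEL_AREFA << 4) | (BANDGAP_EN << 1) | TEMPREF_EN
print(f"REFCTRL = 0x{refctrl:02X}")
```

In actual firmware you would of course write this value with the vendor-provided register names rather than magic numbers.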
|
H: Determine if phototransistor is in cutoff
I'm attempting to determine if the following phototransistor HOA1870-031 is in cutoff.
The HOA1870-031 we have in stock have the cathode of the IR emitter tied to the emitter of the IR detector (phototransistor).
What I want to do is detect whether the phototransistor is in cutoff (the light between the IR emitter and IR detector is blocked).
To do this I'm reading the voltage with an analog input (PA1) which is connected internally to a comparator (COMP1) in the STM32F303VC.
When the phototransistor is saturated I read a voltage of ≈230 mV after the voltage drop across the 10 kΩ resistor.
When the phototransistor is in cutoff I read a voltage of ≈2.9V (VCE ≈ VCC ≈ 3V).
I set the inverting input of COMP1 to VREFINT and the non-inverting input to PA1.
I'm also reducing some noise via the configurable hysteresis for the comparator.
I've tested it and things seem to be working as expected, when the light is blocked between the IR emitter and the IR detector the ISR I configured is fired.
Although I did go to school for EE I've been writing software for so many years that I've forgotten most of what I had learned (which really sucks).
I dug out all of my old text books which helped me come up with the circuit you see below.
My question here is, am I doing anything obviously wrong?
Is there a better way to accomplish this (detecting if the light is broken between the IR emitter and IR detector)?
AI: That all looks fine.
You have current limiting on your LED. If it's an infrared LED it will have a forward voltage drop of about 1.4 V which leaves 1.6 V across its resistor. The current will be V/R = 1.6/100 = 16 mA which should be fine.
Your 10k resistor on the transistor will limit the maximum on current to 3/10k = 0.3 mA which is also fine.
I would have suggested using hysteresis but you've addressed that already.
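The two current calculations above, as a quick check (the 1.4 V IR LED forward drop is a typical value, not taken from the HOA1870 datasheet):

```python
V_CC = 3.0        # supply, per the question (VCE ~ VCC ~ 3 V)
V_F_IR = 1.4      # typical IR LED forward voltage, assumed
R_LED = 100.0     # LED series resistor, ohms
R_COLL = 10_000.0 # phototransistor collector resistor, ohms

i_led = (V_CC - V_F_IR) / R_LED   # LED drive current, 16 mA
i_sat = V_CC / R_COLL             # max phototransistor current, 0.3 mA
print(i_led, i_sat)
```

Both values sit comfortably within typical ratings for this kind of slotted optical switch, which is why the answer calls them fine.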
|
H: In an amplifier, does the gain knob boost or attenuate the input signal?
I'm thinking specifically about guitar amplifiers, but I'm guessing this applies to all audio amplifiers.
I understand the practical function of a gain knob, but what's not clear to me is whether the gain knob is simply a potentiometer that attenuates the incoming signal (meaning that if the gain knob is all the way up, it's letting the signal through unaffected), or whether it controls some sort of boost circuit that actively increases the input signal strength before sending it to the preamp.
Or does the knob somehow directly affect the operation of the preamp making it provide more amplification or something like that?
AI: It could be either; the choice is sometimes determined by marketing. Two common configurations are shown below.
simulate this circuit – Schematic created using CircuitLab
Figure 1a: Fixed gain pre-amp with preceding attenuator. 1b: Variable gain pre-amp.
1a is simple but has the disadvantage that the noise from OA1 is constant even with the input turned to zero.
1b has the advantage that as the gain is reduced so is the noise.
(Gain in the example circuits is -R2/R1 or -R5/R4.)
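As a numeric illustration of that gain formula for the 1b arrangement (component values here are arbitrary examples, not taken from the schematic):

```python
# Fig. 1b style: gain of the inverting stage as the feedback pot (R5) varies.
R4 = 10_000                          # assumed fixed input resistor, ohms
for r5 in (1_000, 10_000, 100_000):
    gain = -r5 / R4                  # inverting-amplifier gain
    print(f"R5 = {r5:>6} ohm -> gain = {gain:+.1f}")
```

Turning the pot toward zero resistance drives the gain, and hence the stage's noise contribution, toward zero, which is the advantage of 1b noted above.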
|
H: Why does this current not match my multimeter?
Newbie to electronics here! I have an Arduino Uno with a 15 Ω resistor connected to the 3.3 V pin, on my multimeter, it gives me 3.2 volts.
If I use this calculator:
https://ohmslawcalculator.com/ohms-law-calculator
And place the values of 3.2 volts and 15 Ω resistance, it tells me the current should be approximately 213 mA.
But when I test this with my multimeter, I get a current of approximately 161 mA.
Could anyone please explain why this happening, and what I am not calculating correctly?
Thanks.
AI: The current range of your multimeter has a resistance (its "burden") of a few ohms. Add that to the 15 Ω: if it adds about 5 Ω, the circuit current will be about 160 mA. In other words, 3.2 volts / 20 Ω = 160 mA.
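Working backwards from the two measurements gives the meter's burden resistance directly (values taken from the question):

```python
V = 3.2          # measured supply voltage, volts
R_LOAD = 15.0    # the resistor, ohms
I_MEAS = 0.161   # current the meter actually reads, amps

# Total resistance implied by the measured current, minus the known load:
r_total = V / I_MEAS
r_burden = r_total - R_LOAD
print(round(r_burden, 1))   # ~4.9 ohms of meter burden resistance
```

This is why ammeters are specified with a burden voltage: the shunt inside the meter is a real series resistance that matters when the circuit resistance is this low.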
|
H: Debounce slide switches in verilog?
I just created my first FPGA project. I created a small FPGA PCB with some slide switches to input values to the FPGA. Sadly, I did not debounce them properly with hardware, as you can see in these oscilloscope images:
How can I debounce the switches in "soft"ware on the FPGA? I found some examples online, but they didn't work for me. My internal clock is 16MHz.
Edit:
I am using a TinyFPGA BX module. According to the datasheet, the inputs have Schmitt triggers. Looking at the oscilloscope images, those "steps" are weird behaviour between the slide switch and the input. For hardware debouncing, I used a 10k resistor and a 10n capacitor. The weird voltage rise/fall creates some weird FPGA behaviour. Why could that be, and how could one filter the signal so it's "just one jump"?
Edit:
Solution: my clock was not debounced. I solved it using Verilog.
AI: I doubt the issue is in your Verilog code, despite the weird behaviour you describe.
The oscilloscope images look normal.
Try this simple example:
Switches to LEDs
`timescale 1ns / 1ps
module top(
    input  [3:0] switches,
    output [3:0] leds
);
    assign leds = switches;
endmodule
Remember to add constraint file for the switches and LEDs.
If you still need to debounce switches or push buttons in Verilog, the easiest way is to sample them with a slow clock.
To get a slow clock, use a clock divider to bring your 16 MHz clock down to something much slower, on the order of a few hundred Hz to a few kHz, so that one sample period spans the whole bounce interval. Sampled that slowly, the bounces are simply never seen.
Read this Article
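An alternative to slowing the whole clock is a counter that only lets the debounced output change after the input has been stable for N consecutive samples. A behavioural model of that logic (Python here for illustration; the Verilog version is just a counter plus a comparison):

```python
class Debouncer:
    """Counter-based debouncer: the output follows the raw input only
    after it has differed from the current state for n_stable samples."""
    def __init__(self, n_stable):
        self.n = n_stable
        self.count = 0
        self.state = 0

    def clock(self, raw):
        if raw == self.state:
            self.count = 0            # input agrees: reset the counter
        else:
            self.count += 1           # input differs: count stable samples
            if self.count >= self.n:
                self.state = raw      # stable long enough: accept it
                self.count = 0
        return self.state

db = Debouncer(n_stable=3)
# bouncy press (1,0,1,0,...) settling to 1, then a clean release
samples = [1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0]
out = [db.clock(s) for s in samples]
print(out)
```

With a 16 MHz sample clock you would pick n_stable so that n_stable clock periods cover a few milliseconds of bounce.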
|
H: Can't get WS2812B LED strip working with PC power supply
I am an absolute beginner in both electronics and LEDs.
I would like to power my WS2812B LED strip (5m, 300 LEDs) with an old PC power supply.
The power supply is rated at 600 W, so more than enough power.
To trick the power supply into thinking it is connected to a motherboard, I cut the green wire from the 20-pin connector and connected it to a black wire.
After that I cut a red and a black wire and measured the voltage using a multimeter.
I got 5V so everything works as expected.
But if I connect it to the LED strip nothing happens. I expected some kind of reaction from the LEDs.
What am I missing here?
AI: The WS2812 is an IC with an internal three-colour LED driver and the LEDs inside one package. By default, when power is applied, no lights are supposed to be on. Data has to be pushed in, using a specific serial communication protocol: 24 bits in sequence for each chip, defining the brightness of each colour. No data, no lights.
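For illustration, the 24 bits per chip (green byte first, most significant bit first) can be sketched like this; turning those bits into the precisely timed pulses on the data wire is then the job of a controller library such as FastLED or Adafruit_NeoPixel on the Arduino:

```python
def ws2812_frame(colors):
    """Build the bit sequence for a chain of WS2812 LEDs.
    colors: list of (r, g, b) tuples, each component 0-255."""
    bits = []
    for r, g, b in colors:
        for byte in (g, r, b):              # WS2812 expects G, R, B order
            for i in range(7, -1, -1):      # MSB first
                bits.append((byte >> i) & 1)
    return bits

frame = ws2812_frame([(255, 0, 0)])         # one red LED
print(len(frame), frame[:8])                # 24 bits; green byte is all zeros
```

This is why a bare 5 V supply does nothing: without a data stream like this on the DIN pin, every LED stays off.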
|
H: Incremental encoder - polling or interrupts?
I took over a project including PIC18f25k80 microcontrollers and incremental encoders. Basically I just need to follow the position of the motor using the encoder, and drive a device to specific locations. I know that there are more methods to read an encoder and update the position. The motor does not turn fast at all, no high speeds are involved here.
If I were to develop the whole thing from scratch, I probably would hang one channel of the encoder to an interrupt pin on let's say port B, and in the ISR I would check for the state of the other pin. To me, this seems to be the cleanest solution, this method probably catches all position changes, correct me if I am wrong.
Another method would be to poll the two channels in the main loop. There is not much in the main loop, basically just adjusting the motor speed and direction based on the position, so at slow speeds I guess I would not miss position changes, but it bothers me that it is still possible to miss changes in the encoder channels.
A third option could be to initialize a timer interrupt, and in that interrupt routine I could check the pin states. This interrupt could occur twice as fast as a state change would occur at the highest speed, so I guess I would not miss any changes with this method either.
Now that being said, my question is as follows: which option should I choose?
As I said, I would go with the first one, the encoder channels are attached to RB4 and RB5, so I could just enable the interrupts on port B. But as far as I know, and this is where I am not sure, I can only enable the interrupt for the whole port B on a PIC18f25k80, and not for single pins. That means that all pins on port B would create an interrupt, not just the one I want. And not just the other channel of the encoder, but all of the pins on port B are attached to something, and I cannot change the layout, only the software. Do I see it right, that in this case this method is out, because all pins would cause an interrupt on port B and that would be too much overhead?
And at the end of my question: if I see the above problem correctly, which method would you choose, polling or fixed timer interrupts? We are talking about 5 rotations per second tops.
Thanks for the answers in advance!
This is the encoder in question: https://www.power-tronic.com/wp-content/uploads/2019/11/Type-Magnetic-Incremental-2019-01-08.pdf
AI: Now that being said, my question is as follows: which option should I choose?
Poll at "high" speeds where the poll has a likely chance of catching an event rather than being wasted, and interrupt at low speed. Both methods suck equally though.
A third option could be to initialize a timer interrupt, and in that interrupt routine, I could check for the pin states. This interrupt could occure twice as fast as a state change would occur at the highest speed, so I guess I would not miss any changes with this method as well.
No! No! A timer interrupt is pointless. You might as well just poll, and not have the overhead of the interrupt context switch. You would just have to make your main loop tight, or insert multiple polling points in your main loop. If you are going to the trouble of interrupting, just interrupt directly off the GPIO.
If I were to develop the whole thing from scratch, I probably would hang one channel of the encoder to an interrupt pin on let's say port B, and in the ISR I would check for the state of the other pin. To me, this seems to be the cleanest solution, this method probably catches all position changes, correct me if I am wrong.
No, use a different PIC with a QEI. Might not be too late to switch for a code and pin compatible PIC.
EDIT: I actually looked at your encoder datasheet. Your encoder is super low res. I'm used to encoders with thousands of pulses per revolution. For you, go direct GPIO interrupts, no question.
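For reference, the usual software quadrature decode (whether run from an ISR or a polling loop) is a 16-entry transition table indexed by the previous and current A/B states. A behavioural sketch (Python here, but the table translates directly to C on the PIC):

```python
# Standard quadrature transition table: index = (prev_AB << 2) | new_AB.
# +1/-1 for valid single steps, 0 for no change or an invalid (skipped) step.
DELTA = [ 0, +1, -1,  0,
         -1,  0,  0, +1,
         +1,  0,  0, -1,
          0, -1, +1,  0]

class QuadDecoder:
    def __init__(self):
        self.prev = 0
        self.position = 0

    def sample(self, a, b):
        """Feed the current A/B pin states; returns updated position."""
        ab = (a << 1) | b
        self.position += DELTA[(self.prev << 2) | ab]
        self.prev = ab
        return self.position

q = QuadDecoder()
# one full forward Gray-code cycle: 00 -> 01 -> 11 -> 10 -> 00
for a, b in [(0, 1), (1, 1), (1, 0), (0, 0)]:
    q.sample(a, b)
print(q.position)   # 4 counts per full cycle (4x decoding)
```

The zero entries for invalid transitions make this robust against the odd glitch, which matters less here given the very low resolution and speed involved.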
|
H: simple way to send addressable RGB signal wirelessly
So my electronics knowledge is pretty limited, But I'm hoping there is a simple way to do what I want. Also I'm sorry if this question is answered somewhere else as I'm probably not using the right search terms or something.
I have an addressable RGB LED strip controlled by an Arduino. It has a 4-wire connector. Basically I want the strip to be wireless. Is there a simple RF transmitter and receiver I can buy where all I need to do is connect the TX to the Arduino and the RX to the strip (and a battery)? The distance would only need to be 3 to 6 ft. Obviously I want to retain the ability for the LEDs to be individually addressable and RGB.
Would something like this work? I'm guessing I'd only need to connect the LEDs to 2 of the pins, since, of the 4 wires, one is ground and another is the 12 V input. And I'd need to provide a separate 3.3 V supply for the transceivers. But I wouldn't really know which pins to connect.
Thanks!
AI: There are various ways to achieve this. In all scenarios I can think of, you would leave the Arduino (or an Arduino, not necessarily the one you have) in control of the RGB lights.
You then transmit some data to the Arduino to tell it what colour to set the LED strip to, and to turn it on or off, or start some other procedure.
The easiest way to do this is probably with the Arduino IoT Cloud and a compatible IoT Arduino board (eg the Arduino Nano 33 IoT). Noting that the Nano IoT is a 3.3v board. Alternately you could use any other IoT board with an IoT breakout board, or my preferred option would be to wait for the new Arduino Nano RP2040 which should be perfect for your use-case.
You could also use a similar board like an ESP32, but they're a little less straightforward to use. Although not really complex, that sounds like it might be a little above the level you want at the moment, when you're already introducing new skills and technologies.
|
H: On off switch with 2xLED
I'm trying to connect an old/possibly broken switch to a light from a 12V battery (very good first question isn't it?). The switch is meant to light Green when ON mode and Red in OFF mode.
The schema (as I traced it) looks like this, but the actual part looks like the one shown at the bottom.
My first question is: is the schema correct? I went through several rounds of tracing and came up with the same result, but it looks wrong, i.e. when the switch is ON there seems to be a short circuit, which obviously can't be right. So I believe the pins should be GROUND, +VE and CONTROL, but this doesn't align with what's printed on the circuit. Which one is correct?
My second question is, how can I wire this to a 4pin relay (to connect to a car battery)? e.g. something like this?
AI: simulate this circuit – Schematic created using CircuitLab
Proof by sim
Update, for the relay request:
If you can modify the wiring to look like this, it will work for the relay.
|
H: Generic 433MHz receiver, how to derive output signal voltage
I have a 433MHz receiver modules (Model number MX-05V) The input voltage is listed as 5V but they don't list the output voltage.
I have seen this used without level shifting in a project in conjunction with an ESP32 module that requires 3.3v signals.
In researching how this was safe I found an answer that suggested that the output voltage would be input/2 but I'm unsure how this or why this is.
Unfortunately the answer didn't provide detail, it just happened to mention it in passing.
(https://electronics.stackexchange.com/a/103357/274778)
In short, my question is in two parts:
What is the signal HIGH voltage (in theory ~2.5v?) for this receiver?
Why is it that?
Below is the schematic for the part and a link to a product listing for it, note that on the listing the schematics are labelled backwards.
It's listed as a superheterodyne receiver, and what I take to be the output circuit makes use of a dual op-amp (LM358), but my understanding is too flaky to derive the output voltage from this arrangement.
Links to example of part:
https://hobbycomponents.com/wired-wireless/1054-433mhz-transmitter-receiver-modules-with-antenna
https://hobbycomponents.com/wired-wireless/615-433mhz-wireless-receiver-module-mx-05
AI: the output circuit makes use of a dual op-amp (LM358) but my understanding is too flaky to derive the output voltage from this arrangement.
LM358 datasheet page 10:-
This tells you that the maximum output voltage is Vcc − ~1.2 V = ~3.8 V with a 5 V supply. The ESP32's rated maximum I/O input voltage is 3.6 V; however, it probably has protection diodes that don't start conducting until the pin voltage goes ~0.6 V above the power supply voltage, i.e. 3.3 + 0.6 = 3.9 V.
So people who are using this combination without level shifting are just getting away with it if the power supply voltages are accurate. To be safe you should lower the output voltage by at least 0.5 V using eg. a voltage divider made from two resistors.
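As a rough sketch, the divider sizing can be checked numerically. The 2.2 k/10 k values below are illustrative picks of my own, not values from the answer or the datasheet:

```python
# Sketch: size an unloaded two-resistor divider so the receiver's
# worst-case output (~Vcc - 1.2 V = 3.8 V from the LM358) stays under
# the ESP32's 3.3 V rail. Resistor values are illustrative assumptions.

def divider_out(vin, r_top, r_bottom):
    """Output voltage of an unloaded resistive divider."""
    return vin * r_bottom / (r_top + r_bottom)

v_worst = 5.0 - 1.2   # ~3.8 V, worst-case LM358 output on a 5 V supply
vout = divider_out(v_worst, r_top=2_200, r_bottom=10_000)
print(f"divided worst-case output: {vout:.2f} V")  # ~3.11 V
```

Any similar ratio works; keep the resistor values low enough that the ESP32 input leakage doesn't shift the level noticeably.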
|
H: 2 x 90 Ah 12v deep cycle batteries in series runs 77-90w for under 20 minutes. Why?
I am trying to understand the maths here. I have 2x90Ah 12v batteries in series (24v) and they are powering my computer. The peak power is 120w while the computer is shutting down during a power loss. I get less than 20 minutes run time.
As far as I can tell, when I do the calculation of how many amps it draws I get somewhere around 1A at idle and up to 5A when it is in full draw.
Am I correct to assume that, if I am right, at a full draw of 5 A I should be getting many hours of power out of it? Or at least an hour or more? Are my calculations wrong, or is there something else going on here?
Thanks!
AI: Per comments, it sounds like you observed a 1V drop in the wiring under moderate load. The number one problem I see with inverters (A UPS is basically an inverter) is that the DC wiring is too small. As current flows through this narrow wire, voltage drop occurs. This is especially obvious during heavy loads. The inverter (or UPS) doesn't know that the voltage drop is due to wiring. It thinks the battery voltage is low and gives a warning or shuts down.
The best way to avoid this is to use adequate sized wiring. The size depends on both the length of the wire run and the diameter of the wire.
Let's go through a quick example. If you have a total of 20 feet (including all wire, both directions plus whatever wire connects the two batteries) of 12AWG wire, how much resistance is that and what voltage drop will it cause?
First, find the resistance of 12AWG wire online in a table:
http://bnoack.com/index.html?http&&&bnoack.com/data/wire-resistance.html
That table shows 1.7 Ohms for 1000 feet of AWG 12 wire.
So for 20 feet that will be 1.7 * 20 / 1000 = 0.034 or 34 mOhms.
Now let's say the current is about 6 Amps (to approximate your 120 Watt load, and allowing for efficiency of the UPS). The voltage drop will be equal to the wire resistance * current:
6 Amps * 0.034 Ohms = 0.2V
That may or may not be enough to cause a problem at 6 Amps. But for a larger load, like 20 or 30 DC Amps, it would definitely be problematic. This is why even small inverters typically have large DC wires.
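The arithmetic above can be collected into a few lines:

```python
# Recomputing the worked example: voltage drop across 20 ft of 12 AWG
# wire at 6 A, using the table figure of 1.7 ohms per 1000 ft.

ohms_per_1000ft = 1.7   # 12 AWG, from the wire-resistance table
length_ft = 20.0        # total wire length, both directions plus jumpers
current_a = 6.0         # ~120 W load, allowing for UPS efficiency

r_wire = ohms_per_1000ft * length_ft / 1000   # 0.034 ohm (34 mOhm)
v_drop = current_a * r_wire                   # ~0.2 V
print(f"{r_wire*1000:.0f} mOhm, {v_drop:.2f} V drop")
```

Re-running this with 20 or 30 A in place of 6 A shows drops approaching a volt, which is why the DC wiring size matters so much.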
Moving the batteries close to the inverter also helps a lot.
A final note: The termination of the wire (crimp, solder lug, etc) can often become a point of high resistance, especially when currents are high (10, 20, 50, 100 Amps). It is a good idea to monitor the temperature of high current connection points to make sure everything is OK. They can actually get so hot they become a fire hazard.
|
H: What does "Not recommended for new designs" mean in an ATtiny datasheet?
I am making my own circuit and I am using the ATtiny10-TS8R (I couldn't buy any other type for now), but I found note (6) in the datasheet on the part I want to use:
As noted in point (6), it says "Not recommended for new designs".
I used it already and the circuit worked very well, but I am willing to go to mass production, so, is it safe to use that one (TS8R) or may my circuit face problems in the future?
AI: Not recommended for new designs means just that - they want to remove that SKU and are warning you that it will happen soon. If you are producing a large volume (often not actually that big, >10K), they might continue manufacturing for you on a contract basis.
This warning suggests you should move to a compatible or newer microcontroller for a design you want to start producing now and into the future. In this case, it would be the "ATTINY10-TSFR" if you want to have the same temperature range. If you can secure enough stock of the obsolete part for your purposes, then go for it, but be aware they might not be available in the future.
|
H: Why should we use 3.3 V instead of 3 V?
Why do we use 3.3 V in circuits? Why not 3 V? Is there a specific standard? Or do we use it because it was 3.3 V from the beginning?
I am just curious to know.
AI: Why do we use 3.3V in circuits? Why not 3?
Why not \$\pi\$ V? Why not 2.8 V, a much nicer number than 3 V? The more things like power consumption or speed matter, the less you align with "human-pretty" numbers, and more with physical needs.
In this case, the physical need is actually "something slightly above 3V, but less than 5V, to save power in our new LVCMOS circuits", ca 1970.
Point is: when TTL (transistor-transistor logic) was still the dominant technology for integrated logic, supplies lower than ca. 4.5 V were impossible, due to large collector-emitter voltages in the bipolar junction transistors used there. Hence, with a bit of headroom, 5.0 V became a standard.
Now, CMOS was introduced, and it could work well with supply voltages down to ca 3 V. People wanted to give a little headroom.
So, my guess here is why it's 3.3V, and not 3.15V or 3.4 V: they picked a voltage for which there were already voltage regulators ready, in the drawers: 3.3V, which had, interestingly, already been (one) supply voltage of the Apollo Guidance Computer, so NASA and early semiconductor companies had poured in money into building these.
TL;DR: >3 V: needed, because 3 V was just a tiny bit too low for reliable CMOS logic gates at the time of invention. 3.3 V: probably because hardware for that voltage already existed in the early 1970s.
|
H: In an SR latch, how does Q change when a pulse is applied to the reset input?
Here is the initial state of SR latch both reset and set inputs are zero.
Here we apply a pulse to the reset input and it shows it changes in this way
first -
and then -
The thing that I don't understand is how the R pulse can change the Q state without the first NOR gate receiving a feedback signal from the second NOR gate, while the second NOR gate must receive a feedback signal from the first. Which of them happens first?
AI: The truth table of a NOR gate tells you that the output will only be 1, if both inputs are 0. As soon as one input gets to logic 1, the output switches to a logic 0.
When R is set to 1, the condition of both inputs being zero is no longer fulfilled, so the output immediately goes low to 0.
At this time the lower NOR gate still has to wait for the logic 0 to appear at its upper input; until then its output stays low. But as soon as that upper input sees the 0 from Q, the gate has two 0s at its inputs and switches its output to 1.
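The two-step settling described above can be reproduced with a tiny step-by-step simulation (my own sketch, not part of the original answer):

```python
# Cross-coupled NOR latch: update the upper gate first, then the lower
# one, repeating until nothing changes. Starting from the stored state
# Q=1, Qbar=0, a reset pulse (R=1) flips Q first, then Qbar.

def nor(a, b):
    return 0 if (a or b) else 1

def settle(s, r, q, qbar):
    """Return the sequence of (Q, Qbar) states until the latch is stable."""
    states = [(q, qbar)]
    while True:
        q_new = nor(r, qbar)       # upper gate: inputs R and Qbar
        qbar_new = nor(s, q_new)   # lower gate: inputs S and the new Q
        if (q_new, qbar_new) == (q, qbar):
            return states
        q, qbar = q_new, qbar_new
        states.append((q, qbar))

history = settle(s=0, r=1, q=1, qbar=0)
print(history)  # [(1, 0), (0, 1)]: Q falls first, then Qbar rises
```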
|
H: Why does the distance between the capacitor plates appear doubled when finding the capacitance?
This may be an easy problem.
The problem requires to find out the capacitance.
An ideal parallel-plate capacitor has plate area \$S\$ and plate separation \$d\$.
A dielectric with permittivity \$\epsilon\$ fills the right half of the space between the plates, and the remaining left half is vacuum (with permittivity \$\epsilon_0\$, of course).
The textbook solution says that this capacitor is equivalent to two capacitors connected in parallel.
One of the parallel capacitors is filled with the dielectric and the other with vacuum.
\$C_0:=\frac{\epsilon_0 S}{2d}\$(capacitance of the capacitor with vacuum of parallel)
\$C_1:=\frac{\epsilon S}{2d}\$(capacitance of the capacitor of the dielectric of parallel)
My doubt comes from here.
Why does each denominator contain a \$2\$?
I thought that each denominator should be \$d\$.
The answer gives the total capacitance as \$C=C_0+C_1\$.
Can anyone give me hints so that I can resolve the doubt on my own?
AI: I'll just give a hint for the moment.
If we wrote it as
$$ C_0 := \frac 1 2 \frac {\epsilon _0 S} {d} $$
$$ C_1 := \frac 1 2 \frac {\epsilon S} {d} $$
would it make more sense? Can you see where \$ \frac 1 2 \$ would come in?
|
H: What do the markings 40/100/56/B mean on this capacitor?
I am trying to replace a steam iron capacitor which has the following markings
I know that it's 0.68 micro Farad with 10% tolerance, but I can't seem to understand the 40/100/56 marking. There's replacements online with 40/100/21 so I'd like to know if they're suitable.
Thank you.
AI: That is the climatic category data for your capacitor.
Your capacitor is rated from -40 to +100 degrees C. The 56 means that your capacitor will be OK if operated at 95% humidity and 40 degrees C for 56 days.
The "B" might be the flammability rating, as described in this document from Vishay (see page 5.)
Given that you mention a steam iron, I think you'd be better off finding capacitors with the same climatic rating as the original.
That capacitor is a safety capacitor - that's the "X2" marking.
Make sure that any replacement is also rated X2 and also has the same voltage rating as the original.
|
H: Understanding voltage drop on a transistor
I've been reading different answers online and am trying to get my head around calculating transistors.
I have a circuit like this:
(Imagine the base goes to a microcontroller and is connected properly.)
After the red LEDs, the voltage will drop to 3.2V.
Will the transistor (2N2222) drop any more voltage?
If this circuit goes to ground (remove the transistor) I've calculated about 15mA goes to each LED. If there was more voltage drop, would I need to reduce the resistance?
AI: If the transistor base current is adequate the transistor will be driven into saturation and there will be about 0.2 V between the collector and emitter. This is small enough that it can be ignored in many calculations such as this.
That leaves about 3 V across the resistor so I = 3/75 = 40 mA shared between the three LEDs. Generally direct paralleling of the LEDs is avoided because the currents will vary quite a lot depending on their individual forward voltages, Vf.
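Plugging in the numbers from the answer (a quick check, assuming the ideal case of perfectly matched LEDs):

```python
# With the 2N2222 saturated (Vce ~ 0.2 V), about 3 V remains across
# the shared 75 ohm resistor; the three parallel LEDs split the current.

v_resistor = 3.0                 # volts left across the resistor
r_ohms = 75.0
i_total = v_resistor / r_ohms    # 40 mA total
i_per_led = i_total / 3          # ~13.3 mA each, if perfectly matched
print(f"{i_total*1000:.0f} mA total, {i_per_led*1000:.1f} mA per LED")
```

In practice the split is uneven because of Vf spread, which is why each LED is normally given its own resistor.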
See what I've written in Variations in Vf and binning for more on the topic.
|
H: A mistake in "Op Amps for Everyone" about op amp attenuators
In Texas Instruments' book "Op Amps for Everyone" (5th edition, link, see page 347), in chapter 25 "Common Application Mistakes" I believe there is a mistake (ironically).
It states that it's wrong to use a unity-gain stable opamp for an attenuator like this:
I believe the statement is wrong, because that attenuator has a noise gain of \$1+\frac{R_F}{R_G}\$, which is always \$\ge 1\$. Op-amp stability is determined by the noise gain (i.e. the gain with respect to the non-inverting input), not the gain of a desired signal (the op amp doesn't know what the desired signal is). There is even a compensation technique which increases the noise gain while preserving the signal gain (e.g. adding a resistor between the op amp's inputs in fig. 25.1). Surprisingly, even in the same book, in chapter 7, sect. 7.4, there is a highlighted statement, quote:
Several things must be mentioned at this point in the analysis. First, the transfer functions for the noninverting and inverting equations, (7.13) and (7.18), are different. For a common set of \$Z_G\$ and \$Z_F\$ values, the magnitude and polarity of the gains are different. Second, the loop gain of both circuits, as given by Equations (7.15) and (7.19), is identical. Therefore the stability performance of both circuits is identical although their transfer equations are different. This makes the important point that stability is not dependent on the circuit inputs.
It means that an inverting amp (like in fig. 25.1) has the same stability as a non-inverting amp with the same resistors (as if \$V_{IN}\$ be grounded).
Also, I sometimes see attenuators like in fig. 25.1 in real-world circuits designed by professionals. For example, in Agilent 6060B there are some examples, e.g. an inverting amp with the signal gain of \$-\frac 16\$ (the noise gain of \$1+\frac 16\ge 1\$):
Is this a real mistake in the book (which has survived to the 5th edition!), or am I missing something?
AI: TL;DR
It's a mistake in the book.
Details
After some googling (and pirating different editions of "Op Amps for Everyone") I found that the original editor of "Op Amps for Everyone" (1st-2nd editions) was Ron Mancini. These editions don't contain the caution about \$R_G > R_F\$.
From the 3rd edition the editor of the book changed to Bruce Carter, and the "Texas Instruments" label is gone from the title. He placed that caution in many places around the book. He didn't remove the old material, though, like the quote above about inverting and non-inverting circuits having the same loop gain (and hence the same stability), which directly contradicts Carter's caution.
There are some other discussions on the web, e.g. this one with Ron Mancini himself, and this one where Michael Steffes (from Texas Instruments) wrote:
This actually was a classic mistake by Bruce Carter that Mancini seems to paper over for some reason. Somewhere Bruce got the idea that an inverting configuration had a noise gain of Rf/Rg instead of the correct 1+Rf/Rg. Bruce Trump and I wasted about an hour one day trying to explain that error to him with no luck.
|
H: Different answer from power equations
Imagine a circuit like this: 6 red LEDs with a Vf of 1.8 V are wired in series with a resistor. Vcc is 12 V.
To achieve a current of 10 mA, you would do (12 − 10.8)/0.01 = 120, so a 120 ohm resistor.
Now to calculate the power dissipated over that resistor I have three options:
P = VI = 12 * 0.01 = 0.12W
P = V^2 / R = (12)^2 / 120 = 1.2W
P = I^2 * R = (0.01)^2 * 120 = 0.012W
So, which one is it?
AI: Now to calculate the power dissipated over that resistor I have three options... So, which one is it?
The correct answer is #3 because #1 calculates the total power dissipated in the resistor and LEDs and #2 is just plain wrong because it assumes 12 volts is across the resistor.
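A quick numerical check of why option #3 is right and what option #1 actually computed:

```python
# The three power formulas agree only when V, I and R all refer to the
# same component. For the resistor, V is the 1.2 V across it, not 12 V.

v_supply = 12.0
v_leds = 6 * 1.8              # 10.8 V across the LED string
r_ohms = 120.0
i_amps = 0.01

v_resistor = v_supply - v_leds    # ~1.2 V across the resistor

p_vi = v_resistor * i_amps        # 0.012 W
p_v2r = v_resistor**2 / r_ohms    # 0.012 W
p_i2r = i_amps**2 * r_ohms        # 0.012 W

p_total = v_supply * i_amps       # 0.12 W: whole circuit, not the resistor
print(p_i2r, p_total)
```

Using the full 12 V in V²/R double-counts the LED drop, which is why option #2 came out ten times too large.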
|
H: Touch sensor with a CR2032 battery and a transistor
I'm not an electronics engineer but I have a question that may look strange to a specialist:
Can I make a touch sensor that lights a LED using a CR2032 battery?
I tried to use this circuit without the resistors and with a different power source, but the LED is not at full power.
Thank you.
AI: You could play with something like this:
simulate this circuit – Schematic created using CircuitLab
The CR2032 has enough internal resistance that the LED does not burn out.
There's a good chance M1 could be damaged by touching the gate but if you touch the (-) battery terminal first it should work for a while.
Eventually the gate leakage will cause the LED to turn on or off (by itself) after some time.
|
H: What is the resistance of this DC fan?
I opened and took out some components from the power supply of a desktop computer that doesn't work anymore (the supply failed, not the computer; I bought another supply). One of them is a 12 V DC fan: HA1225M12S-Z.
When I supply 12 V to the soldered terminals on the other side of the PCB, where it is plugged in, the fan works normally.
My doubt is about the electrical resistance of the fan. When I measure it with the \$\Omega\$ function of my multimeter, I get \$20k\Omega\$, which doesn't make sense, because a current of \$0.45A\$ is written on the fan.
Probably the measure is wrong, but on the other hand the multimeter seems to give good values for other resistances.
The DC current function of the multimeter is not working, so I cannot check the resistance by applying a voltage and measuring the current with the fan locked.
I could not find the specification of the resistance of this fan in the web. Am I right to suppose that it must be \$26.7 \Omega \left(\frac{12}{0.45}\right)\$?
AI: Fans often found in computers are DC brushless fans, and they contain drive electronics. As an electronic load they are not pure resistances, so measuring the resistance is meaningless, except for determining whether they have a short or open connection.
|
H: Can I connect the LoRa Module pin to SMA Antenna with wire?
I have an RFM95W LoRa module and an antenna with an SMA connector (you can see it in the image below). The module does not have an SMA connector; there are just pins. Can I connect the SMA connector to the LoRa module by soldering some wire? Will this cause any communication problems?
AI: It's not ideal but if you keep the connections as short as possible then the losses will be minimised. You could use a short length of 50-ohm coax, although that would probably be no better since you'd have two connections rather than one. Electrically the best option would be to solder the SMA connector directly onto the module, although mechanically you might prefer to mount the connector rigidly and then use some very short wires (5mm). Normal hook-up wire will be ok, you don't need anything special.
|
H: Buck converter output voltage is zero
I've designed a PCB which uses a TPS54531 to generate 5 V from a 12 V input. The circuit below is similar to the reference circuit from TI.
After power-on, VCC5_TMP reads an abnormal value of 0 V instead of the expected 5 V. Then I measured PH (pin 8), shown below, which is also incorrect.
After that, I measured VIN (= 12 V) and EN (> 1.25 V), which are both good.
After these measurements, I still can't see why 5 V isn't generated correctly. The power supply and the EN control are both correct, and all the other components were selected as the reference circuit suggests. Can someone help me find the error in this circuit? Thanks.
[Update]
The problem was that diode D2 was soldered in the wrong direction. After I fixed it, everything is OK now.
AI: Based on the symptoms, it is likely that diode is incorrectly mounted.
|
H: Electronic components identification
In the picture below, I have a power resistor, which is powered with either 12 V DC or 24 V AC from the rest of the circuit, as well as another component, which is inserted in one of the cables leading to the resistor.
My questions are:
What does each number mean on the power resistor marking?
What could the other component be?
Edit: This is a CCTV camera and the power resistor is used to make sure, through heating, that the glass in front of the lens does not become foggy.
AI: The "gold" metal clad power resistor is made by: -
S.I.R. SOCIETÀ ITALIANA RESISTOR, Via Isonzo 13, 21053 Castellanza (VA), Italy
The general data sheet that describes their metal clad products is HERE and that particular resistor is 68 Ω 25 watts: -
The other component looks like a ferrite core of some sort. The "15" might indicate it has an inductance of 15 μH.
Good luck.
|
H: Colpitts oscillator not functioning
simulate this circuit – Schematic created using CircuitLab
I don't understand how the circuit works. I know it is a Colpitts oscillator, but the part where I get confused is that the video on YouTube says we can set Ve = 1/2 Vcc. I don't understand how this can happen, since we don't have a base resistor, and the emitter resistor will try to draw as much current as the emitter supplies, but the voltage drop on the emitter resistor can't be more than Vcc. I am not sure this circuit can work, based on what I wrote earlier. Can somebody explain it to me, please?
AI: If R2 is the same value as R3 then they will project a DC voltage onto the base of about 50% of Vcc. The emitter will be about 0.7 volts below that. The emitter current will then be approximately half Vcc minus 0.7 volts divided by Re.
I know it is a Colpitts oscillator but the part where I get confused
is that in the video on youtube it says we can set Ve = 1/2VCC
By making R2 a little smaller than R3, the base voltage can be adjusted to be half Vcc plus 0.7 volts thus, the emitter voltage will be half Vcc.
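A bias-point sketch of the answer's reasoning, with assumed component values (Vcc and Re are not given in the question, and R2 is taken as the upper divider resistor):

```python
# Voltage-divider bias: with R2 = R3 the base sits at ~Vcc/2 (ignoring
# base-current loading), and the emitter follows ~0.7 V below it.
# All component values here are illustrative assumptions.

vcc = 12.0
r2 = r3 = 10_000.0   # R2 from Vcc to base, R3 from base to ground
r_e = 1_000.0

v_base = vcc * r3 / (r2 + r3)    # 6.0 V
v_emitter = v_base - 0.7         # 5.3 V
i_emitter = v_emitter / r_e      # 5.3 mA
print(f"Vb={v_base:.1f} V, Ve={v_emitter:.1f} V, Ie={i_emitter*1e3:.1f} mA")
```

Making R2 slightly smaller raises the base by ~0.7 V and puts the emitter at exactly Vcc/2, as the answer describes.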
|
H: Arduino Due ADC sampling
I am working on an Arduino board and I have no previous experience with it, unfortunately. I need to sense the voltage signal from a sensor using the ADC of the Arduino. I have an Arduino Due manual which says it has up to 12 useable pins for ADC input. I want to acquire 4 analog signals. However, I have two questions:
I want to fetch the data at a high frequency. So let's say the sampling time of the Due is 60 μs:
A. will it take all of 4 analog signals at T = 60 μs and then convert them to digital? Or
B. will it take the first signal at 60 μs and fetch the second analog signal from pin 2 at 120 μs and so on from 4th pin at 240 μs?
Which one of these is the correct scenario?
It says that the Arduino Due can take 1M samples per second. Of course, this is an ideal figure. But, will these 4 analog signals count as 1 sample of the Due or 4 samples?
AI: As far as I understand, this µC has a single ADC hardware block with multiple channels (multiplexed inputs). This means that, for each conversion, it needs some acquisition time to charge its internal sample-and-hold capacitor and then some time to do the actual conversion.
So, the ideal 1 M samples/second is for a single channel only. If you use 4 channels, you can have 250 k samples/second max.
Sampling time is also specified per channel. This device can't sample all the channels at the same time; it must sample them one by one.
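The practical consequence, in numbers (the skew figure assumes back-to-back conversions at the headline rate):

```python
# A multiplexed ADC shares its conversion rate across channels and
# samples them sequentially, so the per-channel rate drops and the
# channels are skewed in time relative to each other.

max_rate_sps = 1_000_000    # ideal single-channel figure
channels = 4

per_channel_sps = max_rate_sps // channels   # 250_000 samples/s each
skew_between_channels_s = 1 / max_rate_sps   # ~1 us between channels
print(per_channel_sps, skew_between_channels_s)
```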
|
H: Simplification of ideal op amp circuit
For an assignment at school I have an op-amp circuit whose transfer function I need to derive. I want to ask if my approach is correct, because it feels like I simplify it too much. (R1, R2 and R3 have the same value.)
If I assume the op amp is ideal, I can say that V2 is equal to V+, because V+ and V- are equal and V- is V2. That would mean that I can simplify the circuit to this:
And then in turn to this:
And then I want to use the node-voltage method to get the transfer function eventually. Is this simplification correct? Or am I making weird assumptions? (I don't know the answer, so I can't check my eventual result.)
AI: I'm afraid the simplification is incorrect.
The reason for this is that the resistor \$R_3\$ creates an opportunity for the two nodes to influence each other. The original circuit only allows the output to influence the node between the two capacitances because the output of the opamp is an ideal voltage source (it forces current to flow via ground or a supply voltage). This is why buffers are typically used: to avoid the output influencing the input.
You will have no choice but to include an ideal (voltage-controlled) voltage source on the other side of \$R_3\$ that has the same voltage as \$V_2\$.
|
H: BAV99 forward voltage
I am reading the BAV99 datasheet from ON Semiconductor.
From the table on ELECTRICAL CHARACTERISTICS (page 2)
(TA = 25°C unless otherwise noted) (Each Diode)
Forward Voltage (mVdc):
IF = 1 mAdc: 715 mV
IF = 10 mAdc: 855 mV
IF = 50 mAdc: 1.0 V
IF = 150 mAdc: 1.25 V
On page 3 there is the following VI curve graph:
Clearly, at 25°C and 100 mA, Vf = 0.9 V.
There is a huge difference between the table and the I-V curve.
Which source of information should I trust?
Is this a typical-versus-maximum issue, where the table guarantees the diode won't exceed Vf = 1.0 V at 50 mA, while the graph just shows typical values?
AI: The table has the max values for Vf at each current; it's saying Vf will not be more than that value. The graph is probably showing the typical value you'll see, but some parts will be above or below that.
As for which value you use, it makes sense to account for what would be worst case for your design, if you're concerned about power dissipation, then taking the max Vf for a given If makes sense.
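One hedged way to use the table for worst-case work is to interpolate the guaranteed-maximum points on a log-current axis. This is my own approximation, not a datasheet method; the datasheet only guarantees the listed points themselves:

```python
# Max-Vf table from the datasheet, as (If in A, Vf in V) pairs, with a
# log-current linear interpolation between adjacent rows.

import math

vf_max_table = [(1e-3, 0.715), (10e-3, 0.855), (50e-3, 1.00), (150e-3, 1.25)]

def vf_max(i_f):
    """Interpolated worst-case Vf estimate for a current inside the table."""
    for (i1, v1), (i2, v2) in zip(vf_max_table, vf_max_table[1:]):
        if i1 <= i_f <= i2:
            frac = math.log(i_f / i1) / math.log(i2 / i1)
            return v1 + frac * (v2 - v1)
    raise ValueError("current outside table range")

print(f"estimated max Vf at 100 mA: {vf_max(100e-3):.2f} V")
```

That gives roughly 1.16 V at 100 mA, versus the ~0.9 V typical value read from the graph, consistent with the typical-versus-maximum explanation above.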
|
H: Determining voltage by arc length
Ludic science light dimmer driver
I was watching this video made by Ludic Science, and at 3:14 he says that at the maximum output of the high-voltage ignition coil, the arc distance was 10 mm (1 cm), which he claimed corresponds to 10-12 kV. Isn't the gold standard for the breakdown voltage of air 30 kV/cm? Can someone explain?
AI: There is a big difference in dielectric breakdown voltage (BDV) between smooth parallel surfaces and point sources such as sharp wire tips, due to the E-field gradient's effect on ionization.
Yes, 30 kV/cm (3 kV/mm) is true for smooth surfaces.
But for wire tips it is about 10 kV/cm (1 kV/mm). This also raises the resistance, and arcing begins with a lower current, which is less visible.
This of course varies with dust, humidity and pressure.
This BDV has nothing to do with the "holding current" gap which may be increased after conduction occurs.
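Putting the figures together for the video's arc (a back-of-envelope sketch):

```python
# With the ~10 kV/cm figure for sharp tips quoted above, a 1 cm arc
# implies roughly 10 kV, matching the 10-12 kV claimed in the video.
# The smooth-surface figure would wrongly predict 30 kV for the same gap.

kv_per_cm_tips = 10.0    # sharp wire tips
kv_per_cm_smooth = 30.0  # smooth parallel surfaces
gap_cm = 1.0

print(kv_per_cm_tips * gap_cm, kv_per_cm_smooth * gap_cm)
```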
|
H: Resistor circuit to correct impedance matching and attenuate signal by 10 decibels
I've got a Line Out on my amplifier with 100-600 ohm impedance (avg. 350). I have it connected to an effects pedal that has a 1M ohm input.
Right now the signal from the amplifier is clipping, and I would like to attenuate it with a resistor-only configuration, and simultaneously adjust the impedance each side sees. I'm looking for just 10 dB of attenuation, or about one third of the voltage.
Fix:
Amp needs to see 10K ohms on the other side instead of 1M ohms.
Pedal usually expects 1K ohms on the other side instead of 100-600 ohms (avg 350).
1/3 volts at the pedal inputs is desired attenuation.
All using unbalanced connections.
With the configuration below, the amp now sees 10.3 k ohms and the pedal now sees 1 k ohms. And the voltage is 3.33 V instead of 10 V.
Any problems that you see? I tried T and Pi configurations and they didn't work. By adding the resistor at the bottom it all seems to compute. Not sure if I did this right.
AI: The 50 K resistor (reference designator?) is in the GND connection between the source and the load. This is almost never a good idea in audio circuits. Also, because your path lengths are trivially short compared to the wavelengths involved, true impedance matching is not at all necessary. Better to go with a simpler L-pad, a two-resistor attenuator.
Why does the amp "need" to see a 10 K load as a maximum value? Most amp outputs care only about the load impedance being too low, not too high. The line output from the amp is probably a resistor in series with the main output. Combined with the input impedance of whatever is connected to it, it creates an attenuator, but that is only an issue if the thing plugged in has a relatively low impedance, such as headphones. The main reason for the build-out resistance is to prevent a brief dead short across the amp output when something is plugged into the connector. Of course, this is all guesswork because we don't have any information about the amp.
I recommend a 10K series resistor and 4.7K shunt resistor between the amp and the pedal. This presents a load impedance that is greater than 10 times the amp's output impedance, and a source impedance that is less than 10 times the pedal's input impedance.
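Checking that recommendation against the stated goals (a quick sketch; the ~350 ohm source figure is the question's average, and the divider is treated as unloaded by anything else):

```python
# L-pad of 10 k series + 4.7 k shunt, loaded by the pedal's 1 M input.
# Verify the load the amp sees and the resulting attenuation.

import math

def par(a, b):
    """Resistance of two resistors in parallel."""
    return a * b / (a + b)

r_series, r_shunt, r_pedal = 10_000.0, 4_700.0, 1_000_000.0
r_amp_out = 350.0   # average amp source impedance from the question

r_seen_by_amp = r_series + par(r_shunt, r_pedal)     # ~14.7 k
gain = par(r_shunt, r_pedal) / r_seen_by_amp         # ~0.32
atten_db = 20 * math.log10(gain)                     # ~-9.9 dB
r_seen_by_pedal = par(r_shunt, r_series + r_amp_out) # ~3.2 k source

print(f"amp load {r_seen_by_amp/1e3:.1f} k, {atten_db:.1f} dB, "
      f"pedal source {r_seen_by_pedal/1e3:.1f} k")
```

The divider alone gives about -9.9 dB; the amp's own build-out resistance in series with the 10 k adds only a fraction of a dB more.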
|
H: What is the purpose of the fenced-off area on this board?
I saw a recent video on YouTube where the presenter describes a commercial micro-current measurement product. What purpose does this tinned fence serve? I'm guessing its for noise suppression but I'd like to know more. Is it grounded? Since the parts inside the fence interface with parts outside the fence, do they all go through a single entry point inside the area?
AI: The holes in the board reduce electrical leakage, provide mechanical strain relief, and reduce thermal coupling and externally induced thermal gradients.
Overview
Let's start with leakage. Remember that when you measure small enough currents, PC boards are conductive - and any contaminants are even worse. Wherever material is removed, the conductance of that area is gone. This is no different than drilling holes in a metal foil to increase its resistance, or using a laser to remove conductive material when trimming on-chip resistors.
Thermal conductance from the island to the rest of the board is decreased by the holes as well. If the island has only low-dissipation components, then it will have much lower thermal gradients than it would had it been conducting heat between other parts of the board. This will decrease the variation in thermoelectric voltages across the island.
The holes also provide some mechanical strain relief, although this may not have been a primary concern in this design, since other hole shapes could do an even better job. PC boards carry thermal strains developed by unequal heating or heat sinking across their area. In very sensitive DC circuits, such strains cause offset drift in various circuit elements. PCB traces are metal strain gages whether you want them to be or not - most circuits are not sensitive enough to measure it, but the effect is there, always.
In highly-sensitive and accurate circuits it's a death by a thousand cuts, and to get full performance out of the expensive parts (precision is never free), even minute and often unrecognized or neglected contributions to the result have to be identified and remedied.
Mechanical and Thermal Strain
Amplifiers and other ICs can have their offsets changed by mechanical strain on the pins, and this gets worse the smaller the package. Passive parts such as capacitors may have piezoelectric effects, and even at low frequencies the magnitude may be enough to be problematic. It becomes rather important when you're e.g. measuring voltages with 24+ bit resolution, and gets ridiculously critical past 30 bits of resolution. You can take a toothpick and press very lightly on the board opposite of the converter chip, and the readout will drift due to mechanical strain transmitted through the lead frame into the encapsulant and further to the die. Highest precision A/D converters have their packaging designed to minimize such strain coupling, but it's very hard to completely get rid of all of it.
Guard Ring
The exposed shiny trace that goes around the island circuit is a guard ring: its goal is to provide an equipotential "barrier" around the circuit; this implies that all currents that "cross" the guard shall not bypass it, otherwise the equipotentiality is not generally maintained.
The ring's conductance is much higher than the conductance of the board - say about 8+ orders of magnitude for a "garden variety" FR-4 board, and 10+ orders of magnitude for uncontaminated high-performance materials (where a fingerprint will usually cause painful loss of performance). Compared to the board material and soldermask, the guard may as well be "infinitely conductive", the potentials along the guard will be equal. This establishes a boundary condition that you can design for: all high-impedance parts of the circuit inside the guard essentially have high value resistors to this guard perimeter, which can be kept at a desired potential. This will decrease the differential conductive coupling between the circuit and the rest of the board.
The ideal is that no current shall bypass the guard ring.
The guard ring is purposefully not covered by soldermask. Even though the soldermask has "low" conductance, it doesn't have zero conductance, and barring its absence we wouldn't get the full performance the board is capable of. In all cases, some leakage current will flow through the bulk of the soldermask and thus will bypass the guard ring, decreasing its effectiveness.
Surface Leakage
In the (oft inevitable) presence of contamination, the surface of the soldermask will be much more conductive than the bulk of it, and those shunt paths will be insulated by the soldermask from the guard ring.
In extreme circumstances, there may be some contamination that chemically or physically alters the bulk of the soldermask and increases its conductance. The board material itself may well not be susceptible to such effects, especially if it's chosen with some foreknowledge of the operating environment (and an investigation of the compatibility of all materials used in the board manufacture with such environment).
Isolation of the Ring
The guard itself then may need to be further isolated from the rest of the circuitry as well. First, we'd isolate the power supply, since it obviously has to cross the guard ring and acts as a prime shunt path. The isolation components need to be designed for leakage low enough not to degrade the performance, i.e. they have to match or outperform the low-leakage performance of the PC board with the guard ring.
Further leakage isolation of the island is provided by the holes, since the bulk "average leakage" from the guard to the rest of the board can also affect performance. This depends on the circuit and application somewhat. After all, there is the leakage from the guard ring to the rest of the circuit - the equipotentiality along the ring doesn't change that. How much effect does that leakage have on the circuit function really depends on the application.
Shielding
A conductive shielding can will keep away the electrostatic fields and RF. It can be potentially mounted on the exposed part of the guard ring, although that's not always desirable.
A hermetic can prevents ingress of contaminants as well. Unfortunately, it also turns the traces underneath into inadvertent differential atmospheric gage pressure sensors, never mind adding mechanical strain to the board under such differential pressure loading. To minimize stress concentrations, the can should be circular or oval. The pressure sensitivity is mitigated by designing some bellows into the can. Either the can itself can be the bellows, in a no-expense-spared scenario, or a compliant membrane can be added as a "fake bottom" of the can: the top of the can then has a small vent hole, and a deformable but hermetic structure underneath seals the can - say, a circular diaphragm with a compliant "suspension" section.
As a side note: it's a good thing that the holes usually serve multiple beneficial purposes when high-impedance islands are desired: they decouple things thermally, electrically, and mechanically!
|
H: Primary voltage drop in current transformer
Following on from this question, Current transformer energy harvesting from a mid-voltage line, I've never seen any mention of primary voltage drop in a current transformer circuit.
simulate this circuit – Schematic created using CircuitLab
Figure 1. A typical CT application using a 5 A meter to measure the current through a load.
Since the burden resistance of the ammeter is reflected back onto the primary side by the inverse turns ratio squared there has to be a voltage drop on the primary side.
This raises a few questions.
Where does the voltage drop occur? Does it fade in and out with a peak right inside the CT?
Would we see a larger reading on VM2 compared with VM1 (10 mm and 5 mm from the plane of the CT, for example)?
What is the relationship with the dimensions of the CT?
What is the effect of the angle between the axis of the CT and that of the cable. (CT
AI: Would we see a larger reading on VM2 compared with VM1 (10 mm and 5 mm from the plane of the CT, for example)?
Unlikely. Each voltmeter, together with its leads and the corresponding section of the wire passing through the core, forms a loop. That loop is cut by the varying magnetic field inside the core of the CT. Thus, they form a one-turn secondary of the transformer, and will see a voltage almost identical to the voltage drop of the primary. Almost all of the magnetic field is confined to the core, so the area of the loop will have little effect, although theoretically, if the loop for VM2 encloses that for VM1, it might encircle more magnetic lines of force.
Where does the voltage drop occur? Does it fade in and out with a peak right inside the CT?
That is problematic, see further down the question. You could probe a very small section of the wire in the CT core, and depending upon whether the test probes enter the core from either side, or whether they enter from the same side you will get a different answer. If the test probes enter from opposite sides, you will get a voltage approximately equal to the full voltage drop of the CT. In fact you could touch these probes together and still get the full voltage drop, because the probes are forming a 1 turn secondary. However, if the probes enter the CT from the same side, you will get almost 0 voltage. What you do get will be mostly the resistive drop.
The question you ask poses a measurement conundrum. The circuit consisting of either voltmeter, its leads, and the conductor through the CT core is "cut" by the changing flux of the CT core, in effect making a transformer secondary.
From one point of view, the voltage between the contact points of the voltmeter leads is undefined. To explain that point of view, consider Maxwell's(*) third equation, i.e. Faraday's Law.
Faraday's Law states:
$$\nabla \times E = -\frac{1}{c}\frac{\partial B}{\partial t}$$
If there is no time-varying magnetic field, this implies that
$$\nabla \times E = 0$$
which means that E is a conservative field. Being a conservative field implies that E is the (negative) gradient of a scalar function (which we call V):
$$E = -\nabla V$$
V, being a scalar function implies that the sum of changes around a loop must be 0.
However, the assumption that there exists no time-varying magnetic field is violated in the circuit consisting of the voltmeters, their probe wires, and the mains conductor. Accordingly, MIT Professor Lewin has argued that the voltage between the two lead points of the Volt-meter is undefined. In a class demonstration, Lewin shows two voltmeters connected to the same test points showing different voltages. Lewin's position is further explained in this blog.
(Lewin's demonstration is discussed here, but I am not satisfied with the explanation given.)
Others, such as Electroboom/Mehdi, have argued (see also here) that the problem with the example Lewin provides lies in measurement technique. Volt-meter probe wires should not be arranged to form a loop cut by a changing magnetic field. If the probe wires are re-arranged, consistent voltage measurements are observed in Lewin's experiment. If we take that point of view, however, and re-arrange the test leads to avoid loops cut by changing magnetic fields in order to get a consistent/proper voltage measurement, then one of the test leads needs to pass through the CT core. This, however, will result in a voltage drop consisting only of the conductor resistance.
Hence, the conundrum. How should the test leads be arranged? If they form a loop cut by a changing magnetic field, is the voltage well defined? If they are not cut by that changing magnetic field, where is the voltage drop created by the CT?
*The equations called "Maxwell's Equations" were first formulated in their modern form by the electrical engineer Oliver Heaviside.
What is the relationship with the dimensions of the CT?
Very little, provided the core is not saturated. If the core is not saturated, most of the magnetic lines of force will be confined to the core, and any wire loop around those lines of force will have approximately the same induced emf.
What is the effect of the angle between the axis of the CT and that of the cable. (CT)
Almost none, for the reasons explained above.
|
H: Inability to forward bias diode in a simple circuit
I am struggling with forward biasing a diode in the circuit in the attached.
All the wires are attached properly, as there is continuity. The diode drops only 0.400V with a 9 V battery connected, the resistor being 330 ohms. I cannot understand why the diode would not drop the amount required to forward bias itself and as a result allow current through the LED. At the moment most of the voltage is being dropped across the resistor and LED. Am I making some fundamental mistake I cannot see? Of course I tried connecting both the diode and LED in reversed polarity with no change.
Apologies for not posting more pictures. I have RSI and need to limit how much I use the computer.
AI: Diode is not the problem. You can remove it and replace it with a piece of wire to see that for yourself. Make a simpler circuit work first. Then when you add the diode it'll keep on working (as long as there's enough voltage driving the circuit).
You assumed that the diode is a problem, whereas before making such an assumption you should ask yourself: do you know enough to know that the diode is a problem? Remember that the voltage drop across a diode is ~logarithm of the current, so all you're seeing here is a diode acting like a rather sensitive current meter and telling you that microamps are flowing through it. You can replace the diode with a microammeter and see what the current is. I'd expect anywhere between 0.1 uA and 10 uA.
Once you get the circuit working, you should play with that diode - put it in series with a resistor, measure the voltage across the resistor to determine the current, then measure the voltage across the diode. Use various resistors to "sweep" the current across the measurement range. Plot the diode voltage vs. current over at least 3 orders of magnitude - say between 0.1 mA and 100 mA - and see whether the response is logarithmic (after all, I could be talking nonsense - you really should see it for yourself, since it's easy and thus there's no reason at all to blindly trust me).
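If you want to predict roughly what such a sweep should look like before wiring it up, here is a minimal sketch using the ideal Shockley equation; the saturation current and ideality factor below are assumed, illustrative values, not measured ones:

```python
import math

def diode_voltage(current_a, i_s=1e-12, n=1.8, v_t=0.02585):
    """Shockley-model forward voltage for a given current.
    i_s (saturation current) and n (ideality factor) are assumed values;
    real diodes vary widely, which is exactly why you should measure."""
    return n * v_t * math.log(current_a / i_s + 1)

# Sweep the current over several decades, as suggested above:
for i in (1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1):
    print(f"{i * 1e3:8.3f} mA -> {diode_voltage(i):.3f} V")
```

Each decade of current adds only about n·Vt·ln(10) ≈ 0.1 V to the forward drop, which is consistent with the point above that a ~0.4 V reading indicates only a tiny current.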
|
H: Analogue switch TS3A475 does the direction matter?
Does the "direction" of the signal matter?
signal is 1/3Vcc to 2/3Vcc
AI: No, it doesn't. But you need to make sure that the voltage is within the range specified in the datasheet (usually it must be between the supply rails of the switch), and also the switch has a non-zero resistance and a maximum allowed current through it. The switch resistance will be highly dependent on both temperature and voltage, so have to make sure that such variations won't unacceptably affect the performance of your entire system.
|
H: Switchable AC-DC output
So today I want to make it a little bit strange...
I'm working on a project and there are 4 outputs controlled by 4 relays and the relays' outputs are coming out of the board with phoenix so the user can use them like a switch.
But I'm thinking of taking it to a higher level so the user can choose what each output should be: for example, an output could be a switchable +5V DC (the output set as +5V DC and switched on and off with the relay), or a switchable 220V AC, so that the user can switch this 220V AC output on or off. Notice that within just one controllable output we can decide (with a microcontroller) whether it's +5V DC or 220V AC.
I'm thinking of using TRIAC.
I don't know if I made my point clear or not but I need your help, if you can help me please don't hesitate and if you didn't get my point tell me which part is not understood so that I make it more clear.
AI: You cannot use the same connector for 5V and 220V -- not without killing someone. So just forget about this idea.
I'm thinking of using TRIAC.
Why do you think about such details before thinking of simple things first, like whether you should even be doing it at all...
|
H: 24V with IRF3708
I have a strange issue. I wired everything up like this:
The ATTiny13 changes, for test purposes, every 10 seconds between HIGH and LOW. Since the IRF3708 needs 2.8V to switch state, I use the 5V "power line" with a transistor to get over the threshold.
FYI: This is not my whole project; but it must work as a single component. After all, some may think "he doesn't need this part at all" - I do :)
This setup works - even the 10s cycle etc. But when I measure between source of IRF3708 and GND - there is only 3V on my multimeter.
When I measure between Drain and GND, it is my 24V. So it seems that the IRF3708 is "swallowing" 21V while running through it.
I hope the problem is clear and you can help me.
Thanks a lot already!
AI: That's an N-type FET. It can't be used for high-side switching. Or it can, but then you'd need a gate voltage higher than the supply. Either change to a P-type FET, or better yet, simply use low-side switching.
|
H: Cleanest way to solar-power a string of outdoor bistro lights
I'm trying to hang some bistro lights on my balcony — specifically, the SVARTRÅ outdoor bistro lights from IKEA.
There's just one issue: the balcony has no power outlets, and I'm not going to snake a cable out my balcony door, so I'm trying to find a way to battery-power or solar-power these lights.
The lights take 5 V DC according to their power supply, and my kill-a-watt shows they draw 0.025 A at 120 V from mains when powered. I know enough about electronics to know I could power the bulbs by splicing in a D battery holder with 4 batteries in series, but I'd like to solar-power the lights if possible. My questions are:
Even though the power supply is rated for 5 V, would it be safe to give the bulbs 6 V? It would make it easier to test my setups with them without worrying about blowing them.
What combination of a solar panel/battery could I use that would be relatively clean while also giving the bulbs enough power to be turned on all night if desired?
EDIT: Some more info about the lights from a relevant FCC report on the power supply:
LED driver: KMUV-050-060-NA-2
Rating
Luminaire: 5 V DC, 3 W
LED driver input: 100 – 120 V, 50/60 Hz, 0.09 A
LED driver output: 5 V DC, 1.2 A, 6 W
Class
Luminaire: III
LED driver: II
AI: We can guess the 12 LED lamps are wired in parallel, and each has its own individual dropping resistor or driver circuit. (Because if they were wired in series the power supply voltage would have to be at least 12 times the individual LED forward voltages, probably 15V or more total).
I'd say you just need a 5V supply that can supply at least 1.2 A.
I'd suggest the easiest solar power solution is a "solar power bank" with 1 or more USB receptacle(s). You will need to cut the wire to your lights and splice a USB plug. When you make the cut, take care to test and label the wires to determine which is positive and negative, and match that to the USB voltage.
This search term Amazon solar power bank will find plenty, ranging from $25 up and supplying 2A or more. To determine if they can run all night, check the battery capacity. For example, a 20000 mAh battery is 20 ampere-hours, so it could supply 2 A for 10 hours, maybe not enough to run all night in winter. You also need to have plenty of sunlight all day to charge it. Finally, be aware these units will wear out from repeated charge/discharge cycles. Bottom line: reconsider whether you could find a way to run a wire through your door, window, or wall.
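A quick sketch of the runtime arithmetic above; the derating factor is my own assumption (packs usually rate mAh at the internal ~3.7 V cell, so the usable 5 V capacity is lower than the nameplate):

```python
def runtime_hours(capacity_mah, load_ma, derating=1.0):
    """Nameplate runtime is simply capacity / load; apply a derating
    factor to account for conversion losses (assumed value)."""
    return capacity_mah * derating / load_ma

print(runtime_hours(20000, 2000))       # 10.0 h -- the nameplate figure above
print(runtime_hours(20000, 2000, 0.7))  # ~7 h -- a more realistic guess
```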
|
H: Silicon-free amplifier
I'm working on building a silicon free computer and I need a way to store information after power off. Right now, solid core memory seems like the best option. However, the sense line of solid core memory produces a very small pulse of electricity when activated, and I need a way to boost that current enough to flip a relay (~3v,.7A).
TLDR; how can I boost a very small amount of voltage without using an IC or transistor?
AI: TL;DR: Any sort of a relay computer with integrated stored program memory (not paper tape) is a small-scale production run no matter how much one wishes it to be just a "one off" experiment. When a circuit, even a simple one, is copied hundreds of times, it needs to be engineered to not only be reliable enough that you won't be tweaking/fixing it constantly, but also so that you won't hate yourself after few hours of assembly. Iteration from smaller to larger "repeat counts" is critical. Prototype everything, and prefer to scale up progressively rather than all at once if you can. Design for manufacturing for a "hobby project" - who'd have thought, right?
I'm working on building a silicon free computer and I need a way to store information after power off
I'll interpret silicon-free as semiconductor-free, and focus on that line of pursuit in this answer. Others can surely cover non-silicon semiconductors. I'm also discounting heated vacuum tubes, since they are a royal pain unless you find a rich source of low-power ones used for portable radios. Otherwise you'll be dissipating hundreds of watts in just the row and column amplifiers.
We have two types of memory relevant to such a project: PROM and RAM. First, let's see what storage elements we can use. For PROM, any sort of small switch will do - e.g. dip switches of various forms, or even solid wires plugged into suitable receptacles (think of a solderless breadboard, but with just one contact per signal, instead of a whole column). For RAM, bistable relays work well, although not for your pocket (I'm working on such a computer too, and have settled on relays - many kilograms of them...).
You could also have ROM, i.e. the sort of a memory that can't really be reprogrammed easily after it has been manufactured, but then I'd suggest first emulating it with a semiconductor-based model (e.g. an Arduino coupled to some relays and input voltage dividers), and only committing to permanently soldering-in the program when the whole thing works. That way you can skip the potentially more expensive/cumbersome PROM, assuming that you want a fixed-function machine, or are OK with swapping ROM cards to change programs.
Next, let's see how to couple those storage elements to the data lines (e.g. D0-D7 on some bus). With semiconductors allowed, you'd use a diode matrix - cheap and effective. Without semiconductors, you have to use something else: vacuum tubes (good luck having thousands of those), threshold devices like gas discharge tubes or indicators (neons), or normally open relay contacts.
If you settle on relays for coupling elements, anything that can run some sensibly sized programs (say a couple kilobits of combined RAM and PROM) will consume thousands of relays, and has to be designed very carefully to keep heat dissipation in check: large, dense memories with lots of relays switched on at the same time will self-destruct without good cooling. In fact, you may well find out (as I did) that using non-latching relays for even "dynamic" memories is infeasible, and that to scale things you can only afford a few latching relays switching state at any one time, vs. a whole lot of astable relays keeping their coils energized to maintain their state. With bistable relays, permanence is free (as it may be with cores, except that cores need more support circuitry so it's not so "free" anymore, and you can hardly afford to use them for everything).
As a rule of thumb, small relays that are offered in SMT packages are in the 100mW-200mW coil dissipation range, so it doesn't take that many of them to get the equivalent of a 60W light bulb tightly surrounded by lots of not-very-thermally-conductive densely populated cards just waiting for trouble. 0.5k relays will be a tiny computer with no stored program and just a few registers, or a single-function device like e.g. a square root circuit. It really helps when you don't need to turn on the majority of them at any one time.
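The dissipation rule of thumb above can be sketched numerically; the per-coil figure is an assumed mid-range value from the 100 mW-200 mW range mentioned:

```python
def total_coil_dissipation_w(n_energized, coil_mw=150):
    """Aggregate dissipation of relay coils held energized at once.
    150 mW per coil is an assumed mid-range figure for small relays."""
    return n_energized * coil_mw / 1000

# A few hundred simultaneously energized coils already amount to
# the 60 W "light bulb" described above:
print(total_coil_dissipation_w(400))  # 60.0
```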
How can I boost a very small amount of voltage without using an IC or transistor
E.g. using a transformer. But that way you're boosting voltage only, not power. So the higher voltage is not the entire solution. You then need something else that can convert this low-power high-impedance high voltage signal to a much higher power, lower impedance signal you can use to drive things. Some sort of a device that can conduct relatively large currents but can be triggered by a low power high voltage pulse is needed - various gas discharge tubes can do it (I'm ignoring vacuum tubes here). Depending on how much energy you need, it may turn out that a neon bulb pre-biased close to ignition voltage may do the job. You can then use another neon bulb as a switch, so that the first neon is disconnected from the low impedance nodes until its voltage drops from bias to conduction. Then the other neon switch will connect the now low-impedance path to e.g. a relay coil or a pulse transformer. But such conceptually simple circuits may work well when you have just one on a breadboard, but may prove impractical when you need dozens or hundreds of them to perform reliably, since each will require either individual tweaking or selection of individual components based on parametric testing. Another big mistake is running neons or other discharge tubes exposed to light: after tweaking the circuit, it'll tend not to work as soon as you cover it up inside the enclosure, so it's best to experiment with those well shielded from light, if you wanted to experiment.
If the memory cores are large enough, then you could amplify the sense pulses with saturable inductors, but you'll need a "high frequency" AC source of some sort. A saturable inductor's inductance drops a whole lot when input signal is present, so an AC voltage is normally blocked by the inductor, but not when the core is saturated. I've experimented with a stepper motor driven by a cheap cordless drill as an AC source - in the end, I decided to do something else, but it's not a non-viable idea by any means. It just didn't fit into my design criteria.
There's a whole other design criterion that in hindsight is obvious enough, but can catch you unaware: parameter spread, both intrinsic (device specs) and extrinsic (operating environment). Things get real annoying when the whole design has only a narrow voltage or temperature range where it'll work, never mind a narrow range of some critical parameter of a group of components. This works great on paper but just doesn't scale at all. You can deal with tweaking a hundred things if you are dedicated enough, but tweaking a thousand things that then get inserted into a chassis and don't work anymore because they are 40C warmer than they were on the bench can be a bit of a setback. So try to stay clear from any circuits that require tight tolerances on anything. Ideally, you'd want the thing to not care about +/-25% operating voltage changes, and 50-80C in-system operating temperature range. Squeezing thousands of electromechanical components into a reasonably sized enclosure will have some components (e.g. those in the geometric middle of it all) running quite a bit hotter than those on the cards near the enclosure surface.
And you don't get any voltage regulation without semiconductors or vacuum tubes unless you use a saturable transformer as a regulator, but even then it's a risky road to take. Any optimistic decisions made are very hard to rework when even a minor component change (e.g. a different 2-terminal passive part in some coupling or snubber network) implies reworking hundreds or thousands of parts, with some potential for collateral damage along the way.
And also: fuse sub-modules individually on the supply lines, and if the busses aren't inherently conflict-proof (i.e. "open collector" etc.) then fuse each contact driving such a bus line. A push-pull bus line with just one stuck contact will happily destroy all the other contacts that attempt to drive some other state onto it, in rather short order. Fast, non-resettable fuses are critical for such bus drivers. Push-pull signals can be good in reducing complexity sometimes, but I abandoned them early enough not to suffer too badly. It was a clever trick that proved not to be too clever, although I'm sure someone cleverer could get it to work well at scale - I just didn't have the time.
Plenty of relay computers do use semiconductor memory and don't have a big problem with bus conflicts, since the memory subsystem couples to the bus at one point only. But if you use multiple cards with switches/latching relays/cores, kilobits of memory will take dozens of large cards, each with bus drivers. Many failure modes will result in bus conflicts - say you get a stuck contact in the address decoder - and thus bus conflicts must be either inherently safe, or at least well protected against if they can't be made safe (hence the fuses).
|
H: Explanation for drift current
My question is why the concept of drift current, or the movement of minority charges, is necessary when the pn-junction is formed. Drift current was explained to me in more or less this way:
As the holes diffuse from the p to the n side due to the concentration gradient, they leave behind ionised acceptors
which are immobile. Similarly, when electrons diffuse from the n to the p side they leave ionised donors behind, which too are immobile. As this continues, the depletion region forms, and due to the immobile charges an electric field sets in in the depletion region; this field leads to motion of charge carriers, i.e. the flow of electrons from p to n and holes from n to p. This flow is regarded as drift current.
So let us assume there are two forces: one that leads to diffusion of charge carriers, due to the concentration gradient, and another that arises due to the generated field. Why doesn't the diffusion simply stop when these forces become equal? Why do we need to introduce drift current?
Where did I go wrong, or is there something I am missing? I've read answers to similar questions saying that diffusion dominates, but I couldn't relate that to my question. Does this play a role somehow?
AI: The world of electrons and holes is a very wiggly and jiggly world. Any thermal energy causes them to vibrate, impact each other and the nuclei, and move pretty erratically (this is called Brownian motion).
It doesn't really make sense at this point to talk about "single electron" movement, and instead we devised the models of drift and diffusion that describe their average behavior:
Diffusion describes that on average electrons and holes will tend to spread out.
Drift describes that on average electrons and holes will favor a direction when an electric field is applied (probably not all of them, all the time, but most of them most of the time).
Diffusion does not mean that two electrons can't stay together, just that it is very unlikely. Similarly, drift does not mean that electrons can't stay where they are momentarily, just that on average that is very unlikely.
So when you write, talk or read about drift and diffusion, you should always keep in the back of your head that it is just an averaged behavior of a lot of holes and electrons. If, on average, the electron and hole concentration stays constant throughout the device (but remember, individual electrons and holes will still be moving around erratically), then this is called thermal equilibrium. In a PN junction, this happens when the averaged effect of drift and the averaged effect of diffusion balance out.
So try not to think about drift and diffusion as a nice single-minded stream of electrons or holes all moving in a single direction, and everything will likely make much more sense.
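As a side note, the point where the averaged drift and diffusion effects balance in a pn junction can be quantified by the built-in potential. Here is a sketch; the doping levels and intrinsic carrier concentration are illustrative assumptions for silicon at room temperature:

```python
import math

K_T_OVER_Q = 0.02585  # thermal voltage kT/q at ~300 K, in volts
N_I = 1.0e10          # assumed intrinsic carrier concentration of Si, cm^-3

def built_in_potential(n_a, n_d):
    """V_bi = (kT/q) * ln(Na*Nd / ni^2): the junction potential at which
    the averaged drift and diffusion currents cancel in equilibrium."""
    return K_T_OVER_Q * math.log(n_a * n_d / N_I**2)

print(f"{built_in_potential(1e16, 1e16):.3f} V")  # ~0.714 V
```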
|
H: Basic Questions About a Simple Circuit (GPIO-Controlled Relay on RaspberryPi)
I want to control an 8-port Sainsmart relay from a Raspberry Pi (via the GPIO pins) and I'm trying to understand a couple of things about the suggested circuit for integrating the two. Here is the circuit:
As the explanatory text says, the relay is "active low" but, for a number of reasons, I want to control it from the RPi as an "active high" device (i.e. raise a GPIO pin to high to activate the relay, and lower it to zero to deactivate it).
I have it working and I sort of understand what it does, but I have a couple of questions:
How is the sizing of the resistor(s) determined?
What is the purpose of the 10K resistor?
This design implies to me that the RPi and the relay must share a common ground. Is this true? Does it mean they must also share the same power supply?
AI: As the explanatory text says, the relay is "active low" but, for a number of reasons, I want to control it from the RPi as an "active high" device (i.e. raise a GPIO pin to high to activate the relay, and lower it to zero to deactivate it).
The accompanying text is explaining to you that the GPIO high (3.3 V) will turn on the transistor connecting the collector to the emitter thus pulling the relay low. This gives your desired behaviour.
How is the sizing of the resistor(s) determined?
The transistor base-emitter junction behaves rather like a diode and will have a 0.7 V drop across it when current is flowing through it. That leaves 3.3 - 0.7 = 2.6 V across the base resistor.
The 2.2 kΩ base resistor is chosen to give enough base current to turn the transistor on in "saturation". i.e. The collector-emitter voltage drop has gone as low as it can. In the example the base current will be given by \$ I = \frac V R = \frac {2.6}{2k2} = 1.2 \ \text {mA} \$. If the transistor has a high current gain this may be enough.
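The arithmetic above can be sketched as a quick check; the relay coil current and the forced-beta margin below are rule-of-thumb assumptions, not values from this answer:

```python
def base_current_ma(v_gpio=3.3, v_be=0.7, r_base=2200):
    """I_B = (V_gpio - V_be) / R_base, per the calculation above."""
    return (v_gpio - v_be) / r_base * 1e3

ib = base_current_ma()
print(f"base current: {ib:.2f} mA")  # ~1.18 mA, rounded to 1.2 mA above

# Hard saturation is often designed for a forced beta of ~10; with an
# assumed ~70 mA relay coil that would need about 7 mA of base drive:
coil_ma = 70
if ib >= coil_ma / 10:
    print("enough drive for hard saturation")
else:
    print("relies on a high-gain transistor, as the answer notes")
```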
What is the purpose of the 10K resistor?
It helps keep the transistor off when your microcontroller is powering up and the GPIO isn't in output mode because the program hasn't booted yet.
This design implies to me that the RPi and the relay must share a common ground. Is this true?
Correct.
Does it mean they must also share the same power supply?
No. The relays can and often are powered from a higher voltage supply. This is one of the reasons the open-collector design is so popular as it provides a means of doing voltage translation between circuits.
|
H: Why is the output of a high-pass filter not 0 when the input is 0?
I'm trying to design a 5th order Butterworth high-pass filter built on the Sallen-key topology with a cut-off frequency at 100Hz. For some reason, the output has a DC offset of around 5mV when the frequency of the input is less than the cut-off frequency even when the input is grounded.
I am using the op-amp LT1498 in LTspice, and all the capacitor values through all three stages (first order, second order, second order) are 100 nF. I'm guessing that it's from the output of the op-amp, but I'm not really sure.
Link to LT1498 Datasheet: https://www.analog.com/media/en/technical-documentation/data-sheets/14989fg.pdf
The output is as follows from a 10Hz input:
Any help would be appreciated!
AI: This could arise as the sum of the input offset voltages ... but for 3 opamps, that would only come to 3.9mV worst case, so that isn't it, especially since it's unlikely that the SPICE model has the worst case offset voltage.
But the opamp has a pretty large input bias current ... 0.65 uA.
Have you balanced the impedances on both inputs to the opamp?
If you fed one input from 10K source impedance and connected the other input to GND, (or fed it from a 0 ohm source) that input bias current would develop 6.5mV on one input, and 0 on the other.
The classic solution is to feed the other input from the same impedance, 10kilohms, so that both inputs are offset by the same voltage, which then cancels out.
simulate this circuit – Schematic created using CircuitLab
Easy to test this in your simulation.
(There is typically some mismatch between the bias currents on each input : this is called input offset current, and it's 10% or 65 nA for this opamp. So don't expect perfect cancellation, but you should be able to get much better than 5 mV)
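The bias-current arithmetic can be sketched with the datasheet numbers quoted above:

```python
def bias_offset_mv(i_na, r_ohm):
    """Voltage developed by a bias (or offset) current flowing through
    a source impedance, expressed in millivolts."""
    return i_na * 1e-9 * r_ohm * 1e3

print(f"{bias_offset_mv(650, 10_000):.2f} mV")  # 6.50 mV: 650 nA into an unbalanced 10 k
print(f"{bias_offset_mv(65, 10_000):.2f} mV")   # 0.65 mV residual from the 65 nA offset current
```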
The other approach, of course, (if the filter is driving a suitably high impedance load) is to implement the 1st order section as a passive RC filter on the output of a 4th order active filter. Then the C means the internal DC offsets don't matter...
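For reference, the passive first-order section can be sized with the single-pole formula. This sketch assumes the 100 nF value is reused; conveniently, in a Butterworth decomposition the first-order section's pole sits at the overall cutoff frequency:

```python
import math

def highpass_r(f_c_hz, c_farad):
    """R = 1 / (2*pi*f_c*C) for a first-order RC high-pass corner."""
    return 1 / (2 * math.pi * f_c_hz * c_farad)

print(f"{highpass_r(100, 100e-9):.0f} ohm")  # 15915 ohm, i.e. a standard 15.8 k or 16 k part
```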
|
H: Help identifying MOSFET manufacturer
I have a MOSFET that I'm trying to identify the manufacturer of. I know it's an IRF520N, but it doesn't behave the same as an IOR IRF520N, so I'd like to find the actual manufacturer's data sheet.
The unknown one is the one on the left.
AI: The manufacturer of the left one is VBsemi Electronics. Link to the datasheet: https://www.alldatasheet.es/datasheet-pdf/pdf/1223698/VBSEMI/IRF520NPBF.html
|
H: OpAmp/Comparator for AC current detector circuit
I am trying to complete a schematic for a current detector. This is an addition to How to build AC current sensor circuit for ESP-01 GPIO.
By googling and researching I came with following schematic which I try to simulate in LTSpice:
For simplicity I used V1, which is a voltage representation of the current that will flow through the current transformer, i.e. the burden resistor.
The LM358 should work as a comparator, while D1/C2/R4 are used as a peak detector to maintain a high enough voltage for the MOSFET to turn on when current flow is present.
But as you can see in the transient response above, the LM358 obviously is not a good choice for this task.
The output from the LM358 shows a significant voltage drop: it tops out at around 1.7-1.8 V, which is too low for the MOSFET to conduct reliably.
As far as I understand from explanations on the net, the LM358 is not a rail-to-rail op-amp; to my surprise it really does behave according to this model and doesn't fit this purpose.
Seems like LM324 behaves in a same way.
I try to use few comparators as an alternative, for example LM393.
One problem I found with comparators is that some of them require a negative supply V- along with V+; they didn't work with V- simply grounded.
A second problem is that some of them require an external pull-up resistor at the output to work properly, and I can't see how to make that work for this circuit.
I finally found one comparator that fits this purpose - the TLV3501 - but my local electronics shop doesn't stock it. The LT1720, which also worked in simulation, is likewise unavailable to me, and some of these parts are expensive for a simple design like this.
Do I need to change the circuit, or get a better op-amp? Does it really have to be rail-to-rail?
The input voltage is a 50 Hz sine with amplitude no lower than 100 mV. The only restriction for this component that I can see is that it should be rail-to-rail, i.e. output voltage close to V+.
AI: Try biasing the DC operating point to the center of the comparator's range with a scheme like this, and use a rail-to-rail op-amp. Make sure the voltage from C1 can't exceed the supply voltage of the op-amp.
simulate this circuit – Schematic created using CircuitLab
|
H: Trying to understand how this PNP works
I am new to electronics and still learning so please don’t be too hard on me
Here is the circuit I am trying to understand:
To control to top PNP transistor, there is an optocoupler and 2 resistors.
When the opto is off, the base of the PNP is in series with the 10k resistor. I guess that the voltage on the base would be lower than the source voltage as the resistor will have a voltage drop.
If I understand correctly, the PNP will be on if the base has 0.7V less than the voltage on the emitter.
When the opto is on, then the base is in a voltage divider (in between the 10k and the 1k resistors) so then again, the voltage on the base should be less than the one on the emitter.
How can this PNP be turned off?
AI: When the opto is off, the base of the PNP is in series with the 10k resistor. I guess that the voltage on the base would be lower then the source voltage as the resistor will have a voltage drop.
The idea is that the resistor pulls the base up to the emitter's voltage to prevent it turning on due to leakage current. The 10k "bleeds away" any leakage current.
If I understand correctly, the PNP will be on if the base is 0.7 V less than the voltage on the emitter.
Correct.
But when the opto is on, the base is in a voltage divider (between the 10k and the 1k resistors), so then again the voltage on the base should be less than the one on the emitter.
That's correct and the base-emitter junction voltage will be about 0.7 V. (It behaves like a diode.)
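As a rough illustration of how hard the divider turns the transistor on: the supply voltage and the opto saturation voltage below are assumed values (the schematic isn't reproduced here); only the 1 k base resistor comes from the question.

```python
# Base drive with the opto on; Vcc and V_opto_sat are illustrative assumptions.
Vcc = 12.0        # assumed supply voltage
V_be = 0.7        # emitter-base drop of the PNP
V_opto_sat = 0.2  # assumed saturation voltage of the opto transistor
R_base = 1e3      # the 1 k resistor from the question

I_base = (Vcc - V_be - V_opto_sat) / R_base
print(I_base * 1e3)   # ~11 mA of base drive, so the PNP saturates hard
```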
How can this PNP be turned off?
Easy. Turn off the opto-isolator. Now no current will flow through the 1k resistor, the 10k resistor will pull the base up to the emitter's voltage, the base will not be forward biased and the transistor will turn off.
Could you try explaining « the resistor pulls the base up to the emitter's voltage » in other words?
Due to thermal effects the transistor will leak a little current between the emitter and the collector. This means it is not fully "off". By connecting the base to the emitter we can stop this - but then we would need another switch in that connection to allow the transistor to work normally. Instead we connect the base to the emitter with a resistor. The resistor is a high enough value not to affect the rest of the circuit (your 1k is much stronger due to its low resistance) but it's low enough that any charge on the base is bled back to the positive supply and the transistor is turned fully off.
This might be easier to understand for the NPN case because we are more used to referencing everything from the ground rail.
For a specification, have a look at page 2 of the 2N2222 datasheet. You'll see the current is tiny, even at 50 V VCB.
|
H: TMC2130 does not work properly
Great title, I know.
I have a driver board connected to a really small stepper motor. When I run it in step / dir mode it works.
But as soon as I connect either SCLK or MOSI, the motor turns at double the rate; when the TMC's CS is low it turns at full speed, and when CS is high the motor acts up completely: it turns into the end stops, changes direction half way...
The serial output is always 1; I guess the chip simply does not send any data. With a pull-up I get all ones, and when removing it I get either 0 or random nonsense.
I can't for the life of me figure out anything. It simply behaves completely weird and I have no idea whatsoever, why that thing simply does not work with the most basic "hello world" of motion controllers.
Things I don't get: why does the motor turn faster when something is connected? Why doesn't the chip output any data at all? Why does it turn back and forth wildly?
My connections are all correct, I don't have open signal lines, my connections are not flaky and the power supply is perfectly fine.
I run the show on a Teensy 2.0++ with Arduino, since I want to understand the driver before working with a real language. The Library is TMC2130Stepper
TMC2130 Datasheet
SilentStepStick Datasheet
There is no schematic. It's simply the step interface, power supply and the ISP interface...
Arduino Code
#include <TMC2130Stepper.h>
#define PIN_LED PIN_D6
#define PIN_STP PIN_F0
#define PIN_DIR PIN_F1
#define PIN_EN PIN_B5
#define PIN_CS PIN_B4
TMC2130Stepper stepper = TMC2130Stepper(PIN_EN, PIN_DIR, PIN_STP, PIN_CS);
bool dir = true;
void setup() {
pinMode(PIN_STP, OUTPUT);
pinMode(PIN_DIR, OUTPUT);
pinMode(PIN_EN, OUTPUT);
pinMode(PIN_CS, OUTPUT);
pinMode(PIN_LED, OUTPUT); // PIN_LED is written later in setup(), so it must be an output
pinMode(MISO, INPUT);
Serial.begin(115200);
while(!Serial);
stepper.begin();
stepper.SilentStepStick2130(20);
stepper.stealthChop(1);
digitalWrite(PIN_LED, HIGH);
}
void loop() {
digitalWrite(PIN_STP, HIGH);
delayMicroseconds(200);
digitalWrite(PIN_STP, LOW);
delayMicroseconds(200);
uint32_t ms = millis();
static uint32_t last_time = 0;
if ((ms - last_time) > 2000) {
if (dir) {
Serial.println("Dir -> 0");
//stepper.shaft_dir(0);
digitalWrite(PIN_DIR, dir);
} else {
Serial.println("Dir -> 1");
//stepper.shaft_dir(1);
digitalWrite(PIN_DIR, dir);
}
dir = !dir;
Serial.println(stepper.GCONF(), BIN);
last_time = ms;
}
}
I'm not the biggest fan of imgur, but it's not too important, just some additional view.
Without SPI, works great
With SPI, obviously does not work
AI: When things don't make sense, then it's time to RTFM. According to the TMC2130 datasheet, the SPI pins (and others) double as configuration inputs.
Section 24 of the datasheet explains.
To select standalone mode you ground the SPI_MODE pin; for SPI operation it must be tied high.
|
H: Deducing the equation of the circuit which is composed of batteries in parallel and the external resistor
We assume that the \$n\$ batteries have been connected in parallel, and the external resistor connecting the endpoints(terminals) of the parallel part.
\$n:=\$ number of the batteries.
\$r_i:=\$ith internal resistance of the ith battery.
\$E_i:=\$ith EMF of the ith battery.
\$I_i:=\$ith current which flows between the endpoints of the ith battery.
\$I:=\sum_{i}{I_i}\$ ;(The current which flows through the external resistor).
\$R:=\$external resitance.
My textbooks says that below equation is true.
\$R*I=r_i*I_i+E_i\$
I thought that the above equation is wrong and my below equation is true as applying Kirchhoff's law(potential drops).
\$R*I+r_i*I_i=E_i\$
Can anyone tell me what I have been missing or mistaking so that I can resolve the problem on my own.
AI: What you're missing is the sign convention.
You have written the ith current down as simply "which flows between the endpoints of the ith battery". Sloppy thinking like this is bound to get you into trouble.
simulate this circuit – Schematic created using CircuitLab
If we define positive current as that flowing down the page, then we get your textbook's equation.
If we define positive current as that flowing from the positive terminal of each battery (admittedly the more natural way to view it), then we get your equation.
Which is correct? Both are, once you've drawn a diagram or otherwise specified your current convention for each. The textbook's convention has the advantage that it's more self consistent. If you draw a normally styled schematic, then all current arrows point down.
People starting out in this type of analysis often get stressed over the current direction. A common question is 'but what if I choose the wrong direction for the current in this branch when I'm placing my +ve current definition arrow?' The answer is it doesn't matter. When you do the sums, the current will come out with whatever sign is correct for your arrow. If it's negative, then it's flowing the other way to your arrow.
It's for this reason the textbook uses the convention it does. Label the currents consistently, so that you don't have to think when placing the arrows. Consistency means that you're less likely to make an error when creating the Kirchhoff loop equations.
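A quick numerical check makes the point concrete. The values below are made-up examples (not from the textbook): solving the parallel network shows the relation holds branch by branch once a consistent convention is fixed, and one branch current simply comes out negative.

```python
# Two batteries in parallel driving an external resistor; example values only.
E = [1.5, 1.4]   # EMFs (V)
r = [0.5, 0.7]   # internal resistances (ohm)
R = 10.0         # external resistance (ohm)

# Node voltage at the shared terminals: V = R*I, with branch currents
# I_i = (E_i - V)/r_i taken as flowing out of each positive terminal.
V = sum(e / ri for e, ri in zip(E, r)) / (1.0 / R + sum(1.0 / ri for ri in r))
I_branch = [(e - V) / ri for e, ri in zip(E, r)]
I_total = sum(I_branch)

# Each branch satisfies E_i = R*I + r_i*I_i (your form of the equation).
for e, ri, ii in zip(E, r, I_branch):
    assert abs(e - (R * I_total + ri * ii)) < 1e-12
print(I_branch)  # note the weaker battery's current comes out negative
```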
|
H: How do I find the -3dB point on a given graph
I just performed an AC Simulation on QUCS and was asked to find values of gain and phase at low freq. and high freq and at the -3dB frequency point with the roll-off slope. The circuit is an RC circuit as shown below (image was taken from QUCS itself)
My questions are:
When I'm meant to mark low freq. and high freq., do I just mark 1e03 and 1e09? (plots shown below)
What point is the -3dB freq?
How do I get the roll-off slope - do I just calculate the gradient at the -3dB point or do I do something else?
TIA
AI: I just performed an AC Simulation on QUCS and was asked to find values
of gain and phase at low freq. and high freq and at the -3dB frequency
point with the roll-off slope.
I think you can see that at DC the gain is unity and the phase angle is 0 °.
At high frequency, it's really asking you what the gain is at infinite frequency and that has to be zero but, the phase is clearly -90 °.
When I'm meant to mark low freq. and high freq., do I just mark 1e03
and 1e09?
You shouldn't limit yourself to the frequency range the Bode plot happens to cover. In other words, think outside the box a little.
What point is the -3dB freq?
By definition, the -3dB frequency is when the output power is half the input power.
In decibels that's \$10\log_{10}(0.5)\$ = -3.0103 dB or "-3 dB" for shorthand.
And, half power is when the output voltage has dropped to 0.707107 compared to the input voltage. This is because \$\color{red}{20}\log_{10}(0.707107)\$ = -3.0103 dB.
Note I made the "20" in red to signify the difference when calculating voltage decibels.
And, the "-3 dB" point happens when R = \$X_C\$. Hence: -
$$R = \dfrac{1}{2\pi f C} \hspace{1cm}\text{or}\hspace{1cm} f = \dfrac{1}{2\pi R C}$$
How do I get the roll-off slope - do I just calculate the gradient at
the -3dB point or do I do something else?
The roll-off slope is 20 dB/decade for a single order low pass filter. What does this mean you might ask? It basically means that at a frequency well above the -3 dB point, if frequency rises by (say) ten times, the output amplitude drops by 10 times. Dropping by ten times is a reduction of 20 dB hence, the slope is 20 dB per decade.
You could also say that if the frequency doubles then the amplitude halves and this would imply 6.0205 dB per octave (or 6 dB per octave for shorthand). In other words, we say that this is approximately true: -
6 dB/octave = 20 dB/decade.
The frequency of 1.5915 MHz comes from the R and C used in the question and the formula higher up in this answer.
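Putting the numbers above together: assuming R = 100 Ω and C = 1 nF (values consistent with the 1.5915 MHz figure; the actual ones are in the question's schematic), the corner frequency and the gain there fall out directly.

```python
import math

# Assumed component values consistent with the 1.5915 MHz figure above.
R = 100.0   # ohms (assumed)
C = 1e-9    # farads (assumed)

f_c = 1.0 / (2.0 * math.pi * R * C)                 # -3 dB corner frequency
gain_db = 20.0 * math.log10(1.0 / math.sqrt(2.0))   # gain at that corner

print(f_c)      # ~1.5915e6 Hz, i.e. 1.5915 MHz
print(gain_db)  # ~-3.0103 dB
```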
|
H: Interpolation method for downsampling smartphone sensor data
I have a dataset containing lot of sensor measurements on Android smartphones, namely accelerometer, gyroscope and light sensor. The light sensor was sampled at a rate of 4 Hz and the accelerometer and gyroscopes were sampled with up to 500 Hz (depending on the device).
Now, I would like to downsample the accelerometer and gyroscope recording to 100 Hz and upsample the light recordings to 100 Hz.
Is cubic interpolation for upsampling the accelerometer and gyroscope ok or is it better to use linear interpolation? And what interpolation is typically used for a light sensor?
AI: There are several up and down-sampling techniques available. The choice between them can be made on how the data was originally sampled and what you want to do with the data.
As you are clearly on the learning curve for this sort of thing, it's worth implementing a simple linear interpolator for upsampling, a simple box-car (add N together) filter for down-sampling, and getting your whole application going end-to-end. Any time I've done a full integration experiment even with known poor components, I've learnt something I didn't expect to learn. It might even be good enough for what you want. This will get you some experience, and if the performance is falling short of what you want, some incentive and vocabulary to investigate the better methods of doing it. Cubic interpolation may appear attractive on first appearance, but it's only a bit better than linear, and it's a lot worse than the better methods.
If the data was properly band-limited when it came in, and you want to process your resulting data to an appreciable fraction of the Nyquist bandwidth, then band-limited interpolation or down-sampling is both necessary (for your use case) and worthwhile (as the data is good). This is usually done with a properly designed FIR filter, specified in the frequency domain. You design the filter based on your requirements for the data. Look up Parks-McClellan or Remez design techniques.
If the data was not properly bandlimited, then you are not going to gain all that much from careful interpolation when upsampling, so you might as well use linear interpolation, or cubic if the absence of sharp changes of slope appeals to you.
When downsampling poor data, you could add together N input samples for every output sample. This is also known as a 'box-car' filter, from the shape of the impulse response. Even if N is large, it can still be done with two adds per output sample if you implement it as an IIR filter. It is a special case (the first order case) of the so-called Hogenauer or CIC (Cascaded Integrator-Comb) filter, which can efficiently do higher order filtering. This filter is commonly used as the first stage of the large rate downsampler used in delta sigma ADCs, as it is very efficient, and has 'good enough' frequency response for the first stage. If you are going to use your data only to 10% or so of Nyquist and can tolerate some response rolloff, then this filter can do the whole job.
The CIC structure can also be used to do efficient linear and higher order interpolation if you take great care of the initialisation conditions.
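A minimal sketch of the two "get going first" resamplers suggested above: a box-car (average of N) downsampler and a linear-interpolation upsampler. Plain Python, no windowing or band-limiting, purely to get the end-to-end pipeline running.

```python
def boxcar_downsample(x, n):
    """Average every n consecutive samples (any leftover tail is dropped)."""
    usable = len(x) - len(x) % n
    return [sum(x[i:i + n]) / n for i in range(0, usable, n)]

def linear_upsample(x, factor):
    """Insert factor - 1 linearly interpolated points between samples."""
    out = []
    for a, b in zip(x, x[1:]):
        out.extend(a + (b - a) * k / factor for k in range(factor))
    out.append(x[-1])   # keep the final original sample
    return out

print(boxcar_downsample([0, 1, 2, 3, 4, 5], 2))  # -> [0.5, 2.5, 4.5]
print(linear_upsample([0, 2, 4], 2))             # -> [0.0, 1.0, 2.0, 3.0, 4]
```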
|
H: Order of operations and rounding for microcontrollers
I was working on a project where I read a value from a 16-bit ADC and scaled it to obtain the reading of a sensor. For example:
uint16_t reading = sampleSensor();
uint16_t temperature = reading/0xFFFF*2.5*1000*2;
Where the ADC has a full scale value of 2.5 volts with output code 0xFFFF and my sensor has a response of 2 °C/mV output.
Obviously when this code runs, it doesn't work. The value of temperature jumps from 0 to 5000 when reading increases from 65534 to 65535.
So I rewrote it and evaluated the scaling expression to a single multiplication:
uint16_t reading = sampleSensor();
uint16_t temperature = reading*0.076295;
This works as expected and temperature increases by 1 °C when reading has increased by 14 counts. I like writing out the full expression so I can keep track of the ADC bit count, VFS, and sensor gain, and I assumed the compiler would evaluate the constant part of the expression before performing the operation on the reading variable.
Is this an issue with a compiler (GNU ARM v7.2.1) setting, or do I have a fundamental misunderstanding?
AI: This is not a compiler issue: doing the division first here is the legal behaviour, as division and multiplication have equal precedence and are evaluated left-to-right. (Also, when in doubt: use parentheses; there's no penalty.)
You are working with integers, so reading / 0xFFFF will always evaluate to 0 if reading is a uint16_t, unless reading == 0xFFFF.
If you want to use integers only, force the multiplications to be done first by using something like (reading * 10000) / 0xFFFF and make sure both the intermediate result (reading * 10000) and the result fit in the available bits (use uint32_t for such calculations).
Note that on MCUs without an FPU floating-point arithmetic is very slow and best avoided.
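The difference between the two evaluation orders can be sketched by mimicking C's unsigned integer division with Python's floor division (note that in C the intermediate product needs a uint32_t, as said above):

```python
# Mimic C's unsigned integer arithmetic with floor division.
reading = 32768                 # a mid-scale 16-bit ADC code

# Left-to-right, as C evaluates reading/0xFFFF*2.5*1000*2 with integers:
# the first division truncates to 0 for every reading below 0xFFFF.
broken = reading // 0xFFFF

# Multiply first (2.5 V * 1000 mV/V * 2 degC/mV = 5000, kept as an integer).
fixed = reading * 5000 // 0xFFFF

print(broken, fixed)   # -> 0 2500
```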
|
H: External ADC with Arduino
I am studying Arduino ADC and I have not much background in micro-controllers. Excuse me if this question is too basic.
I found that people use an external ADC for better resolution compared to the built-in ADC of the Arduino. One thing I could not understand was the bit resolution. I saw people using the ADS1115 (16-bit) with an Arduino (10-bit). How does the Arduino handle such resolution when the Arduino Mega's own ADC is only 10-bit? What would be the point of using an external ADC with a resolution higher than that of the Arduino Mega's ADC?
AI: An external ADC is its own "machine" that runs separately from the Arduino and after it measures the analog signal, it sends that measurement data digitally to the Arduino, which has nothing to do with Arduino's 10 bit ADC. The Arduino doesn't measure anything in this case; It just receives data.
External ADCs can be much faster, much higher resolution, and/or lower noise. They can also have all sorts of exotic bells and whistles (higher voltage, isolated, bipolar voltage, simultaneous sampling, etc). You can also have two ADCs with same speed and resolution but different conversion methods which have different benefits.
The ADC you find on an MCU is usually the cheapest, simplest ADC the designers could get away with.
|
H: Differential ADC reference plane - ground or midpoint?
The Microchip (ex-Atmel) ATSAMD10 has a 12-bit SAR ADC with the option to use differential input. I'm generating a virtual ground (\$V_\mathrm{mid}\$) at the midpoint (512 mV) of the reference voltage (1.024 V), with the input signal centered around that.
My question is regarding the reference plane for the analog section preceding the ADC. There is a simple 2nd order low-pass Sallen-Key for anti-aliasing, and a placeholder for another RC filter after the Sallen-Key. Should the plane beneath the analog section be ground, or \$V_\mathrm{mid}\$? My feeling is that it should be \$V_\mathrm{mid}\$ as the filters are all connected to that reference voltage. The only ground connections in that region are for the opamp power.
This is for a two layer board, currently with a ground pour on the bottom layer (green) with some \$V_\mathrm{mid}\$ traces through for routing. All the analog inputs are on the bottom of the microcontroller at the top left.
The system is not particularly demanding, but (notional, given the ADC's performance) LSB performance would be nice.
Source impedance: 18 \$\Omega\$
\$V_\mathrm{LSB}\$: 250 \$\mu V\$
\$f_c\$: 1.25 kHz
\$f_s\$: 4.096 kHz
AI: It (almost) doesn't matter what voltage the plane is at, as it will only be capacitively coupled.
The important thing is to use the lowest impedance domain so it can rapidly absorb/source charge. This is usually your "ground".
12 bits, at your voltage and frequency...use ground and you should be good to go.
|
H: Inductance of air core coil decreasing due to ferromagnetic object put in it?
I have built a Colpitts oscillator, which you can see in the schematic below. Note that the values for R1 and L2 are not exact: I use a self-wound air-core coil for L2, so I can only approximate its inductance by calculation, and for R1 I use a potentiometer to try multiple resistance values. However, the value given is in the range of values I tried.
The circuit outputs approximately 3V at 700kHz.
When I put an iron core or any other ferromagnetic object into the originally air core coil L2, the output frequency will increase and the voltage will decrease.
I thought that the frequency would increase since it is given by \$f = \frac{1}{2 \pi \sqrt{LC}}\$ and because the inductance should increase due to the iron core.
Could it be the case that the inductance decreases? If this was true why on Earth would this happen?
The same thinking applies to the voltage. According to the formula of voltage in LC circuits \$V(t) = -\omega_0 L I_0 cos(\omega_0 t + \phi)\$, the voltage should increase, but it decreases.
This all implies that the inductance is actually reduced by the iron core.
Can this be the case or is there some other explanation for this behavior?
AI: When I put an iron core or any other ferromagnetic object into the
originally air core coil L2, the output frequency will increase and
the voltage will decrease.
If the inserted iron or ferromagnetic object is not laminated or made from ferrite (which doesn't conduct electricity) then it will act like a shorted turn due to eddy currents and the net inductance of coil L2 will fall. This will cause frequency to rise.
This is why power transformers use laminated steel in their core construction and it's why ferrite rod antennas are made from ferrite material.
It's also why metal detectors can find gold and silver (well, any conducting material) - magnetic induction creates small voltages in the "target" metal and, these small voltages circulate eddy currents. The eddy currents create a change to the magnetic field and this is detectable.
|
H: Maximum sampling frequency of Arduino (internal vs external ADC)
I am studying the Arduino ADC.
I learned that the maximum sampling frequency of the Arduino ADC (e.g. on the Due) is 1 Msample per second.
As the Arduino's ADC has only 12-bit resolution, I must use an external 16-bit ADC when I need more.
What is the maximum rate at which the Arduino can receive samples from the external ADC? How can I find that?
For the Arduino's own ADC it was 1 Msample/s, but I don't know the limit when the samples come from an external ADC. My knowledge is basic, so I am not able to understand the datasheet.
AI: The first thing to determine the data transfer speed is to find out what digital interface your external ADC actually uses. This is most often SPI or parallel interface.
For SPI, the speed is determined by the speed of the SPI interface on your microcontroller and ADC, whichever is slower.
For parallel, it is determined by the GPIO speed (or parallel interface if you have one) on your microcontroller.
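As a back-of-the-envelope bound for the SPI case: each conversion result costs at least its word length in clock cycles, plus some framing overhead. All numbers below are illustrative assumptions, not from any particular datasheet.

```python
# Rough upper bound on sample rate over SPI; illustrative numbers only.
spi_clock_hz = 8e6      # assumed SPI clock
bits_per_sample = 16    # one 16-bit conversion result
overhead_bits = 8       # assumed CS setup / command framing per sample

max_sample_rate = spi_clock_hz / (bits_per_sample + overhead_bits)
print(max_sample_rate)  # ~333 kS/s upper bound, before software overhead
```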
Here's a tip: Do NOT use external ADCs until you are more familiar with the microcontroller and ADCs in general. External ADCs are often a LOT of work, which is why almost every microcontroller comes with an integrated ADC and why many people use the "crappy" integrated ADC on the microcontroller. You can't achieve true 16-bit resolution anyway without some serious custom PCB design; too much noise, inaccurate voltage references, etc.
|
H: Fundamentals: The role of battery in a circuit
As I understood, the analogy of a charge moving through a circuit is similar to water flowing from a high altitude to a low altitude (like waterfalls).
But in a waterfall there are two requirements if we want the same water to keep repeating the cycle: we need a gravitational field, which is provided by the earth, and an energy supplier that does work on the water to move it from low altitude (low gravitational potential) to high altitude (high gravitational potential).
In circuits, I do understand that the battery does the part of moving a positive charge from a low potential energy point (the negative terminal) to the high one (the positive terminal) so it can -again- move to the negative terminal naturally.
But for that movement to occur (moving through the external circuit) we need an E field (in the waterfall analogy, we need the earth to establish its gravitational field). So should the battery both establish an E field and give energy to charges in the wires so they can "re-climb" to higher potential energy points relative to that field?
AI: Yes, the charge carriers (electrons) are „elevated“ inside the battery to a higher potential. From there they can flow back through the wire to the lower potential where they came from originally.
|
H: capacitor unit is not specified
I am currently working on a circuit but I can't figure out the non-polarized capacitor values, since the unit (order of magnitude) is not clearly specified for some of them.
Some of them are: 100, 500, 180, 5.6, 56, 220.
What should I use; pF, nF, uF, mF, F?
AI: Some of them are: 100, 500, 180, 5.6, 56, 220. What should i use; pF,
nF, uF, mF, F?
All the ones that I can see should be in pico farads: -
|
H: Speed regulation for universal motor
I'm building a grinder, which consists of a shaft on which the grinding wheels sit, and the shaft is driven by a universal motor out of a washing machine via a belt and a pulley.
I'm using an off-the-shelf SCR speed controller. What I'm finding on the first test run is that the motor slows down a lot when I, e.g., try to sharpen a chisel on the grinding wheel. I can turn the knob on the speed controller up to get it up to the speed I want while sharpening, but then as soon as I take the chisel away, the motor starts to spin up and I have to turn the speed controller down to stop it spinning out of control.
I realise that this is probably the expected behaviour for such a motor, and you may very well tell me that I'm using the wrong motor for the job, and that a motor such as those in my bandsaw, sander, and drill press is what I need (I don't know what that type of motor is called). But this is what I have, and I like using things that I have already, unless it really turns out to be too complicated.
So, is there a simple (or less simple, but not ridiculously complicated) way to make the motor run at a more or less constant speed regardless of the load? I'm open to mechanical or electronic solutions (the motor has a tacho coil), either off-the-shelf or DIY.
Many thanks!
AI: You can still use an SCR control; this would make your universal motor run as if DC powered, but conducting for only one half-period. The second half-period is used to measure the back-EMF from the motor and to set the phase angle.
Now, probably you already have such a control (post the schematics). If it doesn't work correctly it is possibly due to an incorrect setting (gain), or maybe yours is just too simple.
Another possibility is to add a tacho to the machine and then use a phase angle speed controller like TDA1085.
|
H: LED Status Readback Circuit
Sorry for such a foolish question, but what is the behaviour of the circuit below:
LED_ON is a 3.3 V logic control signal; LED_STATUS is an output signal to an MCU pin (GPIO).
When LED_ON is logic high and Q1 is open LED1 is ON and LED_STATUS = '0', but what will happen when LED_ON = '0'? Q2 will be closed, LED1 will be OFF, but what will be the state of LED_STATUS signal? Will it be +5V (logic high) or floating and why?
AI: If LED_ON is Low, Q1 will be off (not conducting), and LED_STATUS will be a little over 3 volts. With no current, R2 will pull the LED anode up to +5V. Due to the typical 1.8 volt forward voltage of the LED, LED_STATUS will be pulled above 3 volts - exact value will depend on whatever is connected to LED_STATUS.
By the way, your use of "open" and "closed" for the transistor is confusing, as an open switch does not conduct, and a closed switch does conduct - you are using "open" to mean "conducting"
|
H: Output compares triggering function not work on STM32?
I've been trying for 2 days now to get my Output Compare to trigger a function.
I've been reading up on it, and there is a callback function that, from what I understand, is triggered when an output compare is triggered:
HAL_TIM_OC_DelayElapsedCallback()
void HAL_TIM_OC_DelayElapsedCallback(TIM_HandleTypeDef* htim){
if(htim->Channel == HAL_TIM_ACTIVE_CHANNEL_2){
HAL_GPIO_WritePin(LD1_GPIO_Port, LD1_Pin, GPIO_PIN_SET);
}
if(htim->Channel == HAL_TIM_ACTIVE_CHANNEL_4){
HAL_GPIO_WritePin(LD1_GPIO_Port, LD1_Pin, GPIO_PIN_SET);
}
}
When I put a breakpoint on this function, I noticed that my program never entered it, even though I have 2 output compares that trigger every second.
I don't know where to look anymore.
So I have my timer3 with channels 2 and 4 in output compare, in toggle on match mode.
My timer has a period of 1000; channel 2 has its pulse at 250 and channel 4 at 750.
The timer channels are connected to the pins of the 2 LEDs, when the OUTPUT compare is triggered, it turns the LEDs on and off at regular intervals.
My idea was that, with the output-compare callback function, the LEDs would be driven whenever an output compare triggers, so there would be no need to connect the timer channels directly to the pins of the 2 LEDs.
/* USER CODE END Header */
/* Includes ------------------------------------------------------------------*/
#include "main.h"
/* Private includes ----------------------------------------------------------*/
/* USER CODE BEGIN Includes */
#include <stdio.h>
/* USER CODE END Includes */
/* Private typedef -----------------------------------------------------------*/
/* USER CODE BEGIN PTD */
/* USER CODE END PTD */
/* Private define ------------------------------------------------------------*/
/* USER CODE BEGIN PD */
/* USER CODE END PD */
/* Private macro -------------------------------------------------------------*/
/* USER CODE BEGIN PM */
/* USER CODE END PM */
/* Private variables ---------------------------------------------------------*/
TIM_HandleTypeDef htim3;
/* USER CODE BEGIN PV */
/* USER CODE END PV */
/* Private function prototypes -----------------------------------------------*/
void SystemClock_Config(void);
static void MX_GPIO_Init(void);
static void MX_TIM3_Init(void);
/* USER CODE BEGIN PFP */
/* USER CODE END PFP */
/* Private user code ---------------------------------------------------------*/
/* USER CODE BEGIN 0 */
/* USER CODE END 0 */
/**
* @brief The application entry point.
* @retval int
*/
int main(void)
{
/* USER CODE BEGIN 1 */
/* USER CODE END 1 */
/* MCU Configuration--------------------------------------------------------*/
/* Reset of all peripherals, Initializes the Flash interface and the Systick. */
HAL_Init();
/* USER CODE BEGIN Init */
/* USER CODE END Init */
/* Configure the system clock */
SystemClock_Config();
/* USER CODE BEGIN SysInit */
/* USER CODE END SysInit */
/* Initialize all configured peripherals */
MX_GPIO_Init();
MX_TIM3_Init();
/* USER CODE BEGIN 2 */
HAL_TIM_OC_Start(&htim3, TIM_CHANNEL_2);
HAL_TIM_OC_Start(&htim3, TIM_CHANNEL_4);
/* USER CODE END 2 */
/* Infinite loop */
/* USER CODE BEGIN WHILE */
while (1)
{
/* USER CODE END WHILE */
/* USER CODE BEGIN 3 */
}
/* USER CODE END 3 */
}
/**
* @brief System Clock Configuration
* @retval None
*/
void SystemClock_Config(void)
{
RCC_OscInitTypeDef RCC_OscInitStruct = {0};
RCC_ClkInitTypeDef RCC_ClkInitStruct = {0};
/** Configure the main internal regulator output voltage
*/
__HAL_RCC_PWR_CLK_ENABLE();
__HAL_PWR_VOLTAGESCALING_CONFIG(PWR_REGULATOR_VOLTAGE_SCALE3);
/** Initializes the RCC Oscillators according to the specified parameters
* in the RCC_OscInitTypeDef structure.
*/
RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_HSE;
RCC_OscInitStruct.HSEState = RCC_HSE_ON;
RCC_OscInitStruct.PLL.PLLState = RCC_PLL_ON;
RCC_OscInitStruct.PLL.PLLSource = RCC_PLLSOURCE_HSE;
RCC_OscInitStruct.PLL.PLLM = 15;
RCC_OscInitStruct.PLL.PLLN = 144;
RCC_OscInitStruct.PLL.PLLP = RCC_PLLP_DIV2;
RCC_OscInitStruct.PLL.PLLQ = 5;
if (HAL_RCC_OscConfig(&RCC_OscInitStruct) != HAL_OK)
{
Error_Handler();
}
/** Initializes the CPU, AHB and APB buses clocks
*/
RCC_ClkInitStruct.ClockType = RCC_CLOCKTYPE_HCLK|RCC_CLOCKTYPE_SYSCLK
|RCC_CLOCKTYPE_PCLK1|RCC_CLOCKTYPE_PCLK2;
RCC_ClkInitStruct.SYSCLKSource = RCC_SYSCLKSOURCE_PLLCLK;
RCC_ClkInitStruct.AHBCLKDivider = RCC_SYSCLK_DIV1;
RCC_ClkInitStruct.APB1CLKDivider = RCC_HCLK_DIV4;
RCC_ClkInitStruct.APB2CLKDivider = RCC_HCLK_DIV2;
if (HAL_RCC_ClockConfig(&RCC_ClkInitStruct, FLASH_LATENCY_3) != HAL_OK)
{
Error_Handler();
}
}
/**
* @brief TIM3 Initialization Function
* @param None
* @retval None
*/
static void MX_TIM3_Init(void)
{
/* USER CODE BEGIN TIM3_Init 0 */
/* USER CODE END TIM3_Init 0 */
TIM_ClockConfigTypeDef sClockSourceConfig = {0};
TIM_MasterConfigTypeDef sMasterConfig = {0};
TIM_OC_InitTypeDef sConfigOC = {0};
/* USER CODE BEGIN TIM3_Init 1 */
/* USER CODE END TIM3_Init 1 */
htim3.Instance = TIM3;
htim3.Init.Prescaler = 16000;
htim3.Init.CounterMode = TIM_COUNTERMODE_UP;
htim3.Init.Period = 1000;
htim3.Init.ClockDivision = TIM_CLOCKDIVISION_DIV1;
htim3.Init.AutoReloadPreload = TIM_AUTORELOAD_PRELOAD_DISABLE;
if (HAL_TIM_Base_Init(&htim3) != HAL_OK)
{
Error_Handler();
}
sClockSourceConfig.ClockSource = TIM_CLOCKSOURCE_INTERNAL;
if (HAL_TIM_ConfigClockSource(&htim3, &sClockSourceConfig) != HAL_OK)
{
Error_Handler();
}
if (HAL_TIM_OC_Init(&htim3) != HAL_OK)
{
Error_Handler();
}
sMasterConfig.MasterOutputTrigger = TIM_TRGO_RESET;
sMasterConfig.MasterSlaveMode = TIM_MASTERSLAVEMODE_DISABLE;
if (HAL_TIMEx_MasterConfigSynchronization(&htim3, &sMasterConfig) != HAL_OK)
{
Error_Handler();
}
sConfigOC.OCMode = TIM_OCMODE_TOGGLE;
sConfigOC.Pulse = 250;
sConfigOC.OCPolarity = TIM_OCPOLARITY_HIGH;
sConfigOC.OCFastMode = TIM_OCFAST_DISABLE;
if (HAL_TIM_OC_ConfigChannel(&htim3, &sConfigOC, TIM_CHANNEL_2) != HAL_OK)
{
Error_Handler();
}
sConfigOC.Pulse = 750;
if (HAL_TIM_OC_ConfigChannel(&htim3, &sConfigOC, TIM_CHANNEL_4) != HAL_OK)
{
Error_Handler();
}
/* USER CODE BEGIN TIM3_Init 2 */
/* USER CODE END TIM3_Init 2 */
HAL_TIM_MspPostInit(&htim3);
}
/**
* @brief GPIO Initialization Function
* @param None
* @retval None
*/
static void MX_GPIO_Init(void)
{
GPIO_InitTypeDef GPIO_InitStruct = {0};
/* GPIO Ports Clock Enable */
__HAL_RCC_GPIOH_CLK_ENABLE();
__HAL_RCC_GPIOA_CLK_ENABLE();
__HAL_RCC_GPIOB_CLK_ENABLE();
/*Configure GPIO pin Output Level */
HAL_GPIO_WritePin(GPIOA, GPIO_PIN_5, GPIO_PIN_RESET);
/*Configure GPIO pin : PA5 */
GPIO_InitStruct.Pin = GPIO_PIN_5;
GPIO_InitStruct.Mode = GPIO_MODE_OUTPUT_PP;
GPIO_InitStruct.Pull = GPIO_NOPULL;
GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
}
/* USER CODE BEGIN 4 */
void HAL_TIM_OC_DelayElapsedCallback(TIM_HandleTypeDef* htim){
if(htim->Channel == HAL_TIM_ACTIVE_CHANNEL_2){
HAL_GPIO_WritePin(LD5_GPIO_Port, LD5_Pin, GPIO_PIN_SET);
}
if(htim->Channel == HAL_TIM_ACTIVE_CHANNEL_4){
HAL_GPIO_WritePin(LD5_GPIO_Port, LD5_Pin, GPIO_PIN_SET);
}
}
/* USER CODE END 4 */
!!! Note that my program doesn't initialize the LEDs; for the moment I just want to see in debug mode whether my program enters the function when an output compare is triggered, and even that doesn't work...
EDIT: thank you for your answers. By changing HAL_TIM_OC_Start(&htim3, TIM_CHANNEL_2); to HAL_TIM_OC_Start_IT(&htim3, TIM_CHANNEL_2); and enabling the interrupt with HAL_NVIC_EnableIRQ(TIM3_IRQn);, my program now enters the HAL_TIM_OC_DelayElapsedCallback function.
However, when I try to create the IRQHandler function,
void TIM3_IRQHandler(void){
HAL_TIM_IRQHandler(&htim3);
}
I get a "first defined here" error; I don't know if this function is needed for the rest of my problem:
Now that my program enters the callback function of the output compare, it doesn't seem to compare. Let me explain:
I thought the lines:
if(htim->Channel == HAL_TIM_ACTIVE_CHANNEL_4)
if(htim->Channel == HAL_TIM_ACTIVE_CHANNEL_2)
were going to check if the output compare of channel 2 or 4 is enabled, and if so, do what's in my condition. But, that's not what happens.
My program always enters the conditions.
I did a test to verify this:
I changed the compare value of my channel 4 output to 1500. The period of my timer is 1000, so this output compare should never trigger. But it does: my program enters the condition if(htim->Channel == HAL_TIM_ACTIVE_CHANNEL_4) when it should not.
I don't know why, I still feel like I'm forgetting something lol
AI: There are (at least) 2 things missing from your code, both related to interrupts.
You need to enable interrupts for the timer you're using:
HAL_NVIC_ClearPendingIRQ(TIM3_IRQn); // make sure that any pending interrupt is cleared
HAL_NVIC_EnableIRQ(TIM3_IRQn); // Enable the interrupt
You need to create an Interrupt Handler function which calls the STM HAL library interrupt handler code:
// Timer 3 Interrupt Handler
void TIM3_IRQHandler(void)
{
HAL_TIM_IRQHandler(&htim3); // call the STM32 HAL Timer IRQ Handler for tim3
}
|
H: Data Sheet Inconsistencies with TI LM2672N
I'm a novice with Electrical Engineering, and am having a difficult time coping with inconsistencies in a particular data sheet.
Texas Instruments LM2672N-5.0 Data Sheet
Like many data sheets, TI provides example circuits for using their chip. However, unlike other data sheets I've seen (so far), it's riddled with inconsistencies, such as capacitor sizes, placement, etc:
Even within the same diagram, there are inconsistencies:
How should one interpret these diagrams and recommended component values?
AI: Example schematics are not necessarily intended to be copied verbatim from the datasheet. They can be, but they must also be understood, as it is the designer's responsibility to know what the design does.
However, this datasheet is not great, and it looks like someone fell asleep on the job. The lower schematic should have the same inductor value as the one listed in the example; they even give a part number which has a value of 68 µH (not 47 µH).
The input cap's value doesn't matter as much, as it's just a bypass cap. But the output cap's value does matter, as it affects the feedback loop and whether the converter functions, so they should not have created uncertainty on that one.
One could contact TI on their errors, but they are very poor on customer feedback and really don't care from the experiences I've had.
I would use their TINA tool to simulate the converter to get an idea of whether it will function or not. (DC-DC converters can have issues with PCB parasitics and may not work with poor PCB layouts.)
|
H: Why do these capacitors have an additional plastic block around them? What kind of capacitor is this?
Just saw these type of capacitors for the first time and I'm curious :
Why do they have this additional block of plastic around them? Some kind of shielding?
Is there some specific reason they didn't just use the normal "can" type of electrolytics?
What type of capacitors are these? Are they still electrolytic?
AI: Why do they have this additional block of plastic around them? Some
kind of shielding?
The plastic is to make them flat so they can be placed by SMT assembly machines. Electrolytic capacitors are round and SMT machines need something flat to work with.
Is there some specific reason they didn't just use the normal "can"
type of electrolytics?
They are a normal 'can' but the pads are SMT pads; it is an alternate way of packaging them. The other way that is typically used to package electrolytics for SMT purposes looks like this:
The two SMT pads are below and there is a flat top which makes it easier for vacuum chucks to move it.
What type of capacitors are these? Are they still electrolytic?
Most likely electrolytic.
|
H: Simulating High-Side N-Channel Bootstrapping in LTspice
I'm working on understanding bootstrapping, so I copied the schematic used in this YouTube tutorial in LTspice. However, my simulation doesn't appear to yield the results that I would expect from a properly bootstrapped circuit. While the gate-source voltage appears to be properly switching between zero and (approximately) twelve volts, the load doesn't have twelve volts across it, which is kind of the point, as I understand it...
I noticed that the content creator said repeatedly in the comments that this circuit is not intended for high-frequency switching, so I made the switch toggle very infrequently (once per second), but this didn't appear to fix the problem. I also added a gate-source capacitance to model the internal capacitance of the transistor and made sure that the bootstrap cap was ten times this value in accordance with rules of thumb, but this didn't help.
At this point, I can't tell if the problem is my implementation of the schematic or my simulation parameters (.tran 4). Does anyone have any ideas?
AI: The main thing I notice when looking at your LTspice schematic is that none of the semiconductors (MOSFET, BJT, and diode) have a specific part number defined. What this does is force the SPICE engine to use all default parameters for these devices. For a BJT and diode under generic use, it's usually not that big of a deal. However, with the MOSFET there is a huge problem. The default MOSFET models are for modeling integrated circuit (i.e. "monolithic") MOSFETs. First, they are not suited for modeling discrete VDMOS FETs, due to them having a completely different structure. Second, their default value for the Vto parameter (zero-bias threshold voltage) is set to zero, which is likely where your specific problems are stemming from.
To fix this, right-click on the symbol for the NMOS and then click the Pick New MOSFET button. A list of parts shows up. Since the video you linked shows an IRF1405 stuffed in the breadboard, let's try to find that. Click on the Part No. column header as shown to sort by part number.
Then, we can scroll to find the IRF1405 listed in alphabetical order. We can either double-click the line for IRF1405, or single-click it and then press the OK button to select it.
A good rule of thumb is to always avoid default semiconductor models, not just the MOSFET ones. You do that by always selecting a part number, even if you just need something generic to get the thing rolling. Using parts with non-default parameters can help avoid convergence problems and also model more real-world effects. Whenever I make a simulation using discrete semiconductors which have a non-specialized purpose, I typically select the following built-in LTspice model for each device type:
Silicon PN Diode -> 1N4148
Schottky Diode -> BAT54
2.0V-ish LED -> QTLP690C
3.5V-ish LED -> NSCW100
NPN BJT -> 2N3904
PNP BJT -> 2N3906
N-chan JFET -> 2N3819
P-chan JFET -> 2N5460
N-chan VDMOS FET -> 2N7002
P-chan VDMOS FET -> BSS84
Therefore, I would also change your NPN transistor to 2N3904 and your silicon diode to 1N4148.
|
H: Non Inverting Op Amp output near power supply voltage when input voltage is at 0V
I have a simple non inverting op amp circuit.
I am using this to amplify a DC voltage. The positive input voltage will vary from 0V-5V.
My problem is that regardless of the input the output is about 22V.
The exact op amp I'm using is here.
Datasheet here
It seems to me that regardless of the input voltage that the output is the saturated max voltage based on the specs.
I'm 100% positive that the op amp is connected correctly, as I've checked the connections too many times to count. I also rebuilt the circuit on a different part of the bread board to be sure.
Voltage at the negative input = 3.6 V
I also replaced the op amp with a different op amp, datasheet here and got similar results where 0V into the positive input resulted in about 22V output.
AI: Page 5 of the datasheet shows that this is NOT a single supply amplifier and that Vin may not approach closer than 3V typical or 6V worst case to ground or V+
Your Vin = ground is violating this common mode specification
Your alternative op-amp has the same issue.
The OPA544 opamp - datasheet here - see datasheet page 2 - has a common mode range +/- 4V typical and +/- 6V worst case inside the supply rails. Vin should ideally be at least 6V above your current system ground.
|
H: Must an LCD panel meet specific requirements to support a higher refresh rate?
Newer LCDs can refresh the screen at 120Hz or higher. The driver circuitry for the panel is responsible for pushing frames to the panel at a given rate, but are there any particulars of the LCD panel itself that factor into achieving the higher refresh rate? Are there, for instance, any special materials, construction, or circuitry of the panel needed to correctly respond to a higher refresh rate? In other words, is it theoretically possible to drive a panel over its specified refresh rate using another driver board, assuming all interfacing requirements are identical?
AI: The driver circuitry for the panel is responsible for pushing frames to the panel at a given rate, but are there any particulars of the LCD panel itself that factor into achieving the higher refresh rate?
Since the panels are big 2D arrays of pixels addressed similarly to a DRAM cell, the LCD itself must be able to clock at the pixel rate you're driving. If its internal capacitances are too large or its transistors too slow, there won't be time for the pixel to update before the next value is driven into the array.
In addition, since the pixel array is essentially an optical digital to analog converter, there are also sampling rate concerns. Driving the array at a higher pixel rate is equivalent to increasing the temporal sampling rate driving each pixel. Increasing the update rate while keeping the analog impulse response of each pixel unchanged is equivalent to increasing the sampling rate of a DAC without also adjusting the analog bandwidth of the electronics it is driving. Eventually your analog bandwidth falls far enough below the Nyquist (folding) frequency that although you are updating the pixels more frequently, no new information is actually being modulated onto the pixel output because it simply cannot respond faster (even if the array itself can physically sustain the pixel rate).
So yes, there are two factors. The array must be able to shift values at the pixel rate you've selected and the pixels themselves must be fast enough (have sufficient bandwidth) that the faster refresh rate actually does something.
In other words, is it theoretically possible to drive a panel over its specified refresh rate using another driver board, assuming all interfacing requirements are identical?
Yes, and you do occasionally see manufacturers offering displays at refresh rates above the nominal spec value for the panel they are using.
|
H: How to correctly connect low current analog ground plane with high current analog ground plane on PCB?
I am currently designing a PCB for a PWM controller based on the LMC555 timer chip along with a FAN3100TSX low-side gate driver and an N-channel power MOSFET. Below is the PCB in question.
In my design I have separate ground planes: the one on the left side is for the low-power analog section and the one on the right side is for the high-current analog section. I have created a common connection for the ground planes at the top middle (at pin 1 of J1). Essentially this would be my star ground connection.
Additionally, the gate driver FAN3100TSX (IC2 on the board) requires pin 2 to be the common ground for both control signals and power signals. Because I'm using the driver in non-inverting mode, its IN- pin (pin 4) needs to be grounded. This IN- counts as a control signal pin, so I have made a connection between pin 2 and pin 4 to create a common signal ground for the IC. I have also connected pin 2 to the high-current ground plane on the right because of the return currents when it drives the power MOSFET.
So my question is: does this connection between these two pins create a ground loop and cause problems? Is there a different way to do this and create a star ground? Any suggestions? Thank you
This is whole PCB:
Zoomed in at J1:
Zoomed in at IC2:
AI: The question that needs to be asked is: "Do I need separate planes?"
The answer is usually "No"
Here is why:
By separating the grounds, there is only a small piece of copper in between the two planes. This copper has inductance and resistance, so what is done in this design is like putting a small inductor and resistor between the two planes.
The resistor made from the trace will be about 1mΩ with maybe 1nH of inductance, which will make a filter on the ground for the left half of the board. Any current moving from the left hand side of the board will flow through this point and create a small voltage through the resistor/inductor trace.
The biggest problem will be IC2 (blue): the signals connected to it are referenced to the right half of the board while its ground is referenced to the left half, so it will see any difference in voltage between the two grounds, which could be millivolts depending on the current.
Another problem is the entire ground plane forms a nice dipole antenna, which could also create EMI problems or ring at high frequencies.
So unless there is a good reason, don't split the planes; combine them and let everything be at the same potential. The best way to deal with return currents is with a board layout that routes the currents along the right paths (currents take the path of least impedance), shown as the orange lines. And it looks like the currents won't cross anything important (if they do, a ground-referenced chip might see them: V = IR, and ground planes have resistance).
If you want to eliminate a ground loop (like if the connectors are connected to other devices) then isolation is the best (like optocouplers or digital isolators)
|
H: Discharge rate of a motor run capacitor?
I have a capacitor on a pool pump, specs : Motor Run Capacitor Round 30 uf MFD 370 Volt VAC 12717
As I understand it, even with the pump turned OFF, the capacitor can still carry an electric charge. How do I calculate how long the power needs to be off to ensure the natural leakage of the capacitor will completely deplete it? (I would prefer not to short-circuit the cap to release the charge.)
Thanks.
AI: If it's installed in the motor, then it discharges in a fraction of a second, when the power is turned off.
Here is a circuit diagram from researchgate:
You can see the run capacitor is shorted out by the motor coils.
If the capacitor is out of the circuit, it will effectively never discharge itself. The polypropylene capacitors used for motors have such low leakage they can remain charged for months
Also, in my experience, they can charge themselves from static electricity, so if you find one on the shelf or buy a new one, it might give a nice spark when shorted, or bite your fingers.
You're right to avoid using a screwdriver - it damages the screwdriver and probably the capacitor too. To safely discharge it you need a high value resistor, 100 kOhm should do. 30 microFarad and 100 kOhm will discharge most of the way in 10 seconds. You could also improvise with a wet rag or a green twig from a tree, give it several seconds, then short circuit to be sure.
In most electronic products large high voltage capacitors will have bleeder resistors permanently installed, to safely discharge them when the power is off. I haven't seen these in small motor capacitors. Perhaps because the motor discharges them anyway. Don't install your own bleeder without first learning about high voltage resistors, the common 1/4 W type are ok for a once off discharge but are not rated for hundreds of volts.
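As a rough sanity check on those numbers — assuming an ideal exponential RC discharge and a worst-case starting voltage around the peak of the 370 VAC rating — a short Python sketch:

```python
import math

C = 30e-6                 # 30 uF motor run capacitor
R = 100e3                 # 100 kOhm bleed resistor
V0 = 370 * math.sqrt(2)   # worst-case peak voltage, ~523 V

tau = R * C                              # time constant: 3 seconds
v_after_10s = V0 * math.exp(-10 / tau)   # voltage remaining after 10 s

print(round(tau, 1))            # 3.0 s
print(round(v_after_10s, 1))    # ~18.7 V, i.e. over 96% discharged
```

Ten seconds is a bit over three time constants, which is why that counts as "most of the way"; leave the resistor on for a minute and the residual voltage is effectively zero.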
|
H: Manually Use LEDs on RJ45 Connector
I am using RJ45 connectors (2-406549-1) with built-in LEDs to carry SPI lines for a microcontroller project (I needed CAT 6 cables for twisted data/ground pairs).
I'd like to use those built-in LEDs to indicate when the PCB has power, but I can't find any information on what those LEDs are to determine how I can power them.
Any ideas where to find that? I'm guessing RJ45 connectors with LEDs must have a standard LED type but I can't find what that is.
AI: From the datasheet:
you can see the LED pinout. For your module, 406549-1 the table to the right on the datasheet says these are yellow and green. Yellow LEDs typically have a \$V_f\$ of around 2.1 V; green LEDs typically around 2 V. The datasheet doesn't give any other information, but you could almost certainly drive 5 mA through them to get decent brightness. If you're running a 3V3 supply, a 220 \$\Omega\$ resistor in series would be fine.
|
H: Terminology clarification for transformer with a third primary tap
I am looking over the reference design from Microchip app note 46102, page 4, that shows a reference schematic for the M90E26 energy meter. They are using a transformer that has a series tap along the primary winding to pull 35Vrms for rectification before passing through a 3.3V linear voltage regulator. What do you call such a transformer? A search on digikey has yielded nothing, and the part number seems to be long out of production. Does anyone know which search parameter you might use to find such a device?
AI: It's likely a custom transformer for this application. There is no problem getting a transformer made with the windings and materials you request, provided you buy some thousands and/or pay a premium. In this case there is a tap brought out that has just a few turns (probably fewer than 10) between pins 1 and 3.
I would wager most transformers sold are custom. Ones you can buy from distributors tend to be very expensive, and switching supply transformers are rarely available at all.
If you wanted to make just one or two units you could substitute two transformers: one for the supply for the meter chip, and one dual-winding transformer for the two isolated supplies.
In any case, the transformer is a mains power transformer with dual secondaries and a tapped primary. A transformer made for dual voltage US/Japan would be close but the tap would be more like 20VAC, which is too high for the 3.3V supply.
The actual transformer they used (from an older version of the app note) appears to have Chinese hanzi on it:
This (from maker 裕正 Electric in Hangzhou) may be the actual manufacturer, or a competitor:
|
H: How do I create a 1-bit full adder that outputs a 2-bit sum?
I am trying to build a 1-bit full adder that outputs a 2-bit sum.
I know that the standard 1-bit FA outputs a 1-bit sum and a carry bit, but I was wondering how can I modify the FA such that the carry bit output is discarded and the only output is a 2-bit sum.
AI: You could feed the carry bit to another full adder that has zeroes on its other two inputs, but that seems kind of pointless because it will just get you a copy of the carry bit.
The carry bit is the second bit.
If you don't need a carry input you can just use a half adder.
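To make that concrete, here is a small Python model (my own illustration, not from the question) showing that simply relabeling the carry as the high bit of a 2-bit sum is already correct for all eight input combinations:

```python
def full_adder(a, b, cin):
    """Standard 1-bit full adder, with the result read as a 2-bit sum (high, low)."""
    s = a ^ b ^ cin                          # usual sum bit
    cout = (a & b) | (a & cin) | (b & cin)   # usual carry bit
    return cout, s                           # 2-bit value: 2*cout + s

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            hi, lo = full_adder(a, b, cin)
            assert 2 * hi + lo == a + b + cin  # matches the arithmetic sum
print("all 8 cases check out")
```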
|
H: How do I calculate the transfer function of this basic terminated RC filter?
I'm reading through an electronics book to teach myself, and I'm in the section about filters. I've been following along so far, but now I'm confused about how the author came to a conclusion in his math. Here is an excerpt from the book (Practical Electronics for Inventors, 4th edition, page 213):
He is computing the transfer function for the circuit in the image, and he says that:
$$
H=\frac{V_{out}}{V_{in}}=\frac{1/(j\omega C)||R_L}{R+[1/(j\omega C)||R_L]}
$$
This makes sense to me, since this is just computing \$V_{out}\$ using the voltage divider equation. The next step is what confuses me, where he says that the equation above is equivalent to:
$$
= \frac{R'}{1+j(\omega R'C)}V_{in} \text{ where } R'=R||R_L
$$
How do I come to this conclusion? I've tried simplifying the circuit using Thevenin's theorem by combining \$R\$ and \$C\$, which gives me:
$$
R_{THEV} = R||\frac{1}{j\omega C} = \frac{R/(j\omega C)}{R+1/(j\omega C)} = \frac{R}{1+j\omega RC}
$$
That's pretty close to the author's answer, except I have \$R\$ instead of \$R'\$. I've searched around and haven't found other resources making this leap. I'm a little lost! The book continues making use of this throughout the filters section, so I really want to understand it. Any help here would be greatly appreciated.
Edit: The answer by Paul below solved the problem. The numerator of the equation in red should be \$R'/R\$ and not \$R\$. I expanded both the green equation and the corrected red equation and got a matching result.
Expansion of green equation:
$$
H=\frac{V_{out}}{V_{in}}=\frac{1/(j\omega C)||R_L}{R+[1/(j\omega C)||R_L]}V_{in}
= \frac{(R_L/(j\omega C))/(R_L + 1/(j\omega C))}{R+(R_L/(j\omega C))/(R_L + 1/(j\omega C))}\\
= \frac{R_L/(R_L j\omega C+1)}{R+R_L/(R_L j\omega C+1)}
= \frac{R_L}{R(R_L j\omega C+1)+R_L}
= \frac{R_L}{R+R_L+j\omega CRR_L}
$$
Expansion of corrected red equation (with \$R'\$ replaced with \$R||R_L\$):
$$
\frac{(R||R_L)/R}{1+j\omega (R||R_L)C}
= \frac{\frac{RR_L}{R+R_L}/R}{1+j\omega C(\frac{RR_L}{R+R_L})}
= \frac{R_L}{(R+R_L)(1+j\omega C\frac{RR_L}{R+R_L})}
= \frac{R_L}{R+R_L+j\omega CRR_L}
$$
AI: There is a glitch in the book answer. The expression framed in red should have R'/R in the numerator not R'.
Remember also that Vout /Vin has no unit and the expression framed in red is in ohm.
This wont affect the book result for the cutoff frequency so it makes sense to mention this analogy.
And if you want to apply Thevenin do not forget to replace Vin by Vth also.
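The equivalence is also easy to confirm numerically; here is a short Python check (component values chosen arbitrarily for illustration):

```python
import math

R, RL, C = 10e3, 22e3, 100e-9
w = 2 * math.pi * 1e3          # evaluate at 1 kHz

Zc = 1 / (1j * w * C)
par = Zc * RL / (Zc + RL)      # 1/(jwC) || RL
H_divider = par / (R + par)    # voltage-divider ("green") form

Rp = R * RL / (R + RL)                       # R' = R || RL
H_book = (Rp / R) / (1 + 1j * w * Rp * C)    # corrected ("red") form, with R'/R

print(abs(H_divider - H_book) < 1e-12)       # True: the two forms agree
```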
|
H: BODLEVEL Fuse Coding
I'm using ATmega1280, and I have a question about this BOD table:
what is exactly the meaning of "Min. Vbot" and "Max. Vbot"?
AI: It means that the exact BOD voltage is a little imprecise and can vary within those limits. Take the strictest limit into account for your implementation.
For example, if MCU is directly on battery and with BOD=1.8V, it will go into reset at some voltage between 1.7V and 2.0V (same voltage every time for the same MCU, but different for different MCUs, all between 1.7V and 2.0V), and you should consider 2.0V your real BOD voltage as you don't want to have half devices function at V=1.9V and half not. You aim for guaranteed predictable behavior.
|
H: Can you explain this basic circuit to me? Why is the transistor switch off?
I am pretty new to the world of electronics so apologize yet again for a very basic question.
I am trying to understand the functioning of a transistor, but my confusion may stem from a more fundamental confusion about voltage. I think understanding this circuit will help me grok the fundamentals.
When the button is closed as in the picture, the light is off. I actually built it and verified that when it's closed, the voltage between base and emitter is zero, and the voltage across the 1 k resistor is around 0.8 V. I also found that when increasing this resistance to around 5 k or more, the transistor is on whether the button is pressed or not.
I do not understand why the voltage at the base is 0, or why the transistor is always on when the bottom resistor increases. The explanation in the text doesn't make sense to me. Could anyone help me understand this?
Thanks
AI: Maybe it helps if I re-draw the circuit and annotate some values of voltages and currents.
The circuit on the left is the situation when the switch is open and the LED is on.
The circuit on the right is the situation when the switch is closed and the LED is off.
simulate this circuit – Schematic created using CircuitLab
Note how adding R5, the 1 k resistor, makes the voltage at the base of Q2 drop so low that there's only 0.8 V left for both the base-emitter of Q2 and the LED D2. That 0.8 V would be enough for only the NPN (an NPN needs about \$V_{BE}\$ = 0.6 V to do anything), but there's also the LED D2. An LED needs (depending on the type) at least about 2 V to start conducting. So that 0.8 V is not going to make any current flow into the base of Q2. Q2 stays off and so does the LED.
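The arithmetic behind "that 0.8 V is not going to make any current flow", using the numbers from the schematic above:

```python
v_base_node = 0.8  # volts available at the base node with the 1 k resistor fitted
v_be_on = 0.6      # roughly what the NPN needs from base to emitter
v_led = 2.0        # minimum forward voltage of the LED below the emitter

v_needed = v_be_on + v_led       # 2.6 V required to turn both devices on
print(v_base_node >= v_needed)   # False -> no base current, so the LED stays off
```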
|
H: Does it matter if the shunt resistor in an audio Line Out L-Pad attenuator is 6 feet away from the series resister?
I am designing a 14dB L-Pad attenuator to be buried in a guitar instrument cable as per below.
The placement of the 22K ohm series resistor is to go inline inside the amplifier side of the 1/4" mono plug. I would like to place the 5.6K shunt resister parallel inside the other cable end.
I could try to get them both at the amp side of the cable but that is a tight squeeze in the small small space inside the plug.
Does the placement of the shunt at the other end of the 6-foot cable change the tone or have any other detrimental impact?
Specifications
Pad = -13.89dB
R1 = 22K, R2 = 5.6K
Circuit impedance = 27.569K ohms
Vin = 1.223 V rms, Vout = 0.247 V rms
Cable: 6' instrument shielded. Two parallel for stereo.
Amplifier Line Out Z= 600 ohms.
Effects pedal input is 1M ohms.
There will be two cables since this is a stereo in/out setup.
AI: Does the placement of the shunt at the other end of the 6 foot cable
change the tone or have any other detrimental impact?
No, I don't believe this will be a problem.
6 foot (say 2 metres) will have a capacitance of around 200 to 300 pF so, worst case low pass roll-off will occur at: -
$$f = \dfrac{1}{2\pi R C} \approx 24\text{ kHz}$$
But this assumes the 5k6 resistor is open circuit so, in reality it will be circa 100 kHz and not an issue for audio.
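Those two numbers can be reproduced with a quick Python calculation (assuming ~300 pF of cable capacitance and the 600 Ω line-out impedance in series with the 22 k pad resistor):

```python
import math

C_cable = 300e-12        # worst-case capacitance for 6 ft of instrument cable
R_series = 22e3 + 600    # 22 k series resistor plus the 600-ohm line-out impedance
R_shunt = 5.6e3          # the 5k6 shunt at the far end

# Worst case: pretend the shunt resistor is open circuit
f_worst = 1 / (2 * math.pi * R_series * C_cable)

# In reality the cable sees the Thevenin resistance of series || shunt
R_thev = R_series * R_shunt / (R_series + R_shunt)
f_real = 1 / (2 * math.pi * R_thev * C_cable)

print(round(f_worst / 1e3, 1))   # ~23.5 kHz, the "24 kHz" worst case
print(round(f_real / 1e3))       # ~118 kHz, the "circa 100 kHz" in practice
```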
|
H: How much current Hold and WP SPI Flash pins can draw?
In my design J14 is used as header to which I connect wires to an external programmer (DediProg) in order to program an SPI Flash (U25).
At the time of flashing, I don't want to supply voltage to 3V3AUX, which powers a Host Controller that needs this flash ROM to function properly.
I want to use BAT54WS Schottky diode in order not to supply 3V3AUX voltage at the time the DediProg programmer is ON (VCC_DEDIPROG is high) and supplying voltage:
I'm using PM25LV080B Flash and I was wondering if the flash can draw so much current, in a way that the voltage on the diode would be too high such that the remained voltage would not be enough for the SPI flash to stay working.
According to the table and datasheet, there is no information about how much current the WP# and HOLD# pins can draw. I only know that VCC draws up to 30 mA.
At 30 mA the voltage drop on the diode would be around 500 mV, which would leave 3.1 V for the flash VCC. How do I know that the WP# and HOLD# pins don't pull the VCC voltage below 2.7 V, which is below the working range?
AI: Unless called out explicitly, input pins will only draw up to ILI = 1 µA, as long as the applied voltage remains between the supply rails.
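To put that 1 µA figure in perspective against the question's own numbers (30 mA active VCC current through the same diode), a quick illustrative calculation:

```python
i_vcc = 30e-3     # active VCC current from the flash datasheet
i_input = 1e-6    # worst-case input leakage (ILI) per pin
n_pins = 2        # WP# and HOLD# pulled up through the same diode

fraction = n_pins * i_input / i_vcc   # extra diode current from the input pins
print(f"{fraction:.2%}")              # ~0.01%: negligible next to the 30 mA
```

So the pull-up pins barely change the current through the Schottky diode, and the diode drop (and hence the flash VCC) is set almost entirely by the 30 mA supply current.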
|
H: Rechargeable Li-ion battery pack
I'm trying to power up an ESP-01 with a Li-ion battery pack recharged by a solar panel (12 V).
The question is: can I connect the solar panel directly to the battery pack with a voltage regulator, or do I need something else as protection?
AI: A voltage regulator on its own won't do.
In short, you need a proper Li-ion charger with load-sharing, preferably with some form of Maximum Power Point Tracker to get the most out of your solar panel.
It is also advisable to add a Battery Management System to your battery pack, if it doesn't already have one, if only to make sure the batteries don't discharge too far.
ICs and modules exist that can do all or most of that. We don't do product recommendations here, but with the search terms above you should be able to find integrated solutions.
Li-ion batteries need to be charged with a specific algorithm (constant current first, then constant voltage, until the current has dropped below a certain value). A voltage regulator can't do all that.
You need load-sharing because the charging algorithm won't work properly when a load is connected directly to the battery.
You may need a form of MPPT because the solar panel won't deliver its full power otherwise, if it is not well-matched to the battery.
|
H: Efficiency of a switching regulator
I have a 24V power supply line which I would like to transform to 5V (<1A) in order to supply my chips. Voltage regulators would waste a lot of energy in this range. I looked at some datasheets of such switching regulators e.g. TSR1-2450. On page 1 it is stated that an efficiency of 92% can be achieved.
How is the energy loss calculated?
I would understand it that way: Regulation from 24V to 5V@1A results in 5W maximal output. With 92% efficiency the circuit would need
5W*1.08 = 5.4W
AI: That's correct.
$$ P_{IN} = \frac {P_{OUT}} {eff} = \frac 5 {0.92} = 5.43 \ \text W $$
Be aware that the efficiency will drop off somewhat at lower loads.
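A quick sanity check of that arithmetic, using the 92% figure quoted from the datasheet:

```python
def input_power(p_out_w, efficiency):
    """Input power a converter must draw to deliver p_out_w at the given efficiency."""
    return p_out_w / efficiency

p_in = input_power(5.0, 0.92)
loss = p_in - 5.0
print(round(p_in, 2), round(loss, 2))  # → 5.43 0.43
```

Only about 0.43 W ends up as heat in the converter, versus roughly 19 W (19 V × 1 A) that a linear regulator would have to dissipate for the same job.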
|
H: finding corner frequency using .MEASURE in LTSpice
I wanted to find the corner frequency of this circuit :
I run the simulation but couldn't find the frequency using .MEAS AC f FIND frequency WHEN V(o)=-3dB command.
I am getting this error in SPICE Error Log :
Measurement "f" FAIL'ed
Date: Wed Jan 27 11:59:28 2021
Total elapsed time: 0.067 seconds.
tnom = 27
temp = 27
method = trap
totiter = 0
traniter = 0
tranpoints = 0
accept = 0
rejected = 0
matrix size = 3
fillins = 0
solver = Normal
I am able to find approx corner frequency at -3dB using AC analysis using this command .ac dec 1000 1 1Meg
AI: I modified the .MEASURE command. As -3 dB is not an absolute voltage that .MEASURE can match directly, I specified the corresponding absolute value of V(o), i.e. $$-3 = 20\log_{10}\left(\frac{V_o}{V_i}\right)\\
\therefore V_o = 0.707945 \quad [\because V_{i}=1]
$$
and Final command would be .MEASURE AC f FIND frequency WHEN mag(V(o))=0.7079457843841379 which gives f=158.778 printed in View > Spice Error Log
Final Circuit :
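The long threshold constant in the command is simply the linear equivalent of -3 dB; a sketch of how to derive it, so you can generate .MEASURE thresholds for other dB values too:

```python
def db_to_ratio(db):
    """Convert a voltage gain in dB to a linear amplitude ratio."""
    return 10 ** (db / 20)

# with a 1 V AC source, the -3 dB corner is where |V(o)| falls to:
print(db_to_ratio(-3))  # → 0.7079457843841379, the value used in .MEASURE
```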
|
H: Optimising a single pulse precision peak amplitude detector
So I made this precision peak detector circuit (with reference from EEVblog's video and experimenting with the resistor/capacitor values), with which I want to measure the peak amplitude of a single pulse of width 5us (50ns each rise/fall time at best and 155ns/90ns rise/fall time at worst) and the amplitude is variable and can be within 3mV to 3V. I would like the peak-detector to hold the voltage level for a sufficient amount of time (in this case around 60ms, although I have simulated up to 3s with the same result) without degrading the voltage level too much (a maximum of 1-2mV degradation over 60ms is acceptable for me).
So the result of the above circuit looks good in the simulation. When testing for the extremes I see the following (pics below). Here Green is the final output (voltage at R2), Red is D2 current(essentially to show the peak current the opamp can handle), Blue is the input pulse (V1):
For 3mV pulse, here are the results:
As you can see, there is a slight offset of around 0.6mV, which is acceptable for me as long as it is fairly constant over all my measurements.
For 3V pulse, here are the results:
Here the peak current in diode D2 (which, as I understand it, is a measure of the maximum current required from the opamp U1) is around 79.6mA, which is right below LTC6268-10's typical peak output current rating of 80mA.
R2 is basically modeling the input impedance of an ADC which I'll directly connect at the output of U2. I chose LTC6268-10 because of its low input bias current which 'should' help the capacitor to retain the voltage (in this double diode configuration as per the video) for a long time. I would also be using a low leakage analog switch/mux to discharge the capacitor (short it to ground) after I sample it using my ADC.
I need to be as sure as possible that this circuit would work as in simulation (or at least, close enough) before I put it in a PCB.
So my questions are:
Do you find any obvious mistakes that I am making here with my component selection (or their values) or the circuit itself? or do you suggest a better circuit to achieve my goal (of measuring peak voltages of a single pulse with an ADC)?
In the simulation the capacitor is holding the voltage very very well, which is good but I am thinking if there are any other ways the capacitor could get discharged (in the real world), and what should I take into consideration? Is there a specific type of capacitor, I should go for?
Is there a ready-made peak detector IC available already that would perform better than this circuit? I am not looking for S/H IC because I would require precise trigger timing to sample at the middle of the pulse.
Since I'll be using SMD components, if you would like, you may suggest anything important that I need to know regarding using SMD diodes, capacitors, etc for this particular circuit.
Side note: If the requirements are too high then I may be inclined to reduce the hold time (currently 60ms) to 6ms depending on the suggestions if that results in a simple circuit.
Thank you so much for reading my post.
AI: Do you find any obvious mistakes that I am making here with my
component selection (or their values) or the circuit itself?
Looking at the 1N4148 data sheet, the reverse leakage current is this: -
I know R3 is there to bootstrap the junction of the diodes to reduce leakage current but given that the inherent diode leakage is about 1000 times more than the input bias/offset currents for the op-amp, I would want to make sure that my model and simulation do take this properly into account.
Is there a specific type of capacitor, I should go for?
Ceramics are normally the best choice so I'd go for NP0/C0G characteristics and read the data sheet carefully about what leakages may present themselves.
Is there a ready-made peak detector IC available already that would
perform better than this circuit?
It's an off-topic question but I don't know of any.
The 1N4148 is really good on reverse recovery time (4 ns) but make sure you test the diode model with it set to 10 ns just to be sure. There may be some subtle effect here that drains a fraction of a percent of the output voltage on low level signals.
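To put numbers on the droop budget: the hold capacitor discharges at dV/dt = I_leak/C, so you can work backwards to the largest total leakage the hold node can tolerate. A sketch (the 100 nF hold-capacitor value is an illustrative assumption, not taken from the question):

```python
def max_leakage(c_farads, droop_v, hold_s):
    """Largest net leakage current that keeps droop within budget over the hold time."""
    return c_farads * droop_v / hold_s

# 1 mV allowed droop over 60 ms with an assumed 100 nF hold cap
i_max = max_leakage(100e-9, 1e-3, 60e-3)
print(i_max)  # under 2 nA of total leakage allowed
```

That budget has to cover diode reverse leakage, op-amp bias current, analog-switch leakage and board leakage combined, which is why the bootstrap around the diodes matters so much.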
|
H: Significance of the center-tapped transformers in mixer circuits
How does center-tapped transformer work on the circuit given below?
In analog rectifier circuits, one of the ports of the center-tapped transformer is connected to a sinusoidal source while the others are driven with a DC source or ground. But in mixer circuits as shown below, the situation is a bit different.
In RF usage of center-tapped transformers, node #1 is driven with a small signal, "RF"; node #3 is driven with a large signal, "LO"; and together they are present at ports #2 and #4.
For example on a single balanced diode mixer center-tapped transformers are used as shown below.
At high side of the LO, D1 conducts while at the lower side of the LO, D2 conducts.
What does the transformer do to perform such an operation? How do center-tapped transformers work when there are two sinusoidal sources?
AI: Mixing the low-level RF current (µV) with the much larger LO signal drives the diodes into their exponential region, which creates lots of distortion products. It is found that the Intermediate Frequency (IF) products at LO-RF and LO+RF, and their harmonics, are quite useful.
If one chooses the lower difference frequency and then carefully designs a low-pass or bandpass filter, with bandstops for the higher LO and product sum (LO+RF), one can get a high-quality frequency mixer.
To demonstrate this I randomly chose RF = 200 MHz and LO = 300 MHz to produce an IF output of 100 MHz. For the filter I chose an unloaded Chebyshev LPF with resonant notches at 200 and 300 MHz.
Typical metrics for conversion efficiency are the ratio of amplitudes for IF/RF and LO suppression as well as dynamic range and noise threshold.
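The sum and difference products follow directly from the product-to-sum identity cos(a)·cos(b) = ½[cos(a−b) + cos(a+b)]; a quick numerical check using the 200/300 MHz example frequencies:

```python
import math

# product-to-sum identity: the diode nonlinearity's second-order term
# multiplies RF and LO, yielding tones at the difference and sum frequencies
f_rf, f_lo = 200e6, 300e6  # Hz, from the example above
for t in (0.0, 1e-9, 3.7e-9, 8.2e-9):
    a = 2 * math.pi * f_rf * t
    b = 2 * math.pi * f_lo * t
    product = math.cos(a) * math.cos(b)
    identity = 0.5 * (math.cos(b - a) + math.cos(a + b))
    assert abs(product - identity) < 1e-9
print("products appear at |LO-RF| = 100 MHz and LO+RF = 500 MHz")
```

The band-pass/notch filter then keeps the 100 MHz difference term while rejecting the LO leakage and the 500 MHz sum.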
|
H: DC motor doesn't start until PWM duty cycle exceeds threshold but continues to run when duty cycle reduced below threshold
I'm using a DC gear motor (from this kit, plastic gears) driven by a L298N module controlled by a STM32 BluePill. The motor voltage supplied to the L298N is 9V from a variable voltage AC adapter with a max current of 1A. The STM32 is powered from USB. The STM32 and L298N share a ground circuit.
I'm using a PWM signal from the STM32 to the Motor A Enable pin of the L298N to control the voltage applied to the motor. The Input 1/2 pins are driven correctly to rotate the motor in one direction.
When I start with a low duty cycle to produce a low voltage, the motor does not start turning and simply makes a high pitched whining noise. As I increase the duty cycle and thus voltage, the whining noise increases in pitch and eventually at a specific duty cycle the motor will suddenly start turning at quite a high speed. Once it's turning, if I reduce the duty cycle below that threshold, the motor will continue turning albeit at a slower speed. If I reduce it further, the speed of the motor becomes uneven and eventually stops and the whining noise resumes.
I don't understand what's happening here. Am I not controlling it correctly? Is the torque produced by the motor at low voltage not high enough to overcome the friction of the gears and inertia of the motor?
Motor specs:
Operating Voltage: 3-6V DC
Reduction ratio: 1:48
When the voltage is 6V:
No-load current: 200mA
No-load speed: 200±10%rpm
When the voltage is 3V:
No-load current: 150mA
No-load speed: 90±10%rpm
AI: Your comment about the friction is a dominant one.
Static friction is invariably greater than dynamic friction, this applies to almost any two surfaces in contact, not just motors.
Initially when not rotating the various bearing surfaces will be in intimate contact with oil-film - that will require more force to initiate motion than to maintain motion.
There is also a characteristic of permanent magnet motors with iron rotors called cogging which makes it more difficult to start a motor than for it to continue operating once rotating.
When stationary the poles of the armature will be attracted to the magnets; it will require more force to rotate from this position than from other positions of the motor - you can feel this if you rotate the motor by hand. Once running, inertia will carry the motor through this position. Coreless motors have no iron in the rotor and do not suffer from this.
Cogging Torque In Permanent Magnet Motors
The inertia does not, however, affect the ability to start the motor; it only starts to have an effect once moving. It will affect how fast the motor comes up to speed but not whether it rotates. Inertia in small motors is usually insignificant and dwarfed by friction and cogging forces.
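A common firmware workaround for this stick/slip behaviour is a "kick start": briefly apply a duty cycle known to overcome static friction and cogging, then drop back to the requested level. A hedged sketch (the 80% kick level and four-tick duration are illustrative assumptions to be tuned on the real motor):

```python
def duty_sequence(target_pct, kick_pct=80, kick_steps=4):
    """Duty-cycle schedule: a short kick to break static friction,
    then the requested duty. Each step might be one ~50 ms control tick."""
    if target_pct >= kick_pct:
        return [target_pct]  # already enough to start; no kick needed
    return [kick_pct] * kick_steps + [target_pct]

print(duty_sequence(30))  # → [80, 80, 80, 80, 30]
```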
|
H: Voltage drop over MOSFET controlled by microcontroller
I made the circuit below to power a microcontroller and a load from one power source.
But looking at the voltage drop between drain and source of the IRLZ44N MOSFET, instead of 12 V I got only 3.1 V when the MOSFET is opened by ATtiny85's 4.5 V signal.
What am I doing wrong that lets 3.1 V drop instead of 12 V between drain and source?
AI: What do I do wrong to get 3.1 volts instead 12 volts between Drain and
Source ?
You are operating the IRLZ44N MOSFET as a source follower and, when this happens, the source voltage can never exceed the gate voltage and usually, it's a volt or two lower (as you see). Try operating the MOSFET as a common source with the load connected between drain and +12 volts.
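The arithmetic behind the observed reading: in a source follower the output sits one gate-source voltage below the gate drive, and Vgs for an IRLZ44N passing meaningful current is a volt or more. A sketch (the 1.4 V Vgs figure is inferred from the measurement, not a datasheet value):

```python
def follower_output(v_gate, v_gs):
    """Source-follower output: the source can never rise above Vgate - Vgs."""
    return max(v_gate - v_gs, 0.0)

print(round(follower_output(4.5, 1.4), 2))  # → 3.1, matching the measurement
```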
I got only 3.1 volts when mosfet is opened by ATtiny85 4.5 volts
signal.
Best not use the term "opened" because this is electrical engineering and an open circuit means "no connection". If you were on a hydraulics site then you would call a valve open that allowed fluid to pass. Not in EE!
If you want to use a term use "activate" or "deactivate".
|
H: 20 dB/Decade - Justification
I've read that it is recommended to have 20 dB/Decade slope on the L(jw) magnitude plot of the Bode Diagram at gain crossover. This seems to imply a slope of -1 in dimensionless terms.
However, what is the justification for this? The reasoning for this always seems to be glossed over.
Also, in Astrom's text he claims the gain slope must be greater than -2 for stability. I have a tuned-up system that has a gain slope of -2.3 and is still stable. Granted, it's barely stable, but there is phase margin left.
What is the justification for that statement of -2 for stability? Is -2 a rough number?
Thanks for any feedback and help!
AI: In a system with a single signal path through it, (a system with no parallel branches, so no lattice filters for instance), also known as minimum phase, a gain slope of -1 or 20dB/decade implies a phase shift of 90 degrees, see the Hilbert Transform.
This means that a closed loop system will have plenty of phase margin if the gain slope crosses 0dB at 20dB/decade. It's common to use rather more than that, up to 30dB total being not unusual in amplifiers and phase locked loops, 45 degrees phase margin will give a little overshoot, but it's fairly tame.
It's worth remembering that in PLLs, the fact that the VCO is controlled in frequency but measured in phase means that even with no loop filter, you have a 90 degree shift and a -20dB/decade gain slope. Adding a single lowpass filtering element will at some frequencies give you another 90 degrees, which with latency and strays become more than 90 degrees. No wonder that the first PLL put together by a noob is almost certain to be unstable (mine included!) and people often surround them with an aura of magic. This has led to my PLL design method which is (a) with no loop filter, set the loop bandwidth via the gain (b) add integrator element(s) as required, breaking away below the loop bandwidth (c) add lowpass elements as required above the loop bandwidth such that (d) the sum of all phase shifts from the loop filter at the loop bandwidth is 45 degrees or less.
A good reason for aiming well below -30dB slope with the elements you can control is that you'll always have some extra phase shift for free that you didn't intend, from stray capacitance, non-zero output impedance, finite gain.bandwidth products etc.
Once you get to -2, you have 180 degrees phase shift, and the system will become unstable. Less extreme than -2, and you have stability.
If you have a system with a -2.3 gain slope and some phase margin left for stability, then you have a non-minimum phase system, or a measurement or description error.
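The slope-to-phase bookkeeping for a minimum-phase system is simple enough to tabulate: each -20 dB/decade of slope corresponds to -90° of phase, and the phase margin is what is left before 180°. A sketch:

```python
def phase_margin_deg(slope):
    """Approximate phase margin for a minimum-phase system whose gain
    crosses 0 dB with the given dimensionless slope (-1 = -20 dB/decade)."""
    phase = slope * 90.0  # asymptotic phase shift in degrees (negative)
    return 180.0 + phase

print(phase_margin_deg(-1))    # → 90.0  (comfortably stable)
print(phase_margin_deg(-1.5))  # → 45.0  (a little overshoot, but tame)
print(phase_margin_deg(-2))    # → 0.0   (on the verge of oscillation)
```

By this estimate a -2.3 slope gives negative margin, which is why a stable system measured at -2.3 points to a non-minimum-phase system or a measurement error.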
|
H: IC to use for multiplexing SPI chip select
I have a number of SPI ICs but limited GPIOs on my microcontroller, so want to multiplex the CS pins to something that requires only a couple of GPIOs. I use the predetermined clock, MISO and MOSI on the microcontroller.
My initial thought was a ring counter, but all the ones I have found set a single pin to high and the rest low. I need a single low and the rest high. I could use a shift register, but seems like overkill.
What is the best IC to use for this application?
AI: Use a shift register; there's nothing overkill about that (it's no more complex than a ring counter). You can drive it with another SPI peripheral of your microcontroller, or you can just bitbang it, since you will change that less often than you'll talk to your multiple peripherals. 2 additional pins.
Alternatively, I²C port expanders are a thing (2 pins), and I'd be surprised if you can't find a 1-wire-protocol device that does the same thing (which will be harder to use). A cheap microcontroller can be attached via UART to do the same (1 to 2 pins).
But honestly, if you're running out of GPIO for CS, it might be a good point to either switch to a microcontroller with more GPIO, or simply add a cheap second microcontroller which takes the complete SPI handling load from your main microcontroller and communicates with that e.g. via UART. A pretty original and comparatively cheap source for "microcontroller as IO expanders on speed" is using one of the ubiquitous Cortex-M chips that bring an SWD interface and just using that to manipulate the microcontroller as a "puppet" device. In PoC||GTFO 0x10, Micah Scott documented such SWD marionettes.
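Whichever expander you pick, the pattern to shift out is "one cold": exactly one CS line low, the rest high. A sketch of generating the byte for an 8-output shift register (active-low chip selects assumed):

```python
def cs_pattern(selected, n_outputs=8):
    """Byte with every bit high except the selected (active-low) CS line.
    Pass selected=None to deselect all devices."""
    all_high = (1 << n_outputs) - 1
    if selected is None:
        return all_high
    return all_high & ~(1 << selected)

print(bin(cs_pattern(2)))     # → 0b11111011
print(bin(cs_pattern(None)))  # → 0b11111111
```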
|
H: Smart load switch with inrush control and overvoltage lockout
After finishing my DC-DC buck-boost converter, voltmeter and ammeter, I decided to add a smart load switch to my power supply in order to have the possibility of connecting electric motors, bulbs or any element that generates big inrush currents at startup.
Here is my wiring diagram. I am using the LTC7004 IC (https://www.analog.com/media/en/technical-documentation/data-sheets/LTC7004.pdf) as a load switch.
I have 2 questions regarding this:
1. How can I control the INP pin without using a microcontroller? This pin is CMOS-input compatible.
2. Will this switch tolerate a short circuit? I did not find anything about it in the datasheet.
AI: This switch has voltage protection features but not current or power protection. Thus "smart" is not an attribute they used for this IC.
It would not be wise to drive a 4A motor with a 40A start surge or locked rotor current unless you add current sensing to control Vinp to create a current limit and hysteretic response like PWM.
The device is not short-circuit protected, although smart switches with this feature do exist, for example the TPS27SA08 36-V, 10-A, Single-Channel Smart High Side Switch.
|
H: 555 timer capacitor charging order
I have a question about the standard 555 circuit below. I understand to some degree the "insides" of the timer. In particular, I understand the role the comparators play and I am fairly certain I am able to follow the logic as the voltage going into pins 2 and 6 increases.
My main difficulty comes in understanding why the same voltage (+5V say) charging the capacitor isn't also going into pins 2 and 6. As it has been explained to me "the capacitor voltage is what is fed into the comparators," however this seems counterintuitive as in my mind there should be a Vcc voltage (+5V) fed into the comparators as well as the capacitor, and it seems to me that this voltage would drive the logic.
The explanation I've been able to convince myself of is that "Vcc is providing only as much voltage to the capacitor as the capacitor currently has" -- this however seems to not be in line with the explanation as there seems to be an implication that the capacitor is what is flowing into pins 2 and 6 and not the voltage straight from Vcc. If this is the correct explanation, then it appears I am missing some key intuition for the workings of voltage in such circuits and would greatly appreciate an explanation/reference.
My understanding of electronics is very basic, so I expect I'm missing something easy, or more fundamental.
AI: When the cap is discharged, you are in a situation similar to this:
simulate this circuit – Schematic created using CircuitLab
Since the cap is discharged, it is initially at zero volts. So what is the voltage at pins 6 and 2? Zero.
Of course, this starts changing as soon as current starts flowing into the cap. At first, all the voltage is dropped over the resistors, but as the cap starts to fill up, its terminal voltage starts to rise from zero. Now less and less current is flowing over the resistors, therefore less and less voltage is being dropped by them. Eventually the voltage across the cap (and therefore on pins 2 and 6) is high enough to trigger the comparators.
Think of it this way: The voltage can't get from the power rail to pin 2 or 6 without passing current through the resistors. Ohm's law tells us that current through a resistance drops voltage.
Let's look at the case when the capacitor has just finished discharging:
Image Source
Let us agree that when the capacitor is discharged, there is the same voltage on both its terminals. Let us also agree that the voltage seen at pins 2 and 6 is the same as the 'top' of the capacitor.
Now the capacitor starts charging as current flows from the 5 V source, through the resistors, and into the capacitor. Since this current is limited by the resistors, the rate at which the capacitor fills is governed by its RC time constant. i.e. This voltage rise is defined by the size of the resistors and the capacitance.
As the capacitor fills up, the voltage on the upper terminal starts to rise from ground. Remember, whatever voltage this terminal is at is the voltage that pins 2 and 6 are seeing.
Eventually this voltage is high enough to trigger the comparators, and the discharge pin gets grounded and discharges the capacitor, and now the whole thing starts over.
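The comparator thresholds at Vcc/3 and 2·Vcc/3 turn that exponential charge curve into a timer: charging from Vcc/3 to 2·Vcc/3 toward Vcc takes exactly ln(2)·RC. A sketch of the classic 555 astable timing that falls out of this (the component values are illustrative assumptions):

```python
import math

def astable_high_time(ra, rb, c):
    """Time to charge from Vcc/3 to 2Vcc/3 through Ra + Rb: ln(2)*(Ra+Rb)*C."""
    return math.log(2) * (ra + rb) * c

def astable_low_time(rb, c):
    """Time to discharge from 2Vcc/3 back to Vcc/3 through Rb alone."""
    return math.log(2) * rb * c

# assumed example values: Ra = Rb = 10 k, C = 100 nF
t_hi = astable_high_time(10e3, 10e3, 100e-9)
t_lo = astable_low_time(10e3, 100e-9)
print(round((t_hi + t_lo) * 1e3, 3), "ms per cycle")  # → 2.079 ms per cycle
```

Note that the supply voltage cancels out: both thresholds scale with Vcc, which is why a 555's period is (to first order) independent of the supply.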
|
H: How to determine the power rating of an output LC filter
I am trying to design a Class-D Amplifier using the TPA3116D2 IC chip. Using the typical application circuit provided by the datasheet on Pg.26
Everything is going well with the support circuitry but what's getting me all stuck is the "OUTPUT LC FILTER" section.
I am not sure what the power rating of the components is supposed to be, as it's the output stage and a lot of current is going to be drawn.
Currently I have the amp set up to output ~14.14 V RMS into a 4 Ω load, which should draw ~3.54 A RMS, getting 50 W out of it.
Does anyone know how to determine the power rating of the components mentioned?
AI: As the behavior of cored inductors will vary with the DC operating conditions and the spectral content of the applied voltage, it is difficult to provide a unique "power rating" for inductors. Capacitors are a bit simpler to evaluate in that respect, but the situation is similar.
Inductors: There are two main factors. These are the saturation (1) of the core material and the temperature rise (2). The inductor should also be able to withstand the voltage across it, but this is not usually a problem.
1)
The cores in inductors are used in part to contain the magnetic field created by the coils but mostly to increase the magnetic permeability of the magnetic circuit so that inductors can be made smaller and with less coils. However, as the magnetic field in the core increases, there comes a point where the permeability starts to decrease. This is the onset of saturation. Various materials have differing steepness in the "knee" of this relation, with ferrites tending to have a rather sharp decrease while iron powders display a more gradual decrease in permeability. Of course, reality is more complex as this saturation curve have a dependency on the frequency of the magnetic field so this curve is often measured near DC in datasheets. As a saturated inductor is not an inductor anymore (its inductance drops), this constitutes a first "power limit" because the magnetic field is proportional to the current in the coils.
In your case, low audio frequencies are near enough DC to make saturation a relevant constraint. So, knowing the impedance of your load at, say, 20Hz, compute the peak output current. The saturation current of your inductor shall be above that for good performance. Something like 5A in your case.
2)
The temperature rise factors can be approached as follows. Core materials have maximum allowable temperature and based on questions of reliability and safety, one should determine an allowed temperature rise. Then, depending on the desired efficiency, cooling available and maximum size, this determines a maximum power dissipation. I won't go into the details of estimating temperature rise for a given power dissipation in a given device, but rough estimates can be found in various application notes on heatsinking such as:
$$ \Delta T \approx \left(\frac{1000 P}{A}\right)^{0.83},$$
where delta T is the temperature rise above ambient in Kelvins, P is the dissipated power in Watts and A is the exposed area of the device in cm^2.
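Plugging numbers into that estimate, as a sketch (the 1 W dissipation and 10 cm² exposed area are illustrative assumptions, not values from the question):

```python
def temp_rise_k(p_watts, area_cm2):
    """Rough free-air temperature rise estimate: (1000*P/A)**0.83."""
    return (1000.0 * p_watts / area_cm2) ** 0.83

print(round(temp_rise_k(1.0, 10.0), 1), "K above ambient")  # → 45.7 K above ambient
```

Working backwards from an allowed rise instead gives the maximum dissipation, and hence (with the RMS current) the maximum tolerable winding resistance.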
The dissipated power is the sum of the windings losses and the core losses. The winding losses are as a first approximation simply
$$ P_{Windings} = R_{Windings} I_{RMS}^2.$$
There are additional losses due to proximity effects and gap losses (if using a gapped inductor) but these can be rather difficult to estimate. However, these losses can be quite substantial in some case as in your case. Again, some estimates are obtained from application notes.
The core losses are inherent to the core itself and depend on frequency. One first estimates the frequency content of the current in the inductor, then uses the loss curves in the datasheet to arrive at an estimate. In your case, you could take the frequency content of your signal as the carrier plus, say, 1 kHz to represent the audio signal. Concretely speaking, most inductors will not really be lossy at the audio frequencies where the bulk of the power is concentrated for typical signals (unless you were to use an iron-lamination core inductor intended for mains-frequency applications, which you should, of course, not do). This leaves the carrier. As those integrated ICs use quite high switching frequencies, you will need low-loss core materials in the 0.1-10 MHz range. Something like Micrometals type-2 toroids or well-built gapped (to prevent saturation) ferrites.
Thus, if you chose a low loss material with appropriate (look at tables) wire gauge for the current, it will be fine, power-wise.
Capacitor: The filter you display includes an RC snubber (which will also need a good low-dissipation capacitor!) that is important to damp the output filter under certain load conditions (such as no load!). Otherwise, the voltage peak might become quite high. If the filter is well-damped, then the voltage on the filter capacitors will essentially remain within the rails. So, use a safety margin and pick a voltage rating accordingly.
Then, the capacitor must tolerate the ripple current that will be applied to it. Audio frequencies should not pose too much of a problem, but the carrier frequency will be shunted through the capacitor. Compute the filter impedance at that frequency to estimate the RMS current through the capacitor and select one accordingly. Also a similar issue to core losses arise in capacitor as dielectric losses. Polyester film can get quite lossy in the MHz frequencies. Polypropylene is a good low-loss dielectric as are some ceramics (e.g.: C0G, NPO). However, for your power levels, a polyester film cap of the correct voltage rating will be sufficient.
Other issues: Not just the power rating is important here. You will want a minimum of linearity in your components as the amplifier topology is pre-filter feedback. You should ideally not use iron powder core materials intended for use in DC/DC converter. Use RF application toroids or gapped ferrite inductors. The physical construction is also important here so that the interwindings capacitance does not shunt the inductor at the frequencies of interest. Using a shielded inductor or laying toroids flat on a ground plane will go a long way to reduce emitted EMI.
|
H: RISC-V assembly lui?
In RISC-V assembly I wrote:
addi s0,x0,0x20000
Is this legal, such that the assembler will accept the instruction and make it work right, or am I forced to change it to:
lui s0,0x20
Can someone kindly explain what lui does?
AI: No, this won't work because ADDI can only add 12-bit immediate values which are sign-extended to 32 bits. RISC-V is not like ARM where almost any immediate value can be shifted before it's used. Therefore with ADDI you can add 0x000-0x7FF or subtract 0x001-0x800. The limitation to 12-bit immediate values is because of the encoding of ADDI:
However, ADDI with x0 as the source register is valid for loading smaller immediate values, so you could do ADDI s0, x0, 0x123, for example. NOP is also implemented this way, and is just a pseudo-instruction for ADDI x0, x0, 0. Other forms of NOP (for example adding 0 to a register other than x0) are considered non-canonical and are not recommended because they may be redefined to be a different, meaningful instruction in the future.
As for LUI, it loads a 20-bit immediate value into the upper 20 bits of a (32-bit) register and fills in other 12 bits with 0's. Notice how you can use LUI to set the upper 20 bits of a register and then ADDI set the lower 12 bits, thus loading a 32-bit constant into the register with just two instructions.
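The interaction between LUI and ADDI has one subtlety worth showing: ADDI sign-extends its 12-bit immediate, so when the low 12 bits of the constant are 0x800 or above, the LUI value has to be bumped by one to compensate. A sketch of how a 32-bit constant gets split (essentially what the li pseudo-instruction expands to):

```python
def li_expand(value):
    """Split a 32-bit constant into (lui_imm20, addi_imm12) so that
    (lui_imm20 << 12) + sign_extend(addi_imm12) == value."""
    value &= 0xFFFFFFFF
    lo = value & 0xFFF
    if lo >= 0x800:      # ADDI will sign-extend this to a negative value,
        lo -= 0x1000     # so carry one into the LUI part to compensate
    hi = ((value - lo) >> 12) & 0xFFFFF
    return hi, lo

def reconstruct(hi, lo):
    return ((hi << 12) + lo) & 0xFFFFFFFF

print(li_expand(0x20000))  # → (32, 0): i.e. lui s0, 0x20 with no ADDI needed
assert reconstruct(*li_expand(0x20000)) == 0x20000
assert reconstruct(*li_expand(0x12345FFF)) == 0x12345FFF
```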
|
H: Transfer function of two inputs?
Suppose I have a system with feedback. Consider the input \$d\$ that represents disturbances to the plant; also, consider the input \$V\$ that represents sensor noise on the feedback loop. I can calculate the closed-loop transfer function \$\dfrac{Y(s)}{d(s)}\$ and \$\dfrac{Y(s)}{V(s)}\$. Then, my question is: does it even make sense to talk about the transfer function \$\dfrac{D(s)}{V(s)}\$, if so, could it be calculated as \$\dfrac{d(s)}{V(s)}=\dfrac{d(s)}{Y(s)}\dfrac{Y(s)}{V(s)}\$?
Thanks!
AI: Unless the sensor noise and the disturbances are related somehow, there is no "transfer function" that exists between them. So no, it probably doesn't make sense.
There may be some sensible reason to calculate \$D(s)/V(s)\$, but I wouldn't call it a "transfer function", under pretty much any circumstances.
|
H: What is the specific reason for using FIFO in the asynchronous domain in VLSI?
I was wondering what the reason is for using a FIFO in the asynchronous domain in VLSI.
Basically, X propagation in the asynchronous domain (aka the CDC domain) is prevented by the two-stage flip-flop synchronizer method.
I think I can apply the same idea to a multiple-depth data path, but people still use FIFOs in the CDC domain. Why?
I want to know what the specific reason is for using a FIFO at a clock domain crossing.
AI: The FIFO read and write ports can be on different clocks. For systems with asynchronous clocks this solves the basic cross-domain data transfer problem, so long as the FIFO read and write pointers don't cross.
Even if the clocks are the same frequency, isolating large areas in separate timing domains using FIFOs or register slices (that is, shallow FIFOs) eases system timing closure, at the expense of latency. This is especially useful in large SoCs where maintaining clock phase alignment across a large die becomes very difficult, even with careful clock tree design.
For those FIFO boundary cases where flow control comes up (full, almost full, empty, etc), flag generation resolves the clock domain crossing by using special techniques, such as gray-coding, to ensure reliable full/empty calculations within each clock domain.
This cross-boundary flag synchronization adds latency however, so the designer must allocate extra FIFO room to guard against overrun. This can be by using an almost-full indicator, or by adding small FIFO called a skid buffer that catches the extra data emitted by the host during the flag latency time.
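The gray-coding mentioned above works because consecutive pointer values differ in exactly one bit, so a pointer sampled mid-transition in the other clock domain resolves to either the old or the new value, never a garbled third value. A minimal sketch:

```python
def bin2gray(n):
    """Convert a binary count to its Gray-code equivalent."""
    return n ^ (n >> 1)

# adjacent FIFO pointer values differ in exactly one bit after encoding
for i in range(15):
    diff = bin2gray(i) ^ bin2gray(i + 1)
    assert bin(diff).count("1") == 1

print([bin2gray(i) for i in range(8)])  # → [0, 1, 3, 2, 6, 7, 5, 4]
```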
|
H: How to find SMD resistor and capacitor value and correct size
I'm new to the SMD world and struggling to find an easy and quick way to get the correct SMD resistor and SMD capacitor size and value based on a circuit schematic; any tips will help.
For example, I'm designing this circuit in SMD but don't know how to find the right SMD size, because there are many sizes - (I'm using Altium Designer for PCB design.)
If someone can guide me by looking at the resistors' and capacitors' values from the above diagram and tips on finding the correct size, it would help me further.
AI: Smaller sizes tend to work better at higher frequencies and are easier to fit into a good layout. Larger sizes tend to have higher power ratings and (in the case of capacitors) more stable capacitance under bias voltages. Larger parts are easier to solder (if you are doing this by hand) but take up more room.
If you are a beginner and soldering this by hand, 1206 or 0805 are good sizes to use. Smaller is not too much more difficult to work with, but no sense making things harder for yourself unless there is a good reason. If you are paying for assembly, and they can do smaller sizes, this may not matter.
When picking resistors, pay close attention to power ratings. Don't exceed the maximum dissipation. If you do, pick a bigger size.
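As a rough sketch of that check, here is a small Python helper. The package ratings are typical industry figures, not taken from any specific datasheet, and the 50% derating is a common rule of thumb; always confirm against the actual part you buy.

```python
# Typical chip-resistor power ratings (assumed values, check datasheets)
TYPICAL_RATING_W = {"0402": 1 / 16, "0603": 1 / 10, "0805": 1 / 8, "1206": 1 / 4}

def dissipation(v_across: float, r_ohms: float) -> float:
    """Power dissipated in a resistor with v_across volts over it."""
    return v_across ** 2 / r_ohms

def smallest_ok_package(v_across: float, r_ohms: float, derate: float = 0.5):
    """First package whose derated rating covers the dissipation."""
    p = dissipation(v_across, r_ohms)
    for pkg, rating in TYPICAL_RATING_W.items():
        if p <= rating * derate:
            return pkg
    return None  # needs a bigger package or a power resistor

# 5 V across 1 kOhm -> 25 mW, which fits a derated 0402 (~31 mW budget)
print(smallest_ok_package(5.0, 1000.0))  # 0402
```

The same dissipation check applies whether you compute it from voltage or from current (P = I²R).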
When picking capacitors, pay close attention to the change in capacitance with voltage and the impedance at high frequency. For example, here is a random 1 uF 0402 capacitor:
And here is the same 1 uF but in 1206:
(images from: https://ds.murata.co.jp/simsurfing/mlcc.html?lcid=en-us)
Comparing capacitance at 5 V, the larger package works much better, but it also has higher impedance above ~1 MHz. At higher frequencies, a larger package size may not be appropriate.
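One way to see why the larger package loses at high frequency: an MLCC behaves inductively above its self-resonant frequency, and larger packages tend to carry more parasitic inductance (ESL). The ESL values below are assumed ballpark figures for illustration, not datasheet numbers.

```python
import math

def f_sr_hz(c_farads: float, esl_henries: float) -> float:
    """Self-resonant frequency of a capacitor with series inductance ESL."""
    return 1 / (2 * math.pi * math.sqrt(esl_henries * c_farads))

c = 1e-6  # 1 uF
# Assumed ESL: ~0.4 nH for a small package, ~1.2 nH for a large one
print(f"small package: {f_sr_hz(c, 0.4e-9) / 1e6:.1f} MHz")  # ~8.0 MHz
print(f"large package: {f_sr_hz(c, 1.2e-9) / 1e6:.1f} MHz")  # ~4.6 MHz
```

Above these frequencies the impedance rises with frequency (inductive region), which is why the bigger part stops helping first.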
|
H: How to calculate impedance of a PCB trace Antenna in Altium w.r.t. a given frequency?
I am keeping things abstract as I intend to learn and not just copy another design.
So, suppose I am trying to design an antenna of impedance Z and frequency f.
Now, in Altium 18, there is a feature that suggests trace widths for a specified impedance.
Now, what I fail to understand is:

How is it that the impedance is not dependent on trace length and only on trace width?
How can Altium suggest a trace width without me specifying the frequency? To calculate trace width from impedance, one surely needs to know the working frequency. I guess it is only taking resistance into consideration, but then that would depend on the length too?
Is there a way I can calculate the impedance between any two given points on a trace for a given frequency? I want both the resistance and reactance to design matching.
If there is no such way in Altium (which I suspect), then how do people design antennas these days?
AI: How is it that the impedance is not dependent on trace length and only on trace width?
This question has previously been asked and answered several times. For example:
How can PCB trace have 50 ohm impedance regardless of length and signal frequency?
Should each trace carrying RF be 50Ohm in characteristic impedance? How?
How is xΩ impedance cable defined?
The key point is you should not confuse characteristic impedance (the ratio of voltage to current for a propagating signal on a trace) with the impedance of a circuit branch (the ratio of voltage to current flowing through that branch).
How can Altium suggest a trace width without me specifying the frequency?
The characteristic impedance depends on the balance between inductance and capacitance per unit length of trace. Therefore it doesn't change (to a first order approximation) at different frequencies.
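To make that concrete, here is a sketch using the IPC-2141 closed-form approximation for surface microstrip. Note that no frequency appears anywhere: Z0 is set entirely by geometry and dielectric constant (the per-length L/C ratio). The stackup numbers are a typical FR-4 example, not a recommendation.

```python
import math

def microstrip_z0(h_mm: float, w_mm: float, t_mm: float, er: float) -> float:
    """IPC-2141 approximation for surface microstrip impedance.
    h: dielectric height, w: trace width, t: copper thickness,
    er: relative permittivity. Valid only for typical geometries."""
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# 1.6 mm FR-4 core, 1 oz copper (~0.035 mm), er ~ 4.3, 2.9 mm trace
print(round(microstrip_z0(1.6, 2.9, 0.035, 4.3), 1))  # ~51 ohm
```

Field solvers (and Altium's built-in calculator) use more accurate models, but the key point is the same: width, height, and er set Z0; length does not.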
Is there a way I can calculate the impedance between any two given points on a trace for a given frequency?
The characteristic impedance of a trace is not the same as the impedance between two points on the trace. It is the ratio of voltage to current that is required for a signal to propagate along the trace.
If the trace is long enough that the characteristic impedance matters, then you can't actually define the "impedance between any two points on a trace", because there will be a delay between when a current signal is applied at one point and when a voltage develops at the other. Further, the voltage won't depend only on the positions on the trace but also on what else is connected to the trace ends.
Instead you can define a scattering matrix (or S-matrix or S parameters) for the piece of trace. Or you can define Z, Y, or ABCD parameters, which are all equivalent.
If there is no such way in Altium (which I suspect), then how do people design antennas these days?
In any case, you do not want to use the Altium estimate for the trace width to achieve a given impedance to design an antenna. Altium will assume you want to confine the signal to your trace (and ground plane), rather than cause radiation. If you are designing an antenna you want a radiated signal, not a confined signal.
You can use a tool like HFSS to design an antenna.
Or you can use various more ad hoc methods known to RF engineers (for example, you could simply design an antenna that you know is too long for your desired frequency and trim it back until it resonates at the desired frequency).
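The "cut long, then trim" approach needs a starting length. A common starting point is a free-space quarter wavelength scaled by a velocity factor; the 0.95 used below is a typical value for a wire monopole and is an assumption here (a trace over FR-4 is slower and needs a smaller factor).

```python
C = 299_792_458  # speed of light, m/s

def quarter_wave_mm(freq_hz: float, velocity_factor: float = 0.95) -> float:
    """Quarter-wavelength in mm, shortened by an assumed velocity factor."""
    return velocity_factor * C / (4 * freq_hz) * 1000

# 2.45 GHz ISM-band monopole: start slightly longer than this, then trim
print(round(quarter_wave_mm(2.45e9), 1))  # ~29.1 mm
```

Trimming moves the resonance upward, which is why you start long rather than short.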
|
H: Electrical work sign convention
If I understood correctly, the sign convention is as shown in the sketch below.
AI: Your sketch is right.
Positive charges repel each other, so moving two positive charges close together requires positive work to overcome the repelling force.
A positive and a negative charge attract each other, so moving them close together requires negative work (a gain of energy).
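A quick numeric check of the convention: the work needed to bring two point charges from infinity to separation r equals the Coulomb potential energy U = k·q1·q2/r, which is positive for like charges and negative for opposite charges.

```python
K = 8.9875e9  # Coulomb constant, N*m^2/C^2

def assembly_work(q1: float, q2: float, r: float) -> float:
    """Work to bring two point charges from infinity to separation r (J)."""
    return K * q1 * q2 / r

like = assembly_work(1e-6, 1e-6, 0.1)     # two +1 uC charges, 10 cm apart
unlike = assembly_work(1e-6, -1e-6, 0.1)  # +1 uC and -1 uC, 10 cm apart
print(like > 0, unlike < 0)  # True True
```

This matches the sketch: pushing like charges together costs energy, while letting unlike charges approach releases it.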
|