H: Pass USB 3.0 connection through D-SUB connector
I need to pass a USB cable through a vacuum chamber wall, for which we have only D-SUB passthrough flanges available. So I cut a USB cable in half and soldered a D-SUB connector to each half. For USB 2.0 connections, this works without any issues, but I've had troubles getting a USB 3.0 connection to work.
Specifically, the computer emits the connect/disconnect sound repeatedly every few seconds when the cable is plugged in. The only workaround is to push the connector in slowly, until the device is recognised, essentially forcing a USB 2.0 connection.
I assume this is due to insufficient shielding to get a USB 3.0 link?
The individual connections seem to be fine, with <3Ω resistance for each one and no shorts. Below is a diagram of how I routed the cables through the connector:
As shown in the figure, the shield is connected to the shell of the connector to connect the shield on both sides together. I tried to keep the amount of destroyed shielding low, with around 3cm on either side removed.
What is the most likely cause for this failure, and how to avoid it in the future, if possible?
AI: I assume this is due to insufficient shielding to get a USB 3.0 link?
It's more likely that by separating the SuperSpeed conductor pairs you introduced an impedance discontinuity so significant that communication can't take place properly.
What is the most likely cause for this failure, and how to avoid it in the future, if possible?
As said, you can't separate the conductors of a USB 3.0 pair arbitrarily: the signal is carried as an electromagnetic field between the conductors, and because the signal frequencies of USB 3.0 are solidly within the microwave range, splitting a conductor pair essentially breaks the transport of energy.
You probably won't be able to solve this situation with your current D-SUB connectors at all. You'll need to replace them with connectors that at least approximately retain the nominal 90 Ω differential impedance of the USB 3.0 SuperSpeed pairs. The easiest way to achieve that is probably to use USB 3.0 connectors themselves.
H: I2C setting MCU as both slave and master at the same time
Hello everyone, I'm working on a new project with the TI MSP432P401R microcontroller. I want to have 2 separate I2C buses. I2C bus 1 will have the MCU as the master, with the DAC and the pressure sensor as slaves. I2C bus 2 will have the same MCU as the slave and the SmartPort as the master (more specifically, a device that we connect to the SmartPort will be the master). I'm not sure how I can make the MCU both a slave and a master at the same time. What I tried to do was define 2 different eUSCI_B modules:
void I2C_init(void)
{
/* Initialize USCI_B3 and I2C Master to communicate with slave devices*/
I2C_initMaster(EUSCI_B3_BASE, &i2cConfig);
/* Disable I2C module to make changes */
I2C_disableModule(EUSCI_B3_BASE);
/* Enable I2C Module to start operations */
I2C_enableModule(EUSCI_B3_BASE);
/* Initialize USCI_B3 and I2C Master to communicate with slave devices*/
I2C_initMaster(EUSCI_B0_BASE, &i2cConfig);
/* Disable I2C module to make changes */
I2C_disableModule(EUSCI_B0_BASE);
/* Enable I2C Module to start operations */
I2C_enableModule(EUSCI_B0_BASE);
return;
}
where i2cConfig is
const eUSCI_I2C_MasterConfig i2cConfig =
{
EUSCI_B_I2C_CLOCKSOURCE_SMCLK, // SMCLK Clock Source
48000000, // SMCLK = 48MHz
EUSCI_B_I2C_SET_DATA_RATE_400KBPS, // Desired I2C Clock of 100khz
0, // No byte counter threshold
EUSCI_B_I2C_NO_AUTO_STOP // No Autostop
};
and then tried to use this function for each module I defined to add slaves. But the thing is, the function only allows me to enter one slave address, but for my I2C bus1, I have 2 slaves: DAC and the pressure sensor.
extern void I2C_initSlave(uint32_t moduleInstance, uint_fast16_t slaveAddress,
uint_fast8_t slaveAddressOffset, uint32_t slaveOwnAddressEnable);
I would be very grateful if someone can help me with this problem or show me a better way to implement the same MCU as both a slave and a master at the same time. I'm new to programming with the I2C protocol and I'd appreciate any help. Thank you and have a great day.
AI: You are not thinking about the problem quite right. You are not setting the MCU to be a slave, you are setting one I2C interface module (one eUSCI) to be a master and another eUSCI to be a slave. The MSP432 itself is neither master nor slave, it just talks to the two eUSCI blocks.
There are two notions of slave address. One is the slave address that the master (you) uses to communicate with some other device, like a temperature sensor. That slave address is used just for the duration of the communication with the particular slave, and then is loaded with a different value when you want to talk to some other sensor or peripheral device. There is no need to store the addresses of all slave devices in eUSCI registers.
The other notion of a slave address is used when your eUSCI is acting as a slave device and some other gadget will be the master. In this case the slave address is more or less permanent and must be defined as part of the initialization, so that the eUSCI can recognize when the external master is sending a message to it.
So, you can treat the two eUSCI modules as being completely independent. Their SCL and SDA signals are independent, and the operation of the two I2C busses can be completely different.
H: ADC Calibration - Vdd calculation
In documents and lots of online sources there is a formula as follows,
$$ Vdd = 3.3V * \frac{VREFINT_{CAL}}{VREFINT_{DATA}}$$
But I cannot get why CAL value is in the numerator and RAW data is in the denominator. I have this logic in my mind,
In the factory, producers used a preknown/fixed 3.3 V as Vdd and then they get this reference CAL value from ADC. But in reality, my Vdd may be different because of non-idealities. So I read Vref_int channel via my ADC and it gives me RAW. I get the CAL value by reading the device memory address provided by manufacturer. ADC reading and voltage level has a direct relationship. So, I conclude that;
3.3 V ------> CAL(from device memory)
x V ------> RAW(read from Vref_int)
Then, x * CAL = 3.3 * RAW
So, x = 3.3 * RAW/CAL should be my formula to get my current Vdd value so that then I know that a maximum ADC reading (4095 for 12 bits reading for ex.) corresponds to x volts.
However, in the above formula it states that x = 3.3 * CAL/RAW.
Please help me with that, thanks.
My board is STM32F4-DISC.
-------------------- edit ------------------------------------
With all my thanks to the ones who answered, I now understood(I hope) what is going on and want to explain here in detail.
First, what is what :
VREFINT_{CAL} or CAL : The ADC reading of the manufacturer from Vref_int when an exact 3.3V Vdda is used (unitless, 12 bits integer)
VREFINT_{DATA} or RAW : The ADC reading of the user from Vref_int when the Vdda is unknown (unitless, 12 bits integer)
V_{DDA} : The voltage base for our MCU; when you output a digital high, it should be exactly equal to Vdda (unit = volts)
Now,
1- In the factory, the manufacturer gets the ADC conversion data from
the Vref_int channel and stores it into a memory location as a
12-bit integer. $$ \frac{{VREFINT_{voltage}}}{3.3(V_{DDA})}= \frac{{VREFINT_{CAL}(stored\,value\,,CAL)}}{4095} \quad (1)$$
The logic is simple here, 4095 for 3.3V and CAL for Vref_int.
2- When you want to calibrate your voltage level, since you don't know
your exact voltage base level, you read the Vref_int yourself and
get another ADC conversion value(RAW), let's call it $$VREFINT_{DATA}$$
Then $$ \frac{VREFINT_{voltage}}{V_{DDA}(unknown)}= \frac{{VREFINT_{DATA}(measurement\,of\,user\,,RAW)}}{4095} $$
Since $$VREFINT_{voltage}$$ is fixed, replace it with its equivalent from eqn. (1).
Then you can see that the equation yields to $$V_{DDA}(unknown) = \frac{3.3*CAL}{RAW}$$
Finally, you can use this Vdda value as your voltage reference(base) on your calculations.
This edit may include some mistakes, please let me know.
AI: ADC reading and voltage level has a direct relationship.
The ADC reading on the VREFINT channel and VDDA have inverse relationship.
An ADC reading is the ratio of the measured voltage to VDDA, scaled up to 4095 (the maximum value that can be expressed on 12 bits). E.g. if you applied VDDA/3 to an ADC input, the result would be 4095/3=1365.
So, the conversion result, ADCx = 4095*VADCINx/VDDA
You can't measure VDDA directly, because if you apply a voltage equal to VDDA to an ADC channel, the conversion result is always 4095.
VREFINT is a fixed voltage generated inside the MCU, around 1.2 volts. When you convert the VREFINT channel of the ADC, you get the ratio of this voltage to VDDA, i.e. 4095*VREFINT/VDDA. Note that VDDA is now in the denominator. If the supply voltage goes down, the conversion result goes up, and vice versa.
Because VREFINT is not an exact voltage source, they measure it in the manufacturing process, using a good stabilized 3.3V supply, and write the conversion result (VREFINTCAL) in the system memory area.
Now if you'd read the VREFINT channel, and got the exact same value that's stored in the system memory, it would mean that VDDA=3.3V, like in the factory calibration process. This is also what the formula in the question gives you when VREFINTCAL = VREFINTDATA.
H: Poles and Zeros of a Transfer Function
I am trying to understand the physical meaning of poles and zeros of a transfer function. Can someone help me how to understand the Figure 2b in the link below?
What does the height of the cone indicate? And what do the different color rings in the cone indicate?
The Link
TIA.
AI: can you explain in simpler terms with explanation, please
If you have a very simple low pass filter made from a resistor (R) and a capacitor (C), you can calculate the transfer function (TF) as being: -
$$\dfrac{1}{1+sCR}$$
Then, if you re-arranged that TF you could get: -
$$\dfrac{\frac{1}{CR}}{s + \frac{1}{CR}}$$
Now the important thing to realize is that if s = \$\frac{-1}{CR}\$ the whole TF has a value of infinity. This is the position of the pole.
Take a few minutes to think about that because it is fundamental to understanding how the bode plot and pole-zero diagram are related mathematically.
For this particular simple example, that pole sits purely on the real axis of the s-plane where \$\sigma = \frac{-1}{CR}\$. This isn't the vertical \$j\omega\$ axis. The \$j\omega\$ axis is where the bode plot exists.
Considering the pole: at any point distant from that pole the amplitude is not infinity, and at any point along the \$j\omega\$ axis you can predict the amplitude of the bode plot by taking the reciprocal of the distance from the pole. However, that reciprocal has to be scaled by the natural frequency of the circuit. For this simple circuit, the natural frequency is 1/CR.
So, using a simple example at the origin of the s-plane (0, 0): the TF amplitude is the scaling factor (1/CR) divided by the distance from the origin to the pole (also 1/CR), and in this pretty trivial example this works out at 1.
So the TF has an amplitude of 1 at DC. I say "DC" because the value of the \$j\omega\$ axis is 0 at the origin and this means 0 Hz or "DC".
If you were to move up the \$j\omega\$ axis by an amount equal to 1/CR, the distance from the pole becomes \$\sqrt2\$ times bigger and hence the TF amplitude becomes \$\frac{1}{\sqrt2}\$. This is normally called the 3 dB point because, in decibels the amplitude has fallen by approximately 3 dB. It's also called the half power point.
What does the height of the cone indicate?
The height of the cone is infinity and, as such doesn't really tell you anything useful.
And what do the different color rings in the cone indicate?
The coloured rings are arbitrary and don't tell you anything useful.
H: Voh value not provided
I have this EEPROM IC where I want to check the logic level compatibility with another IC.
But here Voh value is not given. I have observed this in my other IC also.
Why is the Voh value not provided? What shall I take as Voh?
Thanks
AI: This device uses an I2C interface so the SDA output is open-collector (or open-drain). They can only sink current to drive the SDA pin low; they cannot drive the pin high. So there is no specification for Voh or Ioh.
The SDA line is pulled high by an external pull-up resistor that you must add to the circuit. The value of Voh for SDA is determined by the value of the pull-up resistor and by the sum of the leakage currents through all of the devices connected to the SDA signal.
H: Headphone sensitivity/efficiency
Most commercial headphones are advertised to have sensitivity greater than 90 dB/mW. But to my understanding, even with 100% efficiency that's not possible (if we use 10 log (I/I0) where I = 1 mW, we get 90 dB). So, is this an exaggeration or am I missing something?
AI: You are mixing things up. When the specification says 90 dB per milliwatt, it is referring to a sound pressure level of 90 decibels being produced close to your ear; you cannot infer any measure of power efficiency from it at all.
H: What does it imply when substrate terminal of Nmosfet is short-circuited with the source terminal?
What does it imply when substrate terminal of N-MOSFET is short-circuited with the source terminal?
AI: Since this has the appearance of a "homework" question, I'm not going to provide a direct answer. Instead, I offer the following hints:
What can you say about the silicon structure between the substrate and the source/drain terminals? What happens when you short one of them out?
H: Why this H-bridge with only N-channel FETs didn't work?
Based on this answer here on Stack Exchange and the Motorola article it cites, I've designed a dual H-bridge motor driver with push-pull transistors to switch the FETs faster, laid out a PCB in KiCad and built it.
The restrictions were to use only N-Channel FETs found in an old computer motherboard, and I had no gate drivers (actually I've bought a couple IR2110s, but they will take a long time to arrive).
When I tested this first version with a toy RC helicopter motor, it ran a couple of times, but eventually half of each H-bridge (there were 2 of them) got shorted (lots of smoke and all). Everything ran on 2 li-ion cells (7.4V) for VCC and a third small phone cell (3.7V) in series for a total of 11.1V on Vdrive.
I used the multimeter to check and looks like IMZ1A couldn't pull the high side FETs down for some reason. Initially they could, but then they stopped working. The 2N7002s were all fine. Tried replacing both the shorted power FETs and IMZ1As for new ones, but the same problem happened again.
Tired of burning stuff, but still trying to prove the point a simple N-channel only bridge could be designed, I made this new circuit:
This one actually worked and is still working now, despite being very simple. The question is: why didn't the first circuit work? Is there anything wrong with the design?
EDIT: Acquired some waveforms for the last circuit:
This one under no load condition
And this with the motor. Measured armature resistance was about 2.2R and inductance about 240uH.
AI: Both your circuits suffer from shoot-through issues. This is what happens when you do not give each transistor an independent control signal. The problem may be worse in your top circuit since the high-side gate driver is much more effective than the low-side gate driver.
On the other hand, I don't know what R4 and R12 are supposed to be for, since they just get in the way and could be making your high-side gate driver work worse than the lower one. Either way, the result is the same: worse shoot-through.
You stated in a now-deleted reply that R4/R12 were there to limit current through Q3. This is not actually a problem, because when Q3 is on, the current in one branch is already limited by R3. In the other branch, the current that flows through the base-emitter of Q5B (which also flows through Q3) would blow out both Q5B and Q3...except it doesn't, because Q5A is off. Get rid of them.
Finally, there is a much more fundamental issue that both your circuits suffer from, which may require a re-design, thus making all the previous points moot anyway: your gate driver does not drive the high-side MOSFET gates with a voltage relative to the source. It drives them with a voltage relative to ground, but the high-side MOSFETs cannot see the ground, nor do they care what it is. All they care about is the voltage between their gate-source terminals.
Think about it: suppose your gate driver turned on and current was flowing through the motor. The output node of the half-bridge connected to the positive supply would rise in voltage towards Vcc. Thus, the source voltage for the high-side MOSFET would also rise. If your gate voltage is referenced to ground, then the Vgs applied to the MOSFET rapidly decreases, turning the MOSFET off (or worse, leaving it partially on). This is why you need floating gate drivers for the high-side MOSFETs, and this is what fundamentally makes all-N-channel H-bridges (and half-bridges) more complex.
The simplest approach is probably to add a bootstrap diode and capacitor to your high-side drivers, but this comes with the limitation of being unable to run 100% duty cycle. Every time a low-side MOSFET turns on, the bootstrap capacitor on that side will be connected to ground, allowing it to charge up (refresh) through the diode. When the low-side MOSFET opens, the capacitor will float up with the high-side MOSFET's source voltage, and the diode prevents the bootstrap capacitor from draining. Thus, the low-side MOSFET has to turn on periodically (usually through normal operation) to refresh the bootstrap capacitor, which acts as the floating supply for the high-side MOSFET gates.
H: Photodiode to an ADC
Please bear with me as I'm not too experienced with this.
I'm attempting to use an ADC to get readings from a photodiode. From what I understand, I should likely use an op-amp as the range of the current that will be provided by the photodiode is low (0-150 uA), and I can output a voltage required by my ADC (0-3.3V).
To get an idea of what I should be looking for, I used the site here
The above is what I produced from that site. However, I noticed on the second op-amp's (LT6202) datasheet that it has a minimum output current of 30mA. I was unable to find any reference to a maximum analog input current in the ADC's datasheet, and I wonder if this is safe as an input, or should I go about finding different op-amps?
The selected photodiode (TEMD6010FX01)
The selected ADC (MAX1249)
The first op-amp (LTC6240HV)
The second op-amp (LT6202)
AI: The max input current is most likely the mux leakage current.
By inspecting the equivalent input circuit, it is high impedance, as there are two capacitors that capacitively couple the input signals to the ADC core. These currents are likely to be negligible, and the current that the LT6202 needs to source is 1uA, which is well within its capability.
ADCs generally keep the input impedance high (input current low) because any load from an ADC will contribute to error when sampling/switching.
H: Timer Protection Circuit
I need to create a circuit that sends out a signal only if the input is on for longer than a specified time. Let's say 20 seconds for this example. So if the input to the circuit was pulled high for 10 seconds no output would be sent, but if the input was pulled high for 21 seconds an output would be sent and it would be held until reboot or a reset button is pressed. I looked into 555 timer circuits, but I couldn't figure out how to achieve this without building a lot of discrete logic. I am also not allowed to use a microcontroller for this, but could potentially use an FPGA or PLC.
AI: You can use the NE555 as an RS flip-flop, i.e. without using the discharge / timing function. The timing can be built without using the 555.
Noting that RESET can override TRIG, which can override THRES, you could make the following circuit.
R3 and R6 have been implemented as buttons, but can easily be adapted to the type of input signal you want. Their resistor value is R={if(time< T1 ,1G,if(time> T2 ,1G,1))}, so it's 1 Ω between time T1 and T2 and 1 GΩ elsewhere.
When the BUTTON signal is low, M3 stops conducting, therefore C2 is no longer shorted to ground and starts charging. When the charge time reaches about 20 seconds, M1 will start conducting and will short TRIG of the 555 to ground. That causes the output of the 555 to be set.
OUT is tied to mosfet M2 that keeps the TRIG signal low(1).
As soon as the BUTTON signal is released, M3 shorts C2 again, and another 20 seconds are needed to turn on M1.
When RST of the 555 is pulled low, the 555 resets.
I'm not sure what the initial state of the 555 is, therefore I added pull-up resistor R2. C3 is not required.
(1) If M2 fails taking over fast enough, you could add a small capacitor between TRIG and GND. Because TRIG is pulled low by the capacitor at start up, you'll also need to add a capacitor between RST and ground with a bigger RC time.
EDIT
Triggering M1 at its threshold voltage isn't very accurate. You can increase timing accuracy by replacing M1 with e.g. an open-drain comparator (with a voltage reference, like the TLV3011/2)
H: How to increase pcb trace width after fabrication
I am new to PCB design and the first PCB I designed is burning. I checked some traces and found this:
C18 is 0.1uF and rated 100V. The problem is that the slim trace width is only 0.2mm. So maybe this might cause some burning. Is there any way to fix this issue?
AI: With a 20°C rise over ambient, the vias could still support 0.9A, which should be plenty for most microcontrollers.
The parasitic inductance of the via, around 1.4nH, might pose a problem, with some ringing in conjunction with the caps.
The problem is that if you are going over 1A, the vias might heat up. For traces over 1A, the only solution is to increase the amount of conductor.
The first thing I would do is make sure that it is the vias that are the problem, if you have a thermal camera or a temperature probe then I would take some measurements of the IC's and make sure they are not the cause of the burning.
Only the traces that carry more than 1A need to be increased.
If I were doing this with a 2-layer board, I would find a PCB drill (which can be had online for cheap) and drill out the vias that needed the extra current, then solder a larger wire from the top to bottom layers.
If this were a 4-layer board I would not attempt to drill, because of the potential for shorting on an inner layer. I would find a large gauge wire, somewhere from 30 to 20 AWG, and solder it from the bottom to the top around the outside edge of the board. This is not an elegant solution and could create noise problems.
H: Increase emf received from a coil without changing flux source
I have a situation where I'm trying to increase the amount of power received by a coil. To my understanding, I can increase the emf (voltage) by doing one of 4 things:
Increase the number of coils (\$ N \$)
Increase the field from the source by increase the current (\$B\$)
Increase the area of the coil (\$A\$)
Decrease the time it takes to change field (\$t\$)
Which is coming from the following equations below:
emf Equation:
$$\epsilon = -N\frac{\Delta \Phi}{\Delta t}$$
where
$$\Phi=BAcos(\theta)$$
\$A\$ is the area of the coil, and \$B\$ is the flux density given by $$ B=\frac{\mu_0 I}{2\pi r} $$
So without changing anything about the source of the field, the only way I can increase the emf of the coil is to change the number of turns, or change the area, right?
Let's say my coil wraps around the source of the field, with the source of the field running through the middle as shown below, and the coil is a square coil with equal height and width.
I'm wanting to know if I can do one of the following to increase the emf.
1) Can I simply increase the area of the coil by increasing the height as shown below (note: I might do this if my width is limited but my height isn't)? I'm not sure if this will have any negative effects since it's not as uniform in all directions, and if there are negative effects, how big of a deal they are.
2) Can I keep my square coil roughly the same shape width- and height-wise, but then wrap it around my source in a helix form? If I wrapped my wire around a long thin rod, and then wrapped that rod around my source, I'd get something like below. My thought is that I would increase my area and the number of turns, and that it might be easier to make than simply increasing the height as seen in the previous diagram.
I feel like this helix method will have some downsides. I'm thinking that the field created by the coil will be fighting itself, thus reducing the overall emf... maybe this can be reduced by stretching the helix out... but by doing that I'll be increasing my \$\theta \$, which will reduce my \$ \Phi = BA \cos(\theta) \$... maybe the increase in turns or area will overcome this decrease from the angle... I'm not sure. Is a helix coil even done?
I feel like I could achieve the same thing with multiple coils in series like the diagram below. I think I would run into the same issue of my fields fighting against each other, but my \$\theta \$ would still be zero, so I wouldn't be losing any energy from my angle.
Do any of these methods sound like a valid way of increasing my emf? Are there better/easier ways of doing this apart from just increasing the number of turns in my coil, or increasing my coil in both width and height? Thanks for any feedback!
AI: Look at current transformers:
Source:
https://www.electronics-tutorials.ws/transformer/current-transformer.html
Remember that the magnetic field of a wire flows with the right hand rule with the magnetic field flowing in a circle around the wire:
Source: http://hyperphysics.phy-astr.gsu.edu/hbase/magnetic/magcur.html
Building a coil directly around a wire like the helix method will have some downsides: the magnetic field lines will be perpendicular to the coil's turns, so the coil won't pick up much magnetic field and the current will be close to zero.
So build a coil perpendicular to the field lines, like the current transformer.
H: Requirements for PCIe Bifurcation on SBC
I'm looking for clarification on what actually needs to happen for a device to support PCIe bifurcation. Some forums say it's motherboard dependent, making me think it's not tied to the processor I choose, but all of that is in the realm of PC hardware. My application is on a single-board computer (SBC). To clarify, my application is splitting the x4 PCIe 2.1 lanes from an RK3399 into 2 x2 lanes for 2 SATA adapters. As far as I'm aware, bifurcation is what I want, but even that I'm not completely sure of. Sorry if this is too broad, but nowhere online can I find the information I'm looking for.
AI: It’s entirely dependent on the root complex and its capabilities. Bifurcation is a fairly recent development only showing up in non-server x86 motherboards about 1-1/2 years ago (e.g., Intel x299 / AMD x399.) Server motherboards have supported it a bit longer using switches.
Not only that, even if the hardware supports it, the BIOS needs to support it, too.
So it’s a question for Rockchip. My guess is the RK3399 does not support it. (Followup: it doesn't.)
That said, if you’re just doing SATA then you don’t really need bifurcation, the PCIe-SATA bridge takes can take care of that (4-port PCIe - SATA adapters are common and cheap.) You would only need to bifurcate if you wanted direct tie to PCIe / NVMe. Or maybe I’m misunderstanding your question? |
H: How to power up a project with 3 devices, between 0 and 12 volts
I'm working on a project which involves a DC heating element, a computer fan and an Arduino/ESP8266. I'd like to use one power supply for those devices.....
The requirements:
The heater needs a steady 12V / 1A DC
The fan needs 0 - 12V to operate
The Arduino/ESP8266 needs 3.3 or 5 volts
Here in Belgium, we have AC 220V mains electricity. I'd like to use 1 power supply (for instance of a laptop)
Here is a schema of the project:
I have 2 questions:
How many volts/amps does my power supply need to get those 3 things working safely, without overheating?
How do I connect the devices to the power source so that the Arduino/ESP8266 isn't getting too much voltage/current when the heater is switched off or the fan is completely shut down?
Looking forward to some suggestions!
AI: Finding a laptop power supply with dual secondary voltages might be hard. How about a small power supply from a desktop computer? They have all the voltages you need and great spike protection. I think a 50 VA / 50 W power supply would be all you need, if you can get one that small. Good luck, have fun.
H: What are some good replacement ICs for DIP UARTs?
Recently, I have been developing an SBC based on the Zilog Z80 microprocessor. Since my first design, a serial connection has been essential for the board to communicate with a PC or terminal. For this purpose, I have been using Zilog's own Z80 DART (or SIO/0) because of how easy it is to interface with the Z80. However, it has been getting harder and harder to find distributors who sell these chips or similar ones, such as the 6850, for a reasonable price. Is there a newer chip or technology that I could use that would come in a DIP package and is relatively easy to interface with the Z80?
AI: Newark/Farnell have 1,931 units of NXP's SC16C550 available. Stock up now!
Other possibilities include the SC16C2552 and SC16C752.
"But they aren't DIP!" you say. Well there is simple solution for that:-
SMT Breakout PCB for 48-QFN or 48-TQFP
I would just use the SMD part directly, though - it saves board space and is easier to solder than DIP.
H: (THEORY QUESTION) If a diode was not placed in a buck converter, when the switch is off would the load still be powered on?
(For Theoretical Purposes and Understanding assume the diode is not in the circuit)
I just have a bit of confusion regarding the buck converter when the switch is off. Since the current through an inductor cannot change instantaneously, when the switch is turned off, current will still flow towards the switch. Then a charge will build up on the switch terminal, potentially causing a flashover etc. (I get this is why we need the diode).
However, as current is still flowing through the load when the switch is off, it will still be powered on, right?
AI: The catch diode ensures a proper return path for the inductor's current. Without the diode there is a risk of damage to the MOSFET switch and a greatly reduced output.
There is also a possibility of incorrect polarity at the output, possibly causing damage or drawing excessive current from the source. The diode solves many problems on both the ON and OFF cycle of the MOSFET.
To answer your question: yes, the current would keep flowing from source to load if the MOSFET stayed in an ON state. However, Vout would equal Vin, with no voltage or current control provided by the SMPS IC.
If the diode is missing, there can be NO current flow if the MOSFET is OFF, as the path has been broken. The capacitor will quickly drain to zero volts.
If the diode is missing and the switch turns OFF after being ON, then the stored inductor current will pass through the MOSFET switch, likely destroying it. MOSFETs fail as a short, so it would have to be replaced.
H: DIY USB Power Hub
I'm looking to see if assembling my own USB hub is an option. I need to charge 20 devices via Micro USB at once. Data transfer is not needed.
Is anything stopping me from getting a 5V 20A power supply and wiring up 20 Micro USB cables in parallel? Does any protection circuitry need to be added?
Thanks!
AI: Is anything stopping me from getting a 5V 20A power supply and wiring up
20 Micro USB cables in parallel
Not really. If you want something simple, just put a 2-3 A resettable polyfuse on each VBUS wire on your board, and use good grade full-sized micro-B cables. The simplest way to provide your devices with up to 1.5 A is to short the D+ and D- wires together and leave them floating (in each cable). This connection will inform attached devices that the power source is a "Dedicated Charging Port", aka DCP (which is an adoption of a Chinese Federal Standard). Most devices understand this "charging signature" and will take about 1 A from the ports (up to 1.5). So you might need a 30-Amp power supply.
If your devices are not using "fast charging", you can leave D+ and D- unconnected, and devices will take no more than 500 mA (if they are well-behaved devices). |
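A quick budget check of the numbers in this answer (20 ports, ~1 A typical and up to 1.5 A each under DCP) can be sketched as:

```python
# Power-budget sanity check for the 20-port charger.
ports = 20
typical_a = ports * 1.0   # well-behaved devices drawing ~1 A each
worst_a = ports * 1.5     # DCP allows up to 1.5 A per port

print(typical_a, worst_a)   # 20.0 30.0 amps
print(5.0 * worst_a)        # 150.0 W worst case at 5 V
```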
H: Is it possible using two 3.3v LDOs from one SMPS AC-DC 5V to power 2 MCUs?
I'm planning to use two 3.3V LDOs to supply power for two MCUs, one an STM32, the other an ESP-12. The input power for these two LDOs comes from one 5V, 3W SMPS module (HLK-PM01). The reason for two separate LDOs: I'm using the STM32's ADC and I think a power supply separate from the ESP's will make the ADC more accurate. Another reason is that there is some distance between the HLK-PM01 and the ESP, with a trace length of almost 25mm. The STM and ESP communicate through UART, and both use the same I2C F-RAM memory IC: the STM reads the ADC, calculates something and saves it to the memory IC; then the ESP reads the data from the memory IC and posts it to a server. All share the same ground. The LDO is the RT9013 Datasheet.
So is it possible and will it work? Or is it overkill?
Thanks very much
AI: Yes, it will work. As long as both devices share the same ground I don't see any reason why a problem should arise. But I don't see any reason to do it this way either.
It will be no problem to connect both MCUs to the same regulator. Just make sure to have good filtering for the analog supply of the STM. If you check the STM documents, they explain in great detail what should be done to achieve clean input voltages for the analog supply and reference: put an RC filter in front of the analog supply input, or even use a ferrite bead. This will attenuate switching noise from the MCUs and the SMPS.
● The VREF+ pin can be connected to the VDDA external power supply. If
a separate, external reference voltage is applied on VREF+, a 100 nF
and a 1 μF capacitors must be connected on this pin. In all cases,
VREF+ must be kept between 2.4 V and VDDA.
● Additional precautions can be taken to filter analog noise:
- VDDA can be connected to VDD through a ferrite bead.
- The VREF+ pin can be connected to VDDA through a resistor (typ. 47 Ω). |
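As a rough illustration of the RC filter mentioned above (values are my own assumptions, not from the STM32 datasheet), the cutoff lands well below typical SMPS switching frequencies:

```python
import math

# Hypothetical RC low-pass in front of the VDDA pin: R = 47 ohm, C = 1 uF.
R, C = 47.0, 1e-6
f_c = 1.0 / (2 * math.pi * R * C)
print(round(f_c))   # ~3386 Hz: far below a typical 100 kHz+ SMPS ripple,
                    # so switching noise is strongly attenuated
```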
H: What are the best sources for learning and practising AVR microcontrollers?
I am a computers and systems engineering student and I am very interested in microcontrollers, so what are the best sources for learning and practising AVR microcontrollers?
AI: That certainly depends on your previous knowledge.
Previous experience with embedded programming:
Do you know some Assembler? Do you know some C? Have you experience with other controllers?
If all that is true for you, I would suggest just taking a look at the datasheet and starting to work on some small projects. Learning by doing is always great. But you should already have the basics mentioned above.
Completely new to µC / Programming:
In that case the way to start (again) depends on you. If you want to become a computer and systems engineer, it really makes sense to know some assembler (not because you will need it a lot, but because it gives you a better understanding of the workings and limited resources of MCUs). There are great tutorials out there on the web and I cannot really recommend one in particular, so just look for yourself.
But learning assembler can be quite tiring, because progress is not really fast. So if you want more "motivation" it might be a better idea to get started with some C programming on the AVR to get some small projects done, and only then switch to assembler - you will come to appreciate the benefits of high-level languages even more ;)
Recommendation
There are many, many AVR tutorials out there, all written in different styles. Just pick one you like. There are forum questions about basically every mistake a beginner can make. AVRs are nice to learn on, because you can google basically everything that comes to mind.
And the best way to learn is always trying and researching on your own. If you have any specific questions about some implementation you are working on, feel free to come here again and we will be happy to help. |
H: Does Impedance Matching Imply any Practical RF Transmitter Must Waste >=50% of Energy?
According to the maximum power transfer theorem, when a fixed source impedance is given, the load impedance must be chosen to match to the source impedance to achieve maximum power transfer.
On the other hand, if the source impedance is not out of reach from the designers, instead of matching the load to the source impedance, the source impedance can simply be minimized to achieve maximum efficiency and power transfer, it's a common practice in power supplies and audio-frequency amplifiers.
However, in RF circuits, to avoid signal-integrity issues, reflection loss, and damages to the high-power RF amplifier due to reflection, impedance matching must be used to match all the source impedance, load impedance, and also the characteristic impedance of the transmission line, and finally the antenna.
If my understanding is correct, a matched source and load (for example, an RF amplifier output and an antenna) form a voltage divider, each receiving half of the voltage. Given a fixed total impedance, this means 50% of the power is always wasted heating the RF transmitter itself.
So, is it correct to say impedance matching implies the efficiency of any practical RF transmitter cannot be greater than 50%? And any practical RF transmitter must waste at least 50% of energy?
AI: If your power source is a zero ohm output voltage source, followed by a 50 ohm resistor, then yes, what you think is correct.
However, practical RF amplifiers (at least ones designed to be efficient) are never built like that. They tend to have a low impedance common emitter or source stage followed by reactive impedance matching, all designed to operate into 50 ohms.
Interestingly, if you buy a general purpose signal generator, the output is usually built as a voltage source followed with a real 50 ohm resistor, as efficiency is not an issue, and having an accurately defined output impedance over a very wide frequency range is the main design goal. |
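The question's 50% figure only applies to the resistive voltage-source model; a quick sketch of load efficiency versus source resistance makes the distinction concrete:

```python
def efficiency(r_source, r_load):
    """Fraction of total dissipated power that ends up in the load, for a
    voltage source with series resistance r_source driving r_load."""
    return r_load / (r_source + r_load)

print(efficiency(50, 50))   # 0.5 -> the 50 % ceiling of the "matched resistor" model
print(efficiency(1, 50))    # ~0.98 -> a low-impedance PA stage matched reactively
```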
H: Impedance matching with complex impedance source
The method/steps to use a VNA to determine a matching network for a 50 Ohm (or whatever impedance) antenna seem mostly clear to me now. My current knowledge assumes though the transceiver (source) is basically the same impedance as the desired impedance (well, complex-conjugate) of the feed line + antenna. That's the whole point after all, to match them to the transceiver.
What I don't really understand, is how to tune the system when my transceiver is known to have a complex impedance other than the antenna (in my specific case the manufacturer specifies it to be 35+10j Ohms).
Say my source is Zs and my load is Zl. Is my goal then, when measuring the impedance using a VNA, to see Zl when my probe is placed at the antenna in, and to measure Zs when measured at the point of the transceiver? Should the transceiver be connected during these measurements at all? Should it be powered (but not sending)? How does this comply with the general advice of connecting the probe at the point of the matching network?
AI: This is not at all clear. To stay healthy, radio transmitters need a certain load. If that load is also used as your receiving antenna, the receiver in the same box is probably designed to work well with just the same antenna. Do not expect that you can measure the optimal antenna impedance by connecting a network analyzer to the antenna connector of your transceiver box. The result will not tell you anything about which load impedance the system has been designed for. You must find it from the system specifications. Some radio transceivers are their own network analyzers as well; they have an antenna tuner and a special tuning mode. I guess this is not your case.
If it happens that the right load to be connected to the transceiver's antenna connector is (35+10j) Ohms, you should connect the antenna+cable+possible tuning elements to your analyzer and tweak the connected system until your analyzer shows impedance = (35+10j) Ohms at the operating frequency. You use the analyzer in one-port mode. |
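As a side note on why conjugate loads matter for a complex source impedance (purely illustrative; as the answer says, the manufacturer's specified load is what you should actually target):

```python
# Power delivered to a load Zl from a 1 V source with Zs = 35+10j ohms.
Zs = 35 + 10j
Vs = 1.0   # amplitude; the absolute scale doesn't matter for the comparison

def p_load(zl):
    i = Vs / (Zs + zl)                   # loop current
    return 0.5 * abs(i) ** 2 * zl.real   # average power dissipated in Re(Zl)

candidates = [35 - 10j, 35 + 10j, 50 + 0j]
print(max(candidates, key=p_load))   # (35-10j): the conjugate of Zs wins
```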
H: What is a "mechanical pad"?
I've bought an FPC breakout stick from Adafruit. In its datasheet, it mentions a "mechanical pad", which is the exposed copper bar near the connection pads. What does it do, and can I utilize it while soldering the FPC cable onto the board?
Here is the picture of the stick:
AI: The "mechanical pad" is where you attach the housing of the flex connector you are using.
The adapter board is not intended to have the flex soldered to it.
The idea is that you solder a proper connector for your flex on the adapter board. Then, you solder connections to the provided through hole points.
This lets you connect your flex to a circuit, then unplug and reconnect it without destroying your flex. |
H: Confusion with RPM calculation of a DC fan from its datasheet
I want to calculate pulse periods from the tacho output of a DC fan. I want to read the pulses with an MCU, convert them to RPM and display the result on an LCD.
But I'm confused with the following description of RPM conversion from the manual:
It seems they mean I should read two consecutive periods and obtain T0.
So N = |60/T0| rpm (?)
My confusions are:
1) Why would I need to sum two periods rather than use a single one? Aren't they almost the same for a constant fan speed? (In code it will be more difficult to measure two consecutive periods.)
2) Why is there a 1/4 on both sides of the equation? And is N = 60/T0, where T0 is the sum of two consecutive pulse periods?
AI: Reading between the lines of the specification, it sounds like the tacho output is the result of two 'flags' of something, perhaps magnets to a Hall sensor, or optically opaque material between a LED and photodiode. This results in two 'highs' and two 'lows' per revolution.
Manufacturing tolerances will mean that the length of one high is only approximately 1/4 of a revolution. If you timed the length of one high, and multiplied by 4, you would only get an approximation of one revolution period, not exactly. To get it exact, you would need to time over all four high and low output pulses.
It may be the case that an exact speed doesn't matter to you, in which case use one, or a high/low pair. It may be that you could time over one pulse, and correct it for your particular fan. However, if you were building several of these, then the correction could be different for each fan. Using all four pulses means that all fans would respond the same way, without correction.
It may be that you use a high/low pair, and improve resolution by averaging. However, due to manufacturing tolerances, it's likely that one high/low pair does not have the same duration as the other high/low pair. If your averaging was asynchronous, you'd get an unnecessarily noisy result if you sometimes averaged only the first pair, or only the second, or a mix.
What the specification is telling you is that only using a complete cycle of pulses will you get a result that's accurate every time. |
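The arithmetic the answer describes - timing one full cycle of pulses so flag asymmetry cancels - can be sketched like this:

```python
def rpm_from_timings(timings_s):
    """RPM from one full cycle of tacho timings (two highs + two lows =
    one revolution for a 2-pulse-per-rev fan). Summing all four cancels
    the error from flags that aren't exactly 1/4 revolution."""
    t_rev = sum(timings_s)   # seconds per revolution
    return 60.0 / t_rev

# Asymmetric flags: the quarter-timings differ, but their sum is exact.
timings = [0.006, 0.004, 0.0055, 0.0045]    # one revolution = 0.02 s
print(round(rpm_from_timings(timings), 3))  # 3000.0
```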
H: 100s of devices on SPI bus with daisy chaining - theoretically possible but who's done it?
I need to control 512 knobs digitally using 128 digital potentiometers. The AD5204 supports daisy-chaining, but what do I need to be careful of when interfacing this with an ATmega328P?
I plan to split this across 8 chip select lines with 16 devices on each chip select (CS) line. It looks like the digital out is powered by the device so to me it looks like it will work.
What am I missing?
Will the ATmega drive so many devices? If not, how should I buffer the bus?
The CS will be held low for a 176-bit command so that the commands reach the last device in the chain.
Noise considerations - erm, low-pass filters and load resistors?
AI: I think you're good to go. A few points I noticed tho'...
your SDO requires a pullup. So clock speed needs adjustment accordingly (see datasheet)
the 'mega SDO will be driving 8x chips, but I think it should be ok.
use standard best practices in terms of PS decoupling and layout.
datasheet mentions SPI 'compatible'. I could not quickly find where the differences are, other than non-8-bit codes. But since you are daisy chaining 16, it shouldn't be a problem.
Max speeds will depend on your trace lengths, so derate speeds accordingly.
You'll have to ENSURE the exact number of bits/clocks, otherwise your chain could get seriously messed up!
Cheers |
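For reference, the 176-bit figure follows directly from the AD5204's word length:

```python
# AD5204 daisy-chain command length: each device shifts an 11-bit word
# (3 address bits + 8 data bits).
bits_per_device = 11
devices_per_chain = 16
total_bits = bits_per_device * devices_per_chain
print(total_bits)   # 176 clocks while /CS stays low
```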
H: RC differentiator giving a higher output amplitude than input amplitude
This is the circuit:
simulate this circuit – Schematic created using CircuitLab
And this is the output:
I am unable to understand the increase in output amplitude as compared to the input's amplitude. Mathematically it makes sense - rate of change is quite high. But electronically I am having a hard time wrapping my head around it. How can additional voltage be generated with R and C?
AI: Think of a capacitor as "liking to keep the voltage across it constant" - at least in the short term.
Figure 1. Voltage difference analysis.
Just prior to the squarewave step down at (1) we can see that the right hand side of C1 is at 0 V so there is -1 V across the capacitor. Immediately after the step down there is still -1 V across the capacitor. Because the left side jumped from +1 to -1 the right side is "kicked" from 0 to -2 V.
We get a similar but opposite effect at (2).
In both cases the voltage across the capacitor is maintained during the instant of the square-wave transition, so the full 2 V step is transferred to the output, and is followed by the RC discharge. |
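The "kick then decay" behaviour is easy to reproduce numerically; a minimal sketch with assumed values (R = 1 k, C = 1 uF, a +1 V to -1 V input step):

```python
import math

R, C = 1e3, 1e-6
tau = R * C   # 1 ms time constant

def v_out(t, step=-2.0, v_before=0.0):
    """Output of the RC high-pass after the input steps by 'step' volts at
    t = 0, starting from an output of v_before."""
    return (v_before + step) * math.exp(-t / tau)

print(v_out(0.0))                  # -2.0: the full 2 V step appears at once
print(abs(v_out(5 * tau)) < 0.02)  # True: essentially decayed after ~5 tau
```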
H: nodal analysis, current direction
Right, first the diagram. I tried using the schematic function, but it never actually added the schematic.
I picked this because it illustrates my confusion in the simplest schematic I could find.
We've got three nodes here, with the bottom (N3) being the ground node, current directions are provided for N1, so writing up KCL for the nodes give the following.
$$N1 :2mA - \frac{V1}{3000}\ -\frac{V1-V2}{6000}\ = 0$$ (current enters from the 2mA supply, leaves through the rest of the legs)
$$N2 : \frac{V1-V2}{6000} - 4mA - \frac{V2}{12000} = 0$$ (current enters from the 6k resistor, leaves through the rest of the legs)
This in turn gives out the correct node voltages for V1 and V2. (\$\frac{-12}{7}\$ for V1 and \$\frac{-120}{7}\$ for V2)
However, I am supposed to be able to arbitrarily define the direction of the current through the 12k ohm resistor, so if I decide that current flows into node 2 instead, and use the following N2 equation
$$N2 : \frac{V1-V2}{6000}\ - 4mA + \frac{V2}{12000}\ = 0$$
I get the wrong result for both V1 and V2, so clearly the direction of the current across the 12k ohm resistor matters. What am I doing wrong?
AI: Indeed you can define the current direction in which ever way you want. If you want to get the current going into the node N2, it should be \$\frac{0-v_2}{12k} = -\frac{v_2}{12k}\$.
If you add this current, then you get the same equation:
$$\frac{(v1-v2)}{6000}\ - 4mA + (\frac{-v2}{12000})\ = 0$$ |
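You can check both node voltages exactly; a small sketch solving the question's two KCL equations with exact fractions:

```python
from fractions import Fraction as F

# N1: (1/3k + 1/6k) V1 - (1/6k) V2 = 2 mA
# N2: -(1/6k) V1 + (1/6k + 1/12k) V2 = -4 mA
a11, a12 = F(1, 3000) + F(1, 6000), -F(1, 6000)
a21, a22 = -F(1, 6000), F(1, 6000) + F(1, 12000)
b1, b2 = F(2, 1000), F(-4, 1000)

det = a11 * a22 - a12 * a21          # Cramer's rule on the 2x2 system
v1 = (b1 * a22 - a12 * b2) / det
v2 = (a11 * b2 - b1 * a21) / det
print(v1, v2)   # -12/7 -120/7, matching the question
```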
H: Problem with NAND gate in proteus
I was doing a simple logic circuit in Proteus when I realised that something was going wrong.
It looks like the NAND gate provides a high voltage on its inputs from nowhere. No errors are reported, but if I let the simulation run for a while, it always ends with an unexpected close.
You can see in the picture that I'm getting 4 volts from the inputs of a non-connected NAND gate, and you can also see that if you connect the terminals to ground (the ones at the top of the image), it seems to work as expected.
Should the NAND gate have +4V on its input terminals when they are not connected? I thought it should only have a high state on the output terminal if the inputs are not connected or connected to ground.
I'm a newbie; it's never too late to learn. Pls help :D
AI: The 74S37 is a TTL logic IC. If you leave the inputs floating, then they will usually be seen as a logic 1, or the input will 'float' in the unknown region, where it could be a 1 or a 0.
Proteus will use the red or blue logic indicators to show whether something is at a logic '1' (red) or logic '0' (blue). If you were to remove the multimeter, you will notice that the input logic indicators will go grey. This means they are undefined, although most TTL will tend to go high.
For a NAND gate, use pull-up resistors on the inputs to tie them high, then you can drive the input low with your control signal. More on using resistors on logic gates can be found HERE.
This is what I mean in Proteus. You can see that by leaving the inputs floating, they will go to an undefined state, which Proteus will mark as a grey logic indicator:
The voltmeter in Proteus has a modelled 10M(?) resistance which will allow the program to then register it as a logic 1, and the logic indicator will go red. |
H: BQ2057WSN Charging issues
I have designed a battery charging module using the BQ2057WSN IC to charge a 7.4V, 6600mAh Li-ion battery. According to my calculation the charging current is set at 625mA.
The input power source to the charging module (DC+) is a 12VDC, 2A SMPS, and the circuit is drawing 625mA from the power source (SMPS). Now, I have connected a multimeter in series with the battery pack, as shown in the picture, to see how much current the battery is drawing. I found the multimeter reading to be 250mA. The measured battery voltage is 7V, which means the battery is not fully charged.
So, my question is: if the charging circuit is drawing 625mA from the power source, why is only 250mA being drawn by the battery while charging? The battery should draw the same amount of current (625mA), shouldn't it?
AI: Summary:
Extra voltage drop in the battery circuit has significant effect on charging as it makes the battery voltage appear higher than it is. Extra voltage drop in the input circuit is allowed for by reducing the voltage drop in Q1 by an equal amount.
Detail:
If the meter has significant resistance on its current range, when the meter is in series with the battery the voltage drop across the meter makes the battery voltage appear higher than it is. If the battery is close to full voltage then the IR drop across the meter can make the battery appear to be fully charged. The charger reduces the current accordingly, the IR drop lessens and Ichg again increases. A balance point will be reached where Vbattery + IR "fool" the charger into maintaining a lower than correct current.
When Vin is 12V and Vbattery is <= 7.4 V the charger drops the surplus voltage - mainly across Q1. If the meter is connected in series and drops some voltage then the voltage across Q1 is adjusted by the control circuitry - so the meter resistance will have minimal effect. |
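To see how little series resistance it takes, here is a toy calculation (the 1 ohm burden value is an assumption, not a measured figure; check your meter's specs):

```python
# Meter-burden effect: the charger regulates on the voltage it "sees",
# which includes the meter's shunt resistance in series with the battery.
r_meter = 1.0   # ohm, assumed burden on a mA/A range
v_batt = 7.0    # actual battery voltage
i_chg = 0.25    # amps, the observed (reduced) charge current

v_seen = v_batt + i_chg * r_meter
print(v_seen)   # 7.25: the pack looks closer to "full" than it is, so the
                # CV loop throttles the current back until a balance is reached
```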
H: Spectral lines vs Envelope of PWM wave
On the wiki page of 'Duty Cycle' there is a nice animation that shows the frequency spectrum of PWM wave as its duty cycle is modulated from 0 to 100%.
The animation shows the 'spectral lines' as well as the 'envelope' of the spectrum.
What I understand is that the 'spectral lines' are the individual frequency components: the DC value, followed by the 1st, 2nd, 3rd... harmonic components.
What I don't understand is the meaning of 'envelope'. What does it signify practically? Are there infinitely many sub-harmonic components of the waveform whose limiting values give the 'envelope' of the spectrum, or is it just the mathematical function that joins together all the harmonic components?
AI: In that animation, envelope refers to the continuous line that connects the discrete frequency points. It need not have a closed-form function, but in this case it does, related to the absolute value of the sinc function, \$\left|{\frac{sin\left({\pi x}\right)}{\pi x}}\right|\$ |
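A short numeric sketch of this: the discrete line magnitudes of a pulse train with duty cycle D are proportional to D·|sinc(nD)|, and the continuous version of that expression is exactly the envelope:

```python
import math

def sinc(x):
    """Unnormalized-argument sinc: sin(pi x)/(pi x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

D = 0.25   # 25 % duty cycle
lines = [abs(D * sinc(n * D)) for n in range(9)]   # n = 0 is the DC term

print(lines[0])          # 0.25: the DC component equals the duty cycle
print(lines[4] < 1e-12)  # True: n = 4 lands on a null (4 * 0.25 = 1)
```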
H: Async SRAM Chip. Write Cycle. Data inputs timings
I'm working on my home project of building the 8-bit computer and now I'm on the RAM building stage.
I'm going to use this chip as my RAM: UTRON UT62256C (datasheet). This chip is to be connected to a common bus, to which 8-bit registers are connected through 3-state buffers.
My question is about write cycle timings (see the diagram below)
The doc says that after !WE goes down there is a period of time twhz when signals on the I/O pins must not be applied: "4. During this period, I/O pins are in the output state, and input signals must not be applied."
Question #1: Does that mean that I CAN NOT set up the data upfront and then just pull !WE low for some period of time? If I do so, will it damage the chip?
Given that !CE is always held LOW, I assumed that my write cycle could look like this:
Set up the address
Set up the data on the bus
Make !WE low for a certain period of time.
As it's an educational project, I wanted the write cycle to be like this because it simplifies the design for a manual mode of putting data into the SRAM. I was going to use DIP switches to set up the data and the address, and one button to move !WE from high to low, maybe with a 555 timer to set the pulse length.
Question #2: If I must not put data on the I/O pins before twhz has elapsed, could you please advise a circuit that would allow manual programming with DIP switches?
I was thinking about adding a 3-state buffer between the I/O pins and the DIP switches, and some circuit that generates two pulses with the right timings, triggered by pushing the button:
First, !WE goes down for a certain period of time; for example, a 555 timer could be used.
The !WE pulse triggers the second 555 timer, which generates the !OE pulse for the 3-state buffer.
See the picture below. Looks like an overcomplicated thing :)
Thanks in advance for your replies.
AI: The doc says that after !WE goes down there is a period of time twhz when signals on I/O pins must be not applied: "4.During this period, I/O pins are in the output state, and input signals must not be applied. "
This is a poorly written data sheet. While the concern mentioned can exist, the previous note #3 does a poor job of expressing the reality that the problem only exists if the /OE is held low throughout a write, which would not be a common or generally wise practice.
Typically, in a processor bus of this era, the chip select /CS would be driven active low by an address decoder subject to contamination and glitching during the change of the address lines from one value to another. As a result, a /CS timed write is not generally used. Nor is it a good idea to have a memory chip driving the data bus surrounding a write cycle, so /OE would not be tied low and the quoted concern would not exist.
Instead, the typical approach is to have the bus master drive the /OE line only if it wishes to perform a read operation, and to strobe the /WR line at the proper time in a write cycle.
By using /OE to control the output buffers, and using /WR to time the writes, you should be able to avoid conflict and be able to safely drive the data lines during all periods of time except those surrounding assertion of the active-low /OE signal. Of course, to read from the memory, you will need a way to disable whatever you use to drive the data lines during a write, and to make sure that those drivers are disabled for a safe window around the assertion of /OE.
You are always able to drive the address lines, but their values only matter at the times required before/during/after a write or read cycle.
A simple way to construct a deterministic memory interface may be to create a finite state machine and include the clock low period as a factor in the generation of /OE, its complement to the write data drivers, and /WR. Keep in mind that a processor itself tends to be a sort of state (or else microprogrammed) machine - with instructions needing multiple clocks (or at least clock phases) to complete. You really only get to one instruction per clock with heavy pipelining and when operating from distinct internal code and data caches. |
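The safe ordering described above can be sketched as a toy model (hypothetical API, not real hardware code - the point is that /OE stays inactive for the whole write, so host and SRAM never drive the bus at once):

```python
class SramModel:
    """Toy async-SRAM bus model illustrating /WE-timed writes."""

    def __init__(self):
        self.mem = {}
        self.oe = 1   # active-low /OE, held inactive by default

    def write_cycle(self, addr, data):
        # Host drives address, then data, then pulses /WE low -> high;
        # the chip latches the data bus on the rising edge of /WE.
        assert self.oe == 1, "never write while the SRAM drives the bus"
        self.mem[addr] = data

    def read_cycle(self, addr):
        # Host releases the data bus first, then asserts /OE.
        self.oe = 0
        value = self.mem.get(addr, 0)
        self.oe = 1
        return value

sram = SramModel()
sram.write_cycle(0x1234, 0xAB)
print(hex(sram.read_cycle(0x1234)))   # 0xab
```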
H: Quiescent current vs GND pin current for Linear Regulator IC power dissipation
I want to confirm that, for a linear regulator IC, quiescent current multiplied by the input voltage accounts for the IC power dissipation(including drop-out voltage loss).
I ask because the datasheets for some of the linear regulators say that this loss is calculated by the GND pin current multiplied by the input voltage. This is found in the thermal considerations section.
While I initially thought of them as the same thing, the datasheets have different values and curves for each.
I'm doing a stress analysis on the linear regulator and want the total power loss of the IC, so which current is definitively used for the IC device dissipation: quiescent current or GND pin current?
Note: I've already calculated the power dissipation from the drop out voltage. I just want the device dissipation.
Some datasheets I was referring to:
https://www.analog.com/media/en/technical-documentation/data-sheets/3012fd.pdf
https://www.analog.com/media/en/technical-documentation/data-sheets/1761sff.pdf
AI: Quiescent current is usually the current measured through the Vin pin. Sometimes it is given for both shutdown and normal operation, so look for the testing conditions in the datasheet. Quiescent current, when specified for normal operation, is usually the number used to find IC power loss, and is usually measured under no load (check the conditions in the datasheet under which it was measured).
However (you already know this but I'll include it for other readers), quiescent current will not give you the total loss of the voltage regulator; this is found by taking $$P_{loss} = \left(V_{in}-V_{reg}\right)\times I_{total}$$ The total current includes the load current and the quiescent current; the quiescent current is usually much lower than the load current (but it depends on your load).
Max device dissipation includes the power dissipated from the quiescent current, and the loss from the voltage drop. Usually quiescent current on modern regulators is so low (in the uA's) it doesn't contribute a lot of power loss to the total calculation.
Ground pin current is usually the current measured through the ground pin when the chip is in normal operation. I would use the quiescent current to find the IC loss.
The easiest way would be to simulate it in a SPICE package under all loads. In LTspice you can do this by alt-clicking a component (like a voltage regulator). |
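Putting numbers on it (illustrative values, not from either linked datasheet):

```python
# Rough linear-regulator stress estimate.
v_in, v_out = 5.0, 3.3
i_load = 0.150   # amps drawn by the load
i_gnd = 0.0005   # ground-pin current at this load (assumed from a curve;
                 # the quiescent figure would be the no-load value)

p_pass = (v_in - v_out) * i_load   # pass-element (dropout) loss
p_ic = v_in * i_gnd                # loss from current into the GND pin
print(round(p_pass + p_ic, 4))     # 0.2575 W total device dissipation
```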
H: Changing output voltage of a buck converter by electronically swapping the feedback resistors
I am going to design a circuit which is going to output some digital signals. The output stage of the circuit will have some level shifters. The level shifters will give the user the flexibility to choose the voltage level (1.8V, 3.3V or 5V) of the output signals. I am planning to place a three-position slide switch on the PCB. According to the position of the switch, one of the MOSFETs will be shorted, and so the corresponding feedback branch will activate and set the output voltage.
Can you please review this idea? I am not sure if the feedback signal will be distorted or not. I think the switching frequency of the buck converter to be around 500kHz. Is this idea feasible? Can you please make some suggestions to improve it, or tell me why it wouldn't work?
EDIT: The switch will be a panel-mount one and will stay in a remote location. That is why I am using the MOSFETs to short the feedback paths.
AI: Two things. First, use two FETs and two wires to control them instead of 3, and second, use a resistor-ladder approach.
You have three low-side resistors, R2a,b,c. Wire them in series, then use the two FETs to short intermediate connections to ground. This is fail-safe and is simple to control directly with an on-off-on SPDT switch.
simulate this circuit – Schematic created using CircuitLab
Exercise for the student: calculate the values for R1 and R2a/b/c based on the Vfb of your regulator. Here's what I came up with for Vfb=0.6V (resistors in kOhms):
How it works:
When M2 and M1 are off, Vout is 1.8V as set by Vref*(1 + R1/(R2a+R2b+R2c))
When M1 is on, Vout is 3.3V as set by Vref*(1 + R1/(R2a+R2b))
When M2 is on, Vout is 5.0V as set by Vref*(1 + R1/R2a)
Note that it is impossible to set any higher voltage than 5V by accident. If both M1 and M2 are on, you still get only 5V because R2b is already grounded by M2. Like I said, fail-safe.
As far as what it does to your regulator, changing voltage will change the stepping ratio. Make sure that the regulator can work at all three settings, and that the inductor is appropriately sized to give acceptable ripple current. |
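The "exercise for the student" can be sketched as follows (assuming Vfb = 0.6 V and picking R1 = 100 k - illustrative values, not necessarily those in the answer's table):

```python
VFB, R1 = 0.6, 100e3

def r_bottom(v_out):
    # From Vout = VFB * (1 + R1/Rb):  Rb = R1 * VFB / (Vout - VFB)
    return R1 * VFB / (v_out - VFB)

r_18 = r_bottom(1.8)   # total ladder: R2a + R2b + R2c
r_33 = r_bottom(3.3)   # R2a + R2b (R2c shorted by M1)
r_50 = r_bottom(5.0)   # R2a alone (M2 shorts the rest)

# Individual ladder resistors, in ohms:
print(round(r_50), round(r_33 - r_50), round(r_18 - r_33))  # 13636 8586 27778
```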
H: Best way to connect balanced audio line to ADC (with volume pot)?
Sorry for this longish question: I'm currently collecting some ideas for my next project, where I want to connect an external balanced audio line-level signal to an ADC with differential inputs (like a PCM1804, though not specifically that one). I'd also like to have a volume pot to boost the signal if necessary.
I've looked accross the internet and found this circuit (an amplifier for a balanced line):
https://sound-au.com/articles/balanced-io.htm
Now this has two problems: first, it has fixed gain (I'd like to adjust that with a pot). If I understood correctly, I'd have to change 4 resistors with one pot at the same time (I know of stereo pots but not quad pots), and as mentioned on the website the resistors would have to be close tolerance (which I'd guess is pretty difficult with something like a stereo pot). Second, it doesn't output a DC offset, which is required by the ADC.
From what I've found out, the DC offset can be worked around by using an off-the-shelf solution like the OPA1632, which allows me to supply the DC offset voltage. This however also has two problems: first, I still don't have a volume pot in the circuit. Second, the OPA1632 is relatively expensive at around $5 (am I too picky?).
One solution I could imagine is to convert my balanced input signal to an unbalanced signal, do the amplification with a pot, then convert it back to balanced and bias both outputs to my ADC's virtual ground. For the last step I'm not even sure if it's as trivial as connecting a capacitor in series with both lines and connecting their outputs over a resistor to the bias voltage.
As for an attempt like the above: is converting a balanced signal to unbalanced and back to balanced a silly thing? At that point I wouldn't even know why my ADC has balanced inputs in the first place. Is it common to have balanced inputs in audio gear while the actual "processing" is done with single-ended circuits? I know balanced connections are useful for suppressing noise pickup on long wires (which I guess isn't such a big problem on a PCB).
I could also avoid doing the analog amplification altogether and do the amplification in the digital domain. However, if I were to set the gain to something like +40dB I'd obviously also boost my ADC's noise floor by that amount. Does this digital amplification even make sense? Am I just going to amplify "analog" noise instead? (I don't have that much experience with these kinds of things.)
Does anyone have any idea which way I should go here? I can certainly understand that good quality can cost money, but if I can get 10x NE5532s or 10x TL072s for a single OPA1632, I'd like to avoid going for the more expensive part.
AI: Does anyone have any idea which way I should go here?
Consider initially the 3-op-amp Instrumentation Amplifier (InAmp): -
Gain is set by one resistor (\$R_{gain}\$) and, if you throw away the final-stage op-amp circuit and take your differential output directly from the two op-amps on the left, you should get what you want. You do need to spend a little money making both R1's matched, but maybe 0.1% devices at 50 cents isn't a biggy.
Differential leakage capacitances either side of \$R_{gain}\$ should ideally be zero for a perfect match but if you are only dealing with audio it won't be much of a problem. |
H: Can I combine water turbine without power loss?
I have this small 12V water turbine generator.
I connected one of these in a pipe running from a container on my roof (height ~6m) to the ground and reached ~9V.
If I buy 6 and put them one after the other along the same pipe, will I have exactly the same amount of energy (force/pressure) but multiplied by 6 ?
I don't think it will slow down the flow of water, or does it ?
AI: You're using a turbine generator to convert kinetic energy (water flow) to electrical energy (the 9V that you're seeing). This will cause a pressure drop, even if you just have one in line with your flow. If you have 6 of them, each will cause a pressure drop, and likely each will produce a little less power than the one above it.
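A rough way to see why is a power budget: the total hydraulic power available is \$P = \rho g Q h\$, and all turbines in the pipe share that one budget. A sketch with an assumed (guessed) flow rate:

```python
# Upper bound on total extractable power: P = rho * g * Q * h.
# All turbines in the same pipe share this single budget.

RHO = 1000.0   # kg/m^3, density of water
G = 9.81       # m/s^2

def hydraulic_power(q_m3_per_s: float, head_m: float) -> float:
    """Total hydraulic power (W) of flow q falling through head h."""
    return RHO * G * q_m3_per_s * head_m

q = 0.2e-3    # 0.2 L/s -- a guessed flow for a small rooftop pipe
head = 6.0    # m, from the question
print(f"Total budget: {hydraulic_power(q, head):.1f} W, shared by all turbines in the pipe")
```

Six turbines cannot each deliver the single-turbine output; together they can at best split this total, minus losses.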
How much that pressure drop is will depend on your generator. Each turbine might output roughly the same amount of power and it might seem like the pressure is unaffected, but there will be some pressure drop at each turbine. |
H: INA168 vs INA169 High-side current sensing
I have read the datasheets of the INA168 and the INA169 and can't see any difference other than the internal resistor values and quiescent current (25uA for INA168, 60uA for INA169). The two were even made in pretty much the same period. Does anyone know if there is any difference, and maybe which one is better?
Or maybe I should go with something different for high-side current measurement on a 40V 5A lab PSU, since there are things like the LTC6101 or MAX4080?
AI: The value of those input resistors is a factor in determining the gain. If you need high gain, choose the INA169. Lower gain, choose INA168.
From the INA169 datasheet:
The higher gain may allow you to use a lower RL, decreasing the output impedance of your sensing circuit.
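To put numbers on that trade-off: the transfer function for this family is \$V_{out} = I_S \cdot R_S \cdot R_L / R_{int}\$, where \$R_{int}\$ is the internal input resistor (roughly 5 kΩ for the INA168 and 1 kΩ for the INA169; verify both against the datasheets). A sketch for the 40V/5A PSU case:

```python
# Shunt-monitor output: Vout = Is * Rs * RL / R_int.
# R_int values below are assumptions to confirm in the datasheets:
# ~5 kOhm for the INA168, ~1 kOhm for the INA169.

def shunt_monitor_vout(i_load, r_shunt, r_load, r_int):
    """Output voltage of an INA168/INA169-style current shunt monitor."""
    return i_load * r_shunt * r_load / r_int

i_load, r_shunt = 5.0, 0.01   # 5 A full scale through a 10 mOhm shunt
for name, r_int in (("INA168", 5e3), ("INA169", 1e3)):
    r_load = 2.5 * r_int / (i_load * r_shunt)   # RL for 2.5 V at full scale
    v = shunt_monitor_vout(i_load, r_shunt, r_load, r_int)
    print(f"{name}: RL = {r_load/1e3:5.0f} kOhm -> Vout = {v:.2f} V at full scale")
```

The INA169 reaches the same full-scale voltage with a much smaller RL, which is exactly the lower-output-impedance advantage mentioned above.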
As you noticed, the trade-off is higher quiescent current. |
H: Suitable protection for elements on power line
I am working on a mini project with a 555 timer. The LED shown is not a single diode, but a 12V LED string. What happens if there is a sudden drop in voltage?
The motor will induce a voltage of opposite polarity. The diode in the Darlington transistor will bypass that voltage to the power line, away from the IC. Will my diodes in the LED string be OK, or will they be dead?
AI: It depends on what the voltage source is. If the voltage source can't source enough current when the motor is turned on, the LEDs will dim or switch off. To prevent this, put a bypass capacitor from Vcc to ground.
The motor needs a flyback diode on it to prevent inductive kickback. Because the Darlington pair has a bypass diode, any overvoltage produced by the motor will be returned to Vcc.
The diodes should look like one of these configurations: |
H: Converting Open-Drain to Active-High (low voltage, low power)
In my design, I need an RTC (PCF85263A) to enable a boost converter (TPS61021A) after the RTC alarm triggers an interrupt. The interrupt pin on the RTC (INTA) is open-drain, but the enable pin on the boost converter (EN) is active-high. This is a problem.
When the alarm triggers the interrupt on the RTC, the INTA line is held at 0V, and when the interrupt isn't active, INTA is high-impedance. However, I need it the other way around: the EN pin needs to be held up to battery voltage when the alarm interrupt is active.
My first guess is to use an N-channel MOSFET logic inverter to achieve this. The input, of course, would have a pull-up resistor to VCC.
simulate this circuit – Schematic created using CircuitLab
However, the battery voltage VCC can go as low as 0.9V in this design. Another important requirement is that the power consumption be as low as possible while the RTC is "sleeping" - before the alarm triggers an interrupt. The RTC uses 300nA and the boost converter has a shutdown current of 500nA, so my power budget can take about 1-2μA more during sleep.
As such, I am not sure if this MOSFET inverter will be the most power-friendly option. Additionally, a MOSFET might not be the right choice for such low voltage - I assume that I must browse for VGS/VCE/VSAT choices at or below 0.9V.
What would be a power-friendly transistor choice to convert an open-drain, active-low output signal into an active-high signal for an input pin?
My design originally used the TPL5111 power gating timer instead of an RTC. It has a proper push-pull, active-high enable pin to wake the boost converter. Unfortunately, it can only "go to sleep" for up to 2 hours at a time. I thus want to migrate to an RTC so that my device can sleep for days or months instead; having the device wake up is a power-costly operation.
Edit: This PNP solution forgoes a pull-up at INTA. Credit to Tim Wescott.
simulate this circuit
AI: The circuit simulator isn't working for me today (grr).
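A rough current budget helps with the resistor sizing in this kind of PNP stage; the 100k\$\Omega\$ values and the ~0.6 V base-emitter drop below are assumptions (and note that a silicon VBE leaves little headroom at a 0.9 V supply):

```python
# Sleep-current estimate for a PNP level shifter driven by the RTC.
# While INTA is high-Z the PNP is off and essentially only leakage flows.
# When INTA pulls low, the base resistor and collector pull-down conduct.

V_BAT = 0.9        # worst-case battery voltage from the question
V_BE = 0.6         # assumed base-emitter drop; marginal at a 0.9 V supply
R_BASE = 100e3     # suggested value, to be tuned
R_PULLDOWN = 100e3

i_base = (V_BAT - V_BE) / R_BASE      # current the RTC must sink when active
i_pulldown = V_BAT / R_PULLDOWN       # upper bound through the pull-down when on
print(f"Base current when active:  {i_base*1e6:.1f} uA")
print(f"Pull-down current when on: {i_pulldown*1e6:.1f} uA")
```

Both currents flow only while the alarm is asserted, so the 1-2 uA sleep budget is untouched.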
PNP. Emitter to +V, base to RTC through a resistor (100k\$\Omega\$?). Collector to boost converter, with resistor to ground (same as base resistor). Size the resistor for a happy medium between turning on & off reliably, and not consuming too much current. It should consume very little current when off. |
H: Single GPIO for optocoupled input and output
I want to use a single GPIO pin as optocoupled input and output (at random discretion).
Will the following scheme work for any possible combination of input and output?
Especially when optocoupler 1 is open but the GPIO output is set low?
AI: This could work, but whenever the input optocoupler is turned on, the output optocoupler will also turn on (assuming the current from the first is sufficient to turn the other optocoupler on; even if the current is not high enough, the LED of the second will be 'slightly' on, which will allow current through the other side of the optocoupler, since the CPC1018N only needs ~0.25mA of LED current to turn on).
It's not worth it just to gain an extra GPIO. If you need an extra GPIO, there are better ways to get one than worrying about current through an optoisolator. |
H: Through hole via isolation in Eagle CAD 4 layers board
In Eagle CAD, on a 4-layer board (signal-ground-power-signal), how do I prevent a through-hole via from, for example, the top signal layer to the power layer being electrically connected to the layer in between (ground)?
Do I need to manually move the ground polygon or overlay a "cutout" polygon or can it be done automatically? Can Eagle CAD place a cutout or a no-copper space around the Via on the in-between layer?
Thanks
AI: You don't need to worry about it since your netlist will take care of that. If your via passes through a plane or copper pour that is not in the same net, a cutout will be made around the hole in the plane so the via does not make contact. These cutouts might not appear until you actually connect your polygon to a net.
It's almost the exact same thing as routing a trace through a polygon or copper pour that is connected to a different net. Cutouts will be made in the polygon around the trace.
It should take ZERO additional steps on your part. Just route as normal.
Can you imagine having to deal with that manually on a board with hundreds or thousands of vias? And with modifications that could occur? Any software that did not automate that would immediately die, even if it were free. |
H: Stable LDO voltage when Vin goes down momentarily
I have an application where Vin comes from a battery pack (3V - 7V), but on occasion and under heavy load, Vin drops to just above 2V for ~1 second before recovering.
There is a 3.3V LDO that provides power to the microcontroller. The LDO uses a pass transistor to regulate the voltage, and due to the body diode, current can travel from output to input when Vout > Vin + 0.2V.
To prevent the microcontroller from resetting, my intent was to add a capacitor large enough to keep the microcontroller alive until the battery voltage recovered.
However the body diode would conduct at a certain point, and the energy that would have gone to the microcontroller, would now be going to the LDO as well. I am not sure how much energy would be lost here.
This is both an energy- and cost-sensitive project.
What can possibly be done, other than throwing a diode at the output of the LDO? (I would like to minimize that energy loss.)
AI: There is a way. A regular lone PMOS for reverse-polarity protection won't work in this case, since it doesn't guard against the load being able to push current in reverse towards the source. There is a circuit on here that modifies it so that it does:
Understanding an 'ideal' diode made from a p-channel MOSFET and PNP transistors |
H: Relationship between AWVALID and WVALID in AXI4-Lite interface
I am implementing an AXI4-Lite slave interface in an FPGA and I want the read/write operations to complete in as few clock cycles as possible. With that in mind, can I assume any specific relationship between AWVALID and WVALID coming from the master? Such as: will AWVALID and WVALID always be asserted together, or will one come a certain number of clock cycles after the other?
PS: The master interface I'm working with is the Xilinx DMA/Bridge Subsystem for PCIe.
AI: AWVALID and WVALID have no specific timing relationship to each other.
AWVALID is asserted at the beginning of a transfer to qualify the address and other AW bus info.
WVALID is for the W bus and only qualifies WDATA/WSTRB, etc. It can be asserted at the same time as AWVALID, but you can't count on that.
In any case, you cannot assert BVALID until both the AWVALID/AWREADY and WVALID/WREADY handshakes have occurred.
That said, if you are looking for higher performance, why not just use AXI4, which supports burst transfers and wider bus widths? Especially if you're using XDMA (the DMA/Bridge subsystem, a.k.a. PG195). AXI4-Lite is designed for ease of use for control registers and such, not for ultimate performance. |
H: How do I remove this power supply noise?
Above is the circuit diagram of my power supply. It converts input 12V DC into 5V DC and then 3.3V DC with linear regulator. I have highlighted it in red color in the block diagram.
So the current situation is: whatever noise is present on the 12V input shows up on the 5V and 3.3V rails. How do I filter this? How do I improve my design to get clean 5V and 3.3V even with a noisy 12V input?
If I use a clean 12V power supply, I get the best results. But I can't give a clean 12V power supply to everyone. Customers can pick up any noisy power supply, and it should still work with my design.
Below I am attaching two waveforms: yellow is 12V, blue is 5V, and pink is 3.3V.
This is the zoomed-out view
Zoomed-in view
If you look at the waveforms, the noise appearing on 12V is carried over to the 3.3V. I was assuming the LDO would at least attenuate it more, but it is still very high. On the PCB, the ground is properly stitched with multiple vias from top to bottom.
How do I get rid of this, what do I have to change or add in my design to improve it?
Note: Input 12V Dc power supply used is a 220V AC to 12V DC adapter.
AI: It seems to happen at harmonics of the line frequency. This is showing up as common-mode noise on everything, and because it's an AC potential difference between your nominally-grounded body and its local reference, it's bound to cause issues with your sensor when your body is nearby.
Where is it coming from? Leakage (parasitic coupling) from the 12V supply's primary to the isolated secondary. All line-powered supplies have it; good ones manage it better.
As an experiment, try grounding the board to safety (earth) ground to see if the noise goes away. If it does, congratulations, you have leakage. Add a common-mode filter to the 12V or get a better supply.
Another experiment: add a common-mode filter to the line-in on the 12V supply. This would break the AC loop for noise and contain it in the power supply where it belongs. |
H: Flashing firmware onto NXP QN9020 via SEGGER J-Link
Here I am attempting to flash the firmware using SEGGER J-Flash Lite.
Below is the output when attempting to connect via the SEGGER J-Link Commander.
JLinkExe
SEGGER J-Link Commander V6.46j (Compiled Jul 12 2019 17:31:38)
DLL version V6.46j, compiled Jul 12 2019 17:31:27
Connecting to J-Link via USB...O.K.
Firmware: J-Link EDU Mini V1 compiled Jul 10 2019 16:32:48
Hardware version: V1.00
S/N: 801010920
License(s): FlashBP, GDB
VTref=0.000V
Type "connect" to establish a target connection, '?' for help
J-Link>connect
Please specify device / core. <Default>: QN9020
Type '?' for selection dialog
Device>
Please specify target interface:
J) JTAG (Default)
S) SWD
T) cJTAG
TIF>
Device position in JTAG chain (IRPre,DRPre) <Default>: -1,-1 => Auto-detect
JTAGConf>
Specify target interface speed [kHz]. <Default>: 4000 kHz
Speed>
Device "QN9020" selected.
Connecting to target via JTAG
Cannot connect to target.
J-Link>connect
Device "QN9020" selected.
Connecting to target via JTAG
Could not measure total IR len. TDO is constant high.
Could not measure total IR len. TDO is constant high.
Could not measure total IR len. TDO is constant high.
Could not measure total IR len. TDO is constant high.
Cannot connect to target.
Connecting to target via JTAG
TotalIRLen = ?, IRPrint = 0x..000000000000000000000000
TotalIRLen = ?, IRPrint = 0x..000000000000000000000000
TotalIRLen = ?, IRPrint = 0x..000000000000000000000000
TotalIRLen = ?, IRPrint = 0x..000000000000000000000000
Cannot connect to target.
See also:
Freescale Kinetis KE - writing to flash
AI: Your debugger is not properly connected. Telltale sign:
VTref=0.000V
Modern debuggers usually allow a wide target voltage range. Their level shifters will be set to the target voltage VTref.
But that also means there will be no output if VTref is zero volts.
Note: The above J-Link EDU Mini does not have level shifters, and is thus unsafe to use for any target voltage other than about 3.3 volts. But it can still measure VTref, and needs this pin properly connected. |
H: How to see Signal frequency and Sampling frequency on the frequency spectrum
If I have a 100 Hz periodic analog signal, let us say it is band-limited to 1 kHz, and I sample it at 3 kHz. If I want to see its frequency spectrum, how do I differentiate the signal frequency (100 Hz) from the sampling frequency (3 kHz) in the spectrum?
Edit:
What I understand is that the frequency spectrum shows the frequency components at even, odd, or mixed multiples of a fundamental frequency. But what I don't understand is whether that frequency is the periodic signal frequency or the sampling frequency.
If it is the harmonics of the fundamental signal frequency, then how do I find the sampling frequency by looking at the spectrum?
AI: If you sample a 100Hz signal at 3kHz, then here's what you will see:
Time domain: Think of this as a digital oscilloscope view. You will have a series of numbers that represent the voltage. These numbers are spaced at 1/3000 second along the time axis. Basically, a bunch of dots on an xy coordinate system. X is time, and your dots are spaced 1/3000 of a second apart. Y is voltage, and your dots are at the height indicated by the voltage.
Frequency domain: You take your digitized data and apply a Fourier transform to it. You now have a bunch of dots which represent the intensity of each frequency in the signal. The frequencies range from 0 to 1500Hz. Refer to the Nyquist/Shannon theorem for the reasoning. In your (idealized) case you will see a single peak representing 100Hz.
The sampling rate is not part of the spectrum. You'll never see it.
The sampling rate is also not part of the digitized data - you would never see it in an oscilloscope view of your data.
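A quick numerical illustration of the above, sampling a 100 Hz sine at 3 kHz and taking its spectrum:

```python
# Sample a 100 Hz sine at 3 kHz, take the FFT, and locate the dominant
# frequency bin. The sampling rate never appears as a peak -- it only
# sets the bin spacing and the 1500 Hz upper limit of the spectrum.
import numpy as np

fs = 3000.0     # sampling rate, Hz
f_sig = 100.0   # signal frequency, Hz
n = 3000        # one second of samples -> 1 Hz bin spacing

t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_sig * t)

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

peak = freqs[np.argmax(spectrum)]
print(f"Peak at {peak:.0f} Hz; spectrum covers 0..{freqs[-1]:.0f} Hz")
```

The peak lands at 100 Hz, the axis ends at 1500 Hz, and 3 kHz appears nowhere in the result.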
Really, you need to look up sampling theory. The Wikipedia link is a starting point. If you understand what is going on, then you will never wonder how to separate signal and sampling rate - you will know that they are already separate. |
H: Relay control by using microcontroller
I'm working on an I/O module to control AC and DC motors and lamps using relays. But I don't know how to control the relays. Some say I should use an optocoupler, some say a transistor array.
I found an optocoupler, the TLP280; its schematic is shown in the figure. Can I use it, supplied with 3.3V, to control a 24V DC motor and some lamps?
AI: Relay contacts create quite a bit of noise when they switch, particularly if the load has a lot of inductance (such as a motor, or even because of long wires that are not close to each other), so the opto-isolator can be a good idea, because it prevents the noise from being coupled back to the ground of the power supply used for your logic. If you use a transistor array (and a catch diode across the relay coil) you will probably have no trouble at all driving the relay coil, but you may have issues when the loads are connected.
For this to be valuable, the relay supply should be isolated from the logic supply, say another 12V supply.
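To put numbers on the LED drive and CTR margin: a sketch assuming a ~1.15 V LED forward drop and worst-case figures (verify both against the TLP280 datasheet):

```python
# LED series resistor and worst-case output drive for the TLP280 stage.
# V_F and CTR figures are assumptions -- confirm them in the datasheet.

V_LOGIC = 3.3
V_F = 1.15        # assumed LED forward voltage
I_F = 5e-3        # target LED current, 5 mA
CTR_MIN = 0.5     # worst-case current transfer ratio (rank dependent)
DERATE = 0.4      # extra margin for temperature and aging

r_series = (V_LOGIC - V_F) / I_F
i_out = I_F * CTR_MIN * DERATE
print(f"LED series resistor: {r_series:.0f} ohm (use the nearest standard value)")
print(f"Dependable output current: {i_out*1e3:.1f} mA -> too little for a relay coil on its own")
```

That roughly 1 mA of dependable output is why the optocoupler is followed by a ULN2003-style driver here.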
You will need a series resistor to control the LED current (your optocoupler has an AC-capable input, so one of the LEDs will be unused). The CTR is as low as 50% depending on rank, so if you drive it with 5mA the output current might only be 2.5mA (allow perhaps 1mA to account for temperature and aging effects), so you would need some kind of additional driver for most relays. Suppose you follow the optocoupler with a ULN2003 Darlington array; then you can switch substantial relays, and the catch diodes are included. |
H: Why are MEMS in QFN packages?
I've seen that many (if not all) MEMS devices, like accelerometers, are only available in QFN packages.
Why is that?
AI: Simply because: what else would you want to use?
Chip-scale packages are typically out: MEMS devices need a reliable enclosure, and a layer of lacquer usually doesn't work.
Flip-chip BGA plastic packages minimize the contact area (if you can't do chip-scale), but that's often not technically viable due to the MEMS structures being exactly where the balls would end up.
QFN is small enough for most applications, and cheap.
Anything larger is undesirable because it is a) larger than necessary without being any easier to work with and b) more expensive, if only for economies of scale: the vast majority of MEMS applications will prefer the QFN package, so the other packaging options incur a higher per-device packaging cost.
EEs prefer packages they're used to working with – QFN is among these, and one of the, if not the, easiest IC packages to solder correctly.
Leaded packages are unattractive for applications that are subject to vibrations and the like, which is exactly where a lot of accelerometers are used. |
H: Understanding Responsivity of an IR sensor paired with a black body IR emitter
I'm building an air quality measurement system with the following components:
IR Dual Channel sensor by Pyreos
Axetris IR Black Body emitter
Reading the sensor datasheet I've seen there's the Responsivity expressed as 150.000 V/W at 10 Hz.
Now, I was trying to figure out how this value translates to my specific case.
According to Wikipedia
Responsivity measures the input-output gain of a detector system. In the specific case of a photodetector, responsivity measures the electrical output per optical input.
The datasheet of the emitter says that the Optical output power between 4 μm and 5 μm is 4.2 mW. Just for the sake of this example, let's say that the same power is seen by the sensor (of course it is not).
Do I need to check now on the graph of the Responsivity the value for my frequency, which is, if the wavelength is 4.5 μm, 66.6 THz? And, with this value, to know the output voltage, do I need to multiply it by the optical power seen by the sensor (which we have supposed to be the same as the emitter's, 4.2 mW) to find the final output voltage of the sensor?
Does it make sense? And how does the input voltage affect the output, given that the datasheet says: "Output voltage normalized around mid-rail"?
Thanks
AI: Do I need to check now on the graph of the Responsivity the value for
my Frequency, which is, if the wavelength is 4.5 μm, 222 kHz?
No, whatever power enters through the window of your detector, the detector converts that to a voltage in the ratio of 150 volts per watt.
So, if all 4.2 mW entered the window, you would get a voltage change at the output of 0.15 volts per milliwatt, or about 630 mV.
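The same arithmetic as a sketch (using 150 V/W for this worked example; the emitter's full 4.2 mW reaching the window is the question's idealization):

```python
# V_out = responsivity * optical power, as in the worked example above
# (reading the datasheet's "150.000 V/W" as 150 V/W for this calculation).

def detector_output(responsivity_v_per_w: float, power_w: float) -> float:
    """Detector output voltage change for a given absorbed optical power."""
    return responsivity_v_per_w * power_w

p_in = 4.2e-3                        # the emitter's full 4.2 mW (idealized)
v_out = detector_output(150.0, p_in)
print(f"Output change: {v_out*1e3:.0f} mV")   # -> 630 mV
```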
Looking at the data sheet, I believe you meant to write 150,000 volts per watt for the detector’s responsivity. |
H: What would be the difference between the output of a power bank and a laptop usb port?
I have an encoder which I have been successfully using when powered from a USB 2.0 port on my laptop. I need to be able to run it while disconnected from the laptop, so I tried powering it from a power bank. This only intermittently powers the encoder, with more noise in the returned signal when it is working.
The encoder requires between 4.75V and 5.25V; the power bank supplies 5V.
The encoder requires a current of 0.07A; the power bank provides up to 2.1A.
The encoder is the MAXON EASY 16
https://www.maxongroup.com/medias/CMS_Downloads/DIVERSES/ENXEASY_EN.pdf
Pg 19
The Power bank is:
http://www.energizerpowerpacks.com/product_page.php?p_code=P0014&l=en
What is different between the usb port and the power bank? And how could I replicate the output of the usb port without having to connect to a laptop?
AI: The product page for the power bank says that it guarantees against short circuit, and over-charging for your portable battery and devices.
Opened up the user manual and the third line under Safety Instructions states once the product is fully charged, your power pack automatically shuts off to save power.
My guess is that this is messing with your desired operation. Your laptop is outputting a steady 5 V and doesn't care about overcharging, whereas the power bank is treating your encoder like a phone and so it shuts down intermittently.
I suppose a solution would be to determine what the power bank uses as a cue to shut off and circumvent that, or buy a different power bank that doesn't shut down.
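If the cue turns out to be a minimum-load cutoff, a parallel dummy load can keep the bank awake. A rough sizing sketch; the 50 mA keep-alive threshold is a guess and should be measured on the actual bank:

```python
# Dummy-load sizing to keep a power bank from auto-shutoff.
# The 50 mA keep-alive threshold is a guess; measure your bank.

V_OUT = 5.0
I_KEEPALIVE = 0.05    # assumed minimum load the bank must see
I_ENCODER = 0.07      # encoder draw, from the question

r_dummy = V_OUT / I_KEEPALIVE
p_dummy = V_OUT * I_KEEPALIVE
print(f"Dummy load: {r_dummy:.0f} ohm, burning {p_dummy:.2f} W continuously")
print(f"Total draw from the bank: {(I_KEEPALIVE + I_ENCODER)*1e3:.0f} mA")
```

The cost is a quarter watt burned continuously, which eats into battery capacity; a bank without auto-shutoff is often the better fix.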
EDIT: I found the following on the Arduino Stack exchange: Power bank turns off spontaneously.
It agrees with my thought but also adds that a possible reason is that your load is too small. You could try putting an additional load in parallel with your encoder to trick the power bank. |
H: Half Bridge with PWM- gate signal problem
After making a half-bridge inverter using a square wave (I asked a question on this website about it previously), I have now decided to use a PWM signal in order to obtain a sine-wave output, and have come across some new issues.
Here is the circuit on LTSpice:
The load is a 100u inductor, and the PWM is created and fed into a gate driver before it is used to drive the MOSFETs.
When I run the simulation, the high-side MOSFET signal (HO) - linked to the behaviour of Vs - follows the shape of the current in the inductor L1 and does not reach 0V and turn off as it should for much of the time, causing the circuit to not work.
If I increase the inductor value, and therefore decrease the current in the circuit, the problem is fixed. However, I am not sure I understand why a high current would cause the voltage to not reach zero.
If I replace the inductor with a resistor as the load, then the problem is also fixed, suggesting it is due to the behaviour of the current with the inductive load?
Here are the waveforms from the simulation- the second set is the same but zoomed in to better show what is happening.
Can anyone explain what is causing this or what I need to change in order to fix this circuit?
AI: You are correct: it is the inductive current causing your problem.
You can see from your simulation output that the voltage vs is sometimes greater than vrail. During this time, M1 is conducting in the reverse direction through its body diode and back into the supply, regardless of the gate voltage. When you put in a resistive load, the voltage will not exceed the rail, and hence M1 can be controlled by its gate.
Good luck! |
H: VBE Multiplier with Emitter Resistance Cancellation
The function of the shown circuit is biasing the complementary output power stage of an audio amplifier. P1 allows precise adjustment of \$ V_{CE} \$ of the VBE multiplier, and \$C_B\$ improves its rail rejection. \$ r_e' \$ negates unwanted effects of \$ r_e \$, also known as the current-dependent emitter resistance.
simulate this circuit – Schematic created using CircuitLab
In a book about the construction of audio amplifiers, the writer G. Randy Slone wrote the following about this circuit and \$ r_e \$ effect cancellation:
"Re prime (that is \$ r_e \$) manifests itself as small Vbias changes brought on by power supply rail variations and small current variations through Qbias relevant to temperature. To negate the effects of re prime, a resistor can be placed in the collector circuit of Qbias to provide slight modification of the voltage drop across P1."
I don't get why \$ r_e \$ has any influence on \$ V_{CE} \$ of the VBE multiplier with regard to rail variation and current variation of Qbias due to temperature variations. Or does it represent an error in the resistive voltage divider with the potentiometer connected to the base of Qbias? As far as I know, it is just a series resistance with the emitter that changes with the quiescent current of Qbias. Why would \$ r_e \$ cause any error in the setting of the bias voltage for the following output stage anyway? Also, in what manner does \$ r_e' \$ oppose/negate the effects of \$ r_e \$?
AI: I'd like to somewhat simplify the schematic you've got, so that we can temporarily avoid having to continually discuss the potentiometer when the real purpose is supposed to be trying to understand the circuit:
simulate this circuit – Schematic created using CircuitLab
In the above, I've provided a behavioral model on the left side. It's followed up on the 1st order BJT \$V_\text{BE}\$ multiplier topology without compensation for varying currents through the multiplier block in the middle example. On the right, is a 2nd order BJT \$V_\text{BE}\$ multiplier topology that includes compensation for varying currents through the block.
Everything starts by analyzing the middle schematic. How you analyze it depends upon the tools you have available for analysis. One could use the linearized small-signal hybrid-\$\pi\$ model. But that assumes you fully understand and accept it. So, instead, let's take this from a more prosaic understanding of the BJT model that neglects any AC analysis. Instead, let's take it entirely from large-signal DC models and just compare "nearby" DC results to see what happens.
Let's assume that we are using a constant current source which can vary its current slightly, around some assumed average value of \$I_\text{src}=4\:\text{mA}\$. For simplicity's sake, let's also assume that the value of the base-emitter junction, when \$I_\text{C}=4\:\text{mA}\$ exactly, is exactly \$V_\text{BE}\left(I_\text{C}=4\:\text{mA}\right)=700\:\text{mV}\$. Assume the operating temperature is such that \$V_T=26\:\text{mV}\$ and that the operating temperature doesn't change regardless of variations in \$I_\text{src}\$ under consideration.
Finally, we'll assume that variations in \$V_\text{BE}\$ follow the general rule developed from the following approximation:
$$\begin{align*}
\text{Assuming,}\\\\
V_{BE}{\left(I_\text{C}\right)}&= V^{I_\text{C}=4\:\text{mA}}_\text{BE}+V_T\cdot\operatorname{ln}\left(\frac{I_\text{C}}{I_\text{C}=4\:\text{mA}}\right)\\\\
&\therefore\\\\
\text{The change in }&V_\text{BE}\text{ for a change in }I_\text{C}\text{ near }I_\text{C}=4\:\text{mA}\text{ is,}\\\\
\Delta\, V_{BE}{\left(I_\text{C}\right)}&=V_{BE}{\left(I_\text{C}\right)}-V_{BE}{\left(I_\text{C}=4\:\text{mA}\right)}\\\\
&=V_{BE}{\left(I_\text{C}\right)}-V^{I_\text{C}=4\:\text{mA}}_\text{BE}\\\\
\text{Or, more simply,}\\\\
\Delta\, V_{BE}{\left(I_\text{C}\right)}&=V_T\cdot\operatorname{ln}\left(\frac{I_\text{C}}{I_\text{C}=4\:\text{mA}}\right)
\end{align*}$$
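The expression can be checked numerically for collector currents near the \$4\:\text{mA}\$ operating point:

```python
# Numerical check of delta-V_BE = V_T * ln(I_C / 4 mA) near the
# assumed 4 mA operating point, with V_T = 26 mV.
import math

V_T = 26e-3
I_REF = 4e-3

def delta_vbe(i_c: float) -> float:
    """Change in V_BE (V) relative to the 4 mA operating point."""
    return V_T * math.log(i_c / I_REF)

for i_c in (3e-3, 4e-3, 5e-3, 8e-3):
    print(f"I_C = {i_c*1e3:.0f} mA -> dV_BE = {delta_vbe(i_c)*1e3:+.2f} mV")
```

Doubling the current shifts \$V_\text{BE}\$ by only about \$18\:\text{mV}\$, but remember that the multiplier scales any such shift up by its multiplication factor.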
Is this enough to get you started?
Remember, when the \$V_\text{BE}\$ multiplier is used as part of the class-AB amplifier's output stage, the current source itself varies somewhat with respect to power supply rail variations and also variations in the base drive for the output stage's upper and lower quadrants. (The upper quadrant, when it needs base drive current, will be siphoning off current away from the high-side source and therefore this will cause the current through the \$V_\text{BE}\$ multiplier to vary -- sometimes, depending on design values, varying a lot.)
Can you work through some of the math involved here? Or do you need more help?
(I just noted where that capacitor is sitting in your diagram. I think it should be between the collector and emitter. But who knows? Maybe I'm wrong about that. So let's leave that for a different question.)
Usual \$V_\text{BE}\$ Multiplier Equation
This will be a very simplified approach, for now. (The model here will need adjustments, later.) We'll assume that the bottom node (\$V_-\$) will be grounded, for reference purposes. It doesn't matter if this node is attached to the collector of a VAS and the actual voltage moves up and down in a real amplifier stage. The purpose here is to figure out the \$V_\text{BE}\$ multiplier voltage at \$V_+\$ with respect to \$V_-\$.
Note that the base voltage of the BJT, \$V_\text{B}\$, is also exactly the same as \$V_\text{BE}\$. So \$V_\text{BE}=V_\text{B}\$. I can use either one of these for the purposes of nodal analysis. I choose to use \$V_\text{BE}\$ as the name of the node at the BJT base. The simplified equation is:
$$\frac{V_\text{BE}}{R_1}+\frac{V_\text{BE}}{R_2}+I_\text{B}=\frac{V_+}{R_1}$$
(The outgoing currents are on the left and the incoming currents are on the right. They must be equal.)
We also have a current source. I'll call it \$I_\text{src}\$. For the middle circuit above, part of that current passes through \$R_1\$ and the rest of it passes through the collector of \$Q_1\$. The base current is the collector current (\$I_\text{C}=I_\text{src}-\frac{V_+-V_\text{BE}}{R_1}\$) divided by \$\beta\$. Given \$I_\text{B}=\frac{I_\text{C}}{\beta}\$, we can rewrite the above equation:
$$\frac{V_\text{BE}}{R_1}+\frac{V_\text{BE}}{R_2}+\frac{I_\text{src}-\frac{V_+-V_\text{BE}}{R_1}}{\beta}=\frac{V_+}{R_1}$$
Solving for \$V_+\$, we find:
$$V_+=V_\text{BE}\left(1+\frac{R_1}{R_2}\frac{\beta}{\beta+1}\right)+I_\text{src}\frac{R_1}{\beta}$$
When the second term is small (or neglected), then the first term can be simplified by assuming \$\beta\$ is large and the whole equation becomes:
$$V_+=V_\text{BE}\left(1+\frac{R_1}{R_2}\right)$$
Which is the usual equation used to estimate the voltage of a \$V_\text{BE}\$ multiplier.
Just keep in mind that this is highly simplified. In fact, too much so. The value of \$V_\text{BE}\$ is considered a constant and, in fact, it's not at all a constant. Instead, it is a function of the collector current. (Also, we neglected the second term. That term may matter enough to worry over, depending on the design.)
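Comparing the full and simplified expressions numerically shows how much the neglected terms can matter; the component values below are illustrative placeholders only:

```python
# Comparing the full and simplified V_BE-multiplier expressions derived
# above. All component values here are illustrative placeholders.

def v_plus_full(v_be, r1, r2, beta, i_src):
    """V+ including finite beta and the I_src*R1/beta term."""
    return v_be * (1 + (r1 / r2) * beta / (beta + 1)) + i_src * r1 / beta

def v_plus_simple(v_be, r1, r2):
    """The usual approximation V+ = V_BE * (1 + R1/R2)."""
    return v_be * (1 + r1 / r2)

v_be, r1, r2 = 0.7, 4.7e3, 2.2e3
full = v_plus_full(v_be, r1, r2, beta=200, i_src=4e-3)
simple = v_plus_simple(v_be, r1, r2)
print(f"Full: {full:.3f} V  Simplified: {simple:.3f} V  Difference: {(full - simple)*1e3:+.1f} mV")
```

Tens of millivolts of difference is already significant when the result sets a class-AB output stage's quiescent bias.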
Since the \$V_\text{BE}\$ multiplier actually multiplies \$V_\text{BE}\$ by some value greater than 1, any erroneous estimations about \$V_\text{BE}\$ will be multiplied. And since the current source used in a practical circuit is also providing the upper quadrant with base drive current for half of each output cycle before it reaches the \$V_\text{BE}\$ multiplier, the value of \$V_\text{BE}\$ will be varying for that half-cycle because its collector current will be also varying.
Anything useful that can be done (cheaply) to improve how it varies in those circumstances should probably be done. One technique is to just slap a capacitor across the middle \$V_\text{BE}\$ multiplier circuit. But another technique is to use a collector resistor, \$R_\text{comp}\$ in the above right side schematic.
Analyzing the Middle Schematic for Collector Current Variations
None of the above equation development is all that useful for working out the effect of varying values for \$I_\text{src}\$. There are a number of ways to work it out.
One useful simplification is to imagine that there is a tiny resistor sitting inside of the BJT and located just prior to its emitter terminal:
(Both of the above images were borrowed from page 193 of "Learning the Art of Electronics: A Hands-On Lab Course" by Thomas C. Hayes, with assistance from Paul Horowitz.)
The above concept represents the dynamic resistance, which is just the local slope along a non-linear curve.
This resistor is called \$r_e\$ and its value depends upon the emitter/collector current magnitude. You will see it as either \$r_e=\frac{V_T}{\overline{I_\text{C}}}\$ or as \$r_e=\frac{V_T}{\overline{I_\text{E}}}\$, where \$\overline{I_\text{C}}\$ and \$\overline{I_\text{E}}\$ are some assumed mid-point on the curve around which those currents vary. It doesn't really matter which you use, because modern BJTs have rather high values for \$\beta\$. So let's not fret over minutia and instead just assume \$r_e\$ is a function of the collector current.
Note: In the following, I will continue to follow above book's approach in calling this \$r_e\$, despite it being based upon the
collector current. In various literature, it may be denoted as
\$r_e^{\,'}\$ and it may be based upon the emitter current, instead,
as well. But for these purposes here, I intend to remain consistent
with the above book's approach.
If we accept this simplification for now, then we can consider that there is an internal \$V^{'}_\text{BE}\$ with a fixed value that sits between the base terminal and the internal side of \$r_e\$ and we lump all of the variations in our observed external measurement of \$V_\text{BE}\$ as being due to the collector current passing through \$r_e\$. This works okay as an approximate, improved model, so long as you don't deviate far from some assumed average collector current used to compute \$r_e\$. (Small-signal assumption.) [If it really does vary a lot (for example, say, the collector current varies from \$10\:\mu\text{A}\$ to \$10\:\text{mA}\$), then the \$r_e\$ model ceases to be nearly so useful.]
But let's say you design your current source so that \$I_\text{src}=4\:\text{mA}\$ and you don't expect the upper quadrant to require more than \$1\:\text{mA}\$ for its base drive. This means that your \$V_\text{BE}\$ multiplier will experience currents through it from \$3\:\text{mA}\$ to \$4\:\text{mA}\$ during operation. How much would you expect the \$V_\text{BE}\$ multiplier to vary its voltage under these varying circumstances?
Well, that's actually pretty easy. We've now lumped all of the variation in \$V_\text{BE}\$ as a result of our model's \$r_e\$, computed at some chosen mid-point collector current value. Since the multiplier multiplies the external, observable \$V_\text{BE}\$ and since that includes the effect of collector current upon \$r_e\$ we then can expect (using the highly simplified estimate developed earlier):
$$V_+=\left(V^{'}_\text{BE}+I_\text{C}\cdot r_e\right)\left(1+\frac{R_1}{R_2}\right)$$
So the variation in \$V_+\$ is due to the second term in the first factor, or \$I_\text{C}\cdot r_e\cdot \left(1+\frac{R_1}{R_2}\right)\$. (Note that \$I_\text{C}\$ in this factor is not the same as \$\overline{I_\text{C}}\$ used to compute \$r_e\$ so you cannot simplify the product of \$I_\text{C}\$ and \$r_e\$ here. In fact, the whole point in creating \$r_e\$ is that you can't make that cancellation.) If you lump the last two factors there into an effective "resistance" value that the collector current must go through, then that resistance would be \$r_e\cdot \left(1+\frac{R_1}{R_2}\right)\$.
Which is just what G36 mentioned as the effective resistance for the middle schematic.
Adding a Collector Resistor to the \$V_\text{BE}\$ Multiplier
Now, keep in mind that the collector current does in fact vary, in operation. Perhaps like I mentioned above. Perhaps more. Perhaps less. But it does vary. How important that is will depend on your schematic and your design choices. But let's assume it is important enough that you are willing to consider adding a cheap resistor to the collector leg as shown in the schematic on the right, above. (You've been told that this is a "good idea.")
Why is this a good idea? Well, at first blush it should be easy to see that if the collector current in the middle circuit increases then the \$V_+\$ increases by some small amount. But what if we added a collector resistor? Wouldn't that mean that if the collector current increased, that the collector voltage itself would drop because of the change in the voltage drop through the collector resistor? Does this suggest to you that if you could pick the right value for this collector resistor, then you might be able to design it just right so that the increased drop across it just matched what would otherwise have been an increase in \$V_+\$ in the middle circuit?
If you agree with that logic, can you also now work out how to compute a value for \$R_\text{comp}\$ that would be "just right" and then compute the new effective resistance of the new circuit?
Just think about this for a moment. You have a \$V_\text{BE}\$ multiplier here and you know the approximate equation used to compute its voltage. But this equation doesn't take into account the fact that \$V_\text{BE}\$ changes when the collector current changes. The value of \$r_e\$ (at some design value for the collector current) is the tool that helps you quantify the change in \$V_\text{BE}\$ for changes in the collector current. And you know that the \$V_\text{BE}\$ multiplier will multiply that change, too. So if the collector current increases (because the upper quadrant stops requiring base drive current, leaving all of the current source's current to flow through the multiplier), then the multiplier's voltage will increase by the multiplied change in drop across \$r_e\$. To counter this effect, you want the collector resistor's voltage drop to likewise increase by just that same amount.
So, does that help you think about how to compute the collector resistor value? As a first approximation, wouldn't you want the value to be about \$R_\text{comp}\approx r_e\left(1+\frac{R_1}{R_2}\right)\$ so that when the change in collector current creates a multiplied change in \$V_\text{BE}\$ that the drop in this newly added collector resistor will just match up with it?
More Detailed Analysis Related to Selecting \$R_\text{comp}\$
The actual multiplier voltage will be better approximated with the more complex version I developed from nodal analysis:
$$V_+=V_\text{BE}\left(1+\frac{R_1}{R_2}\frac{\beta}{\beta+1}\right)+I_\text{src}\frac{R_1}{\beta}$$
For example, assume \$I_\text{src}=4\:\text{mA}\$ and an operating temperature that sets \$V_T=26\:\text{mV}\$. Also, let's assume we use \$R_1=R_2=4.7\:\text{k}\Omega\$. And let's assume \$\beta=200\$ for the BJT we have in hand, right now. Let's also assume that the base-emitter voltage is taken as \$V_\text{BE}=690\:\text{mV}\$ (I'm picking an odd value on purpose.) Then the first term's value is \$\approx 1.38\:\text{V}\$. But the second term's value is \$\approx 100\:\text{mV}\$. So we'd really be expecting perhaps \$\approx 1.48\:\text{V}\$ for the multiplier voltage.
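The arithmetic in this paragraph is easy to double-check with a short script (the values below are just the assumptions stated above, not design recommendations):

```python
# Check the V+ estimate for the Vbe multiplier (values assumed in the text).
R1 = R2 = 4.7e3      # ohms
beta = 200
Vbe = 0.690          # volts, assumed operating point
I_src = 4e-3         # amps

term1 = Vbe * (1 + (R1 / R2) * beta / (beta + 1))
term2 = I_src * R1 / beta

print(f"first term  = {term1:.3f} V")          # ~1.38 V, as in the text
print(f"second term = {term2:.3f} V")          # ~0.094 V; the text rounds to ~100 mV
print(f"V+ estimate = {term1 + term2:.2f} V")  # close to the text's ~1.48 V
```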
Now let's take the above equation and work through the details of what happens when the current passing through the \$V_\text{BE}\$ multiplier changes (which it will do because of the upper quadrant base drive variations, in operation):
$$
\newcommand{\dd}[1]{\text{d}\left(#1\right)}
\newcommand{\d}[1]{\text{d}\,#1}
\begin{align*}
V_+&=V_\text{BE}\left(1+\frac{R_1}{R_2}\frac{\beta}{\beta+1}\right)+R_1\,\frac{I_\text{src}}{\beta}\\\\
\dd{V_+}&=\dd{V_\text{BE}\left(1+\frac{R_1}{R_2}\frac{\beta}{\beta+1}\right)+R_1\,\frac{I_\text{src}}{\beta}}\\\\
&=\dd{V_\text{BE}}\left(1+\frac{R_1}{R_2}\frac{\beta}{\beta+1}\right)+\dd{R_1\,\frac{I_\text{src}}{\beta}}\\\\
&=\dd{I_\text{src}}\,r_e\,\left(1+\frac{R_1}{R_2}\frac{\beta}{\beta+1}\right)+\dd{I_\text{src}}\,\frac{R_1}{\beta}\\\\
&=\dd{I_\text{src}}\,\left[r_e\,\left(1+\frac{R_1}{R_2}\frac{\beta}{\beta+1}\right)+\frac{R_1}{\beta}\right]\\\\&\therefore\\\\
\frac{\d{V_+}}{\d{I_\text{src}}}&=r_e\,\left(1+\frac{R_1}{R_2}\frac{\beta}{\beta+1}\right)+\frac{R_1}{\beta}
\end{align*}$$
The first term is about what I wrote earlier about the estimated impedance of the multiplier. But now we have a second term. Let's check out the relative values (given the above assumptions about specific circuit elements and assumptions.)
Here, after accounting for the base resistor divider pair's current and the required base current, the first term is \$\approx 14\:\Omega\$. The second term is \$\approx 24\:\Omega\$. So the total impedance is \$\approx 38\:\Omega\$.
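As a sanity check on those two terms, here is the same arithmetic as a short script. It uses the text's assumed values; \$r_e\$ is taken at the collector current left over after subtracting the divider-chain current, so treat the result as an estimate:

```python
# Rough evaluation of dV+/dI_src = r_e*(1 + R1/R2 * beta/(beta+1)) + R1/beta
R1 = R2 = 4.7e3
beta = 200
VT = 26e-3           # thermal voltage, volts
Vbe = 0.690
I_src = 4e-3
V_plus = 1.48        # multiplier voltage from the text's estimate

# Collector current = source current minus the divider-chain current
I_div = (V_plus - Vbe) / R1
I_C = I_src - I_div
r_e = VT / I_C

term1 = r_e * (1 + (R1 / R2) * beta / (beta + 1))
term2 = R1 / beta

print(f"r_e    ~ {r_e:.1f} ohm")                # ~6.8 ohm
print(f"term 1 ~ {term1:.0f} ohm")              # ~14 ohm, matching the text
print(f"term 2 ~ {term2:.1f} ohm")              # 23.5 ohm, the text rounds to ~24
print(f"total  ~ {term1 + term2:.0f} ohm")      # close to the text's ~38 ohm
```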
Please take close note that this is actually a fair bit larger than we'd have expected from the earlier simplified estimate!
So the \$V_\text{BE}\$ multiplier is worse than hoped. Current changes will have a larger than otherwise expected change. This is something worth fixing with a collector resistor.
Suppose we make the collector resistor exactly equal to this above-computed total resistance. Namely, \$R_\text{comp}=38\:\Omega\$. The reason is that we expect that the change in voltage drop across \$R_\text{comp}\$ will just match the increase/decrease in the \$V_\text{BE}\$ multiplier as both are then equally affected by changes in the collector current due to changes in \$I_\text{src}\$. (We have so far avoided directly performing a full analysis on the right-side schematic and we are instead just making hand-waving estimates about what to expect.) Given the prior estimated impedance and this circuit adjustment used to compensate it, we should expect to see almost no change in the voltage output if we used the right-side schematic.
Here is the LTspice's schematic I used to represent the right-side, compensated schematic:
And here is LTspice's plotted analysis of the \$V_+\$ output using a DC sweep:
Note how well the output is compensated! Note the peak is located almost exactly where our nominal value for \$I_\text{src}\$ is located, too?
The idea works! Both in terms of being compensated exactly where we want that compensation as well as in providing pretty good behavior nearby. Not bad!!!
Appendix: Derivation of \$r_e\$
I'm sure you remember the equation I'll start with. Just follow the logic below:
$$
\newcommand{\dd}[1]{\text{d}\left(#1\right)}
\newcommand{\d}[1]{\text{d}\,#1}
\begin{align*}
I_\text{C}&=I_\text{sat}\left[e^{^\frac{V_\text{BE}}{\eta\,V_T}}-1\right]\\\\
\dd{I_\text{C}}&=\dd{I_\text{sat}\left[e^{^\frac{V_\text{BE}}{\eta\,V_T}}-1\right]}=I_\text{sat}\cdot\dd{e^{^\frac{V_\text{BE}}{\eta\,V_T}}-1}=I_\text{sat}\cdot\dd{e^{^\frac{V_\text{BE}}{\eta\,V_T}}}\\\\
&=I_\text{sat}\cdot e^{^\frac{V_\text{BE}}{\eta\,V_T}}\cdot\frac{\dd{V_\text{BE}}}{\eta\,V_T}
\end{align*}
$$
Since \$I_\text{sat}\left[e^{^\frac{V_\text{BE}}{\eta\,V_T}}-1\right]\approx I_\text{sat}\cdot e^{^\frac{V_\text{BE}}{\eta\,V_T}}\$ (the -1 term makes no practical difference), we can conclude:
$$
\begin{align*}
\dd{I_\text{C}}&=I_\text{C}\cdot\frac{\dd{V_\text{BE}}}{\eta\,V_T}
\end{align*}
$$
From which very simple algebraic manipulation produces:
$$
\newcommand{\dd}[1]{\text{d}\left(#1\right)}
\newcommand{\d}[1]{\text{d}\,#1}
\begin{align*}
\frac{\dd{V_\text{BE}}}{\dd{I_\text{C}}}&=\frac{\d{V_\text{BE}}}{\d{I_\text{C}}}=\frac{\eta\,V_T}{I_\text{C}}=r_e
\end{align*}
$$
The idea here is that the active-mode BJT Shockley equation, relating the base-emitter voltage to the collector current, is an exponential curve (absent the -1 term, anyway) and the value of \$r_e\$ is a way of representing the local slope (tangent) of that curve. So long as the deviation of the collector current away from where this dynamic resistor value was computed is small, the value of \$r_e\$ doesn't change much and you then easily estimate the small change in \$V_\text{BE}\$ as being caused by the small change in collector current through this dynamic resistor.
Since the collector current must be summed into the emitter current, \$r_e\$ is best "visualized" as "being right at the very tip of the emitter." This is so that changes in the collector current cause a change in the base-emitter voltage. (If you'd instead imagined \$r_e\$ as being at the collector tip, it would not affect the base-emitter voltage and so would be useless for the intended purpose.) |
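For anyone who wants to verify the tangent-slope claim numerically, here is a small sketch that differentiates the Shockley equation directly. The \$\eta\$, \$V_T\$, and \$I_\text{sat}\$ values are assumed, typical-looking numbers, not taken from any particular BJT:

```python
import math

# Numerical check that d(Vbe)/d(Ic) of the Shockley equation equals eta*VT/Ic.
eta, VT = 1.0, 26e-3          # ideality factor and thermal voltage (assumed)
I_sat = 1e-14                 # saturation current, amps (assumed typical value)

def ic(vbe):
    return I_sat * (math.exp(vbe / (eta * VT)) - 1)

Vbe = 0.65
h = 1e-6                                      # small voltage step
slope = (ic(Vbe + h) - ic(Vbe - h)) / (2*h)   # dIc/dVbe by central difference
r_e_numeric = 1 / slope
r_e_formula = eta * VT / ic(Vbe)

print(r_e_numeric, r_e_formula)   # the two values agree closely
```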
H: Why are off grid solar setups only 12, 24, 48 VDC?
I am looking to produce 50kW for an off grid solar project. Ideally, I'd like to have a high voltage DC battery system with a high power battery inverter and charge controller.
I have only found a couple high VDC (384V) inverters and charge controllers, and they are from Chinese manufacturers. I'm a little hesitant to buy from them.
All of the big companies only use 12, 24, and 48 VDC. I understand that it's the most common and you don't need much power for a home, but if I want to produce 50kW at 48 VDC that's over 1000A! If I run that in parallel to 48V inverters I would need 10 or more inverters. That is a lot of wire and work.
Is there an electrical reason why they would cap these products at 48 VDC?
AI: 60VDC is the cut-off for Safety Extra Low Voltage, or SELV, as spelled out in UL 60950-1. Besides being lower voltage, SELV circuits are also isolated from the mains by reinforced insulation, which has specific spacing and materials requirements.
In general terms, SELV voltages are ‘touch safe’, meaning that they don’t present a shock hazard with direct contact. 48V falls below this SELV threshold with some margin. It’s also conveniently four ‘12V’ lead-acid batteries connected in series (really up to about 58.8V at full float charge.)
Voltages above the SELV level are considered in the same class as line voltage, and typically require an electrician to install. Reason? Electricians are familiar with codes and techniques to protect against inadvertent contact with potentially lethal voltages, including use of proper materials, fusing, fault protection, enclosures and cable routing.
More here: https://www.edn.com/electronics-blogs/power-supply-notes/4414411/What-does-SELV-mean-for-power-supplies |
H: Why do I need to convert impedance to admittance in this problem
\$I_1 = 18\:\text{A}\$, \$I_2 = 15\:\text{A}\$, \$I = 30\:\text{A}\$ and \$R_2 = 4\:\Omega\$.
Determine \$R_1\$ and \$X_L\$.
This is the picture of the circuit:
And this my solution all the way to the point where I have uncertainties:
Now I can solve this in two ways:
I can solve it without converting impedance to admittance (gives the wrong answer)
Or I can solve it with converting impedance to admittance (gives the correct answer)
My question is why will I get the wrong answer when not converting impedance into admittance, it seems mathematically correct, but it is the wrong solution, can someone explain?
AI: I think there are two things at work here:
1. You are not taking into account that \$R_1\$ and \$X_L\$ are in parallel and not in series.
When you write \$ Z_1 = R_1 + X_L \$, you are stating that they are in series, not in parallel.
The lumped impedance of \$Z_1\$ does have a form:
\$Z_1 = Re(Z_1) + Im(Z_1)j\$
But these real and imaginary terms are not \$R_1\$ or \$X_L\$
You should have written \$ Z_1 = R_1 || jX_L \$ and expanded that out. Then your math will accurately reflect that they are in parallel. The real terms in this expanded expression will be \$R_1\$ and the imaginary terms will be \$X_L\$.
2. You cannot just invert the real and imaginary components of a complex impedance/admittance to find the admittance/impedance.
For example:
\$ Z = 2 \angle{45} = \sqrt{2} + j\sqrt{2}\$
\$ Y = \frac{1}{Z} = \frac{1}{2}\angle{-45} = \frac{1}{2\sqrt{2}} - j\frac{1}{2\sqrt{2}} \$
We agree on those right?
But then in your second solution, you try and calculate Y by individually finding the reciprocal of the real and imaginary components of Z:
\$Y_{wrong} = \frac{1}{Re(Z)} + j\frac{1}{Im(Z)} =\frac{1}{\sqrt{2}} + j\frac{1}{\sqrt{2}}\$
Or maybe:
\$Y_{wrong} = \frac{1}{Re(Z)} + \frac{1}{Im(Z)j} =\frac{1}{\sqrt{2}} - j\frac{1}{\sqrt{2}}\$
if you thought that the \$j\$ should be included in the reciprocal.
You might start feeling that something is wrong and doesn't make sense here, because it's inconsistent. Look at the \$j\$: it only makes sense to include it as part of the reciprocal, so that it ends up in the denominator (or in the numerator as \$-j\$). But at the same time, if you do that, the result is obviously wrong, since the element is an inductance, not a capacitance, so \$-j\$ can't be right. It doesn't feel right either way, and that's because it is wrong.
Either way, obviously \$Y \ne Y_{wrong}\$ so it does not work. It does not work because the real and imaginary components are tied together and so you can't just break them apart and invert them individually.
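A quick numerical sketch makes the point concrete (plain Python complex arithmetic, using the example values above):

```python
import cmath

# Show that the reciprocal of a complex impedance is NOT the
# componentwise reciprocal of its real and imaginary parts.
Z = cmath.rect(2, cmath.pi / 4)        # 2 at 45 degrees = sqrt(2) + j*sqrt(2)

Y = 1 / Z                               # correct admittance
Y_wrong = 1 / Z.real + 1j / Z.imag      # inverting parts individually

print(Y)        # ~ (0.354 - 0.354j), i.e. 0.5 at -45 degrees
print(Y_wrong)  # ~ (0.707 + 0.707j) -- not the same at all
```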
Here's an interesting exercise: What happens if you try to find the admittance of a resistor \$R\$ by doing what you tried to do, except now think of it as \$ R + 0j \$?
You get a divide by zero! We both know that in the end you do get \$ Y = \frac{1}{R}\$, but the math to actually get there is different. Taking the reciprocal of the single component (real or imaginary, as long as there is only one) is just a shortcut that only works in that circumstance. It cannot be applied to complex numbers in general.
So I think your second solution is wrong too. You just happened to accidentally account for the parallel \$ R_1\$ and \$X_L \$ when you incorrectly tried to calculate Y by individually inverting the real and imaginary components of Z. |
H: Adding small AC signal to a DC current using inductive coupling
Is it possible to add a small AC current on to a DC current in a wire using inductive coupling from an external source. Or if there is another way, I want to add AC ripples to a DC current.
simulate this circuit – Schematic created using CircuitLab
AI: First, there will be just one current in the circuit you show. The current into the coil must be the same as the current out of the cell.
You can do pretty much exactly what you have drawn. Use a transformer with the secondary winding connected as shown, in series with your dc source. You can use a function generator or other sine wave source connected to the primary winding of the transformer.
You will need to select a transformer designed to work at the ripple frequency of interest. If you want your ripple to be at the mains frequency this should be easy. |
H: Why don't we worry about reverse recovery voltage and current of a diode in a simple circuit application?
When the diode goes into reverse recovery, why don't we worry about the power supply in that specific time since the current will be increased? Will it not damage the power supply or is it since it is happening so quick that realistically no damage occurs from that extra current needed to remove the negative charges?
For example, suppose a resistor is placed after the switch in the circuit above, and that resistor is chosen such that it can only handle, say, 10A (theoretically). When the switch closes and the diode goes into reverse recovery, the current from the power supply is now 12.5A, so 12.5A flows through the resistor. Is this OK, given that the reverse recovery time of a diode is extremely short, so that realistically it has no effect on the components?
Basically when do we need to take into account the reverse recovery characteristics of a diode? Typically does it not affect the external components attached to the diode since that extra current needed only is occurring for an extremely short period of time?
When do we need to worry about the reverse recovery characteristics and what could that extra current potentially do to external components
Typically, for simple circuit applications, I have been told to just choose a fast-acting diode that can withstand a high voltage, but I am confused about the other components: could they not potentially be destroyed by the extra current?
AI: If the freewheel diode was conducting due to stored inductive energy when the switch closed, there would be a current spike. If the switch is solid state, say a MOSFET or a BJT, then a slow diode will stress the switch more at turn-on. If the switching frequency is high, the extra dissipation can be of concern. This current spike can also cause EMC issues.
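To put some rough numbers on when the recovery spike starts to matter, here is a sketch using the common triangular approximation for the recovery current. Every value below is a hypothetical example, not data for any particular diode:

```python
# Rough estimate of the extra energy dissipated per switching event due to
# diode reverse recovery, using a triangular current approximation.
# All numbers below are hypothetical example values.
V_bus = 12.0        # volts across the diode during recovery
I_rr = 2.5          # peak reverse-recovery current, amps
t_rr = 50e-9        # reverse-recovery time, seconds

E_rr = 0.5 * V_bus * I_rr * t_rr     # joules per switching event
f_sw = 100e3                         # switching frequency, Hz
P_rr = E_rr * f_sw                   # average extra power dissipated

print(f"{E_rr*1e9:.0f} nJ per event, {P_rr*1e3:.1f} mW average at 100 kHz")
```

A single event dissipates very little, which is why a lone switch closure rarely hurts anything; it is the repetition at high switching frequency that turns this into real average power.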
H: How to set extra source and drain pins in PMOS power chip
I am using the FDS4685 chip here: https://www.onsemi.com/pub/Collateral/FDS4685-D.PDF
It has 4 drain and 3 source pins which I haven't seen before. I only need 1 source and 1 drain, how do I determine whether to connect these pins to ground or high to my 5V supply? Is it wrong to just keep it floating?
AI: Tie them together on the board (drains together, sources together.) The multiple pins are provided to reduce the resistance to the FET.
Multiple connections to the same high-current signal are common with high-power / high-current devices. This one supports 8.2A; it needs more than just one SOIC pin to handle that much current.
The other purpose of multiple-lead connections like this is to help with thermal dissipation from the package: the leads conduct heat, too. |
H: How to measure laser class of LIDAR
I just bought a LIDAR with the intention of building an indoors robot that uses it to navigate rooms in my apartment.
The manufacturer website states:
Low power infrared transmitter conforms to the Class 1 laser safety
standard and reaches the human eye safety level.
So, I went to the FDA site about laser classes to inform myself about it (https://www.fda.gov/radiation-emitting-products/home-business-and-entertainment-products/laser-products-and-instruments), and in fact they say Class I is considered non-hazardous (with hazard increasing if viewed with any optical aid such as a magnifier).
It seems this is a Chinese product, I don't know if the FDA is actually regulating this product, or the manufacturer is including this just as a reference.
I'm concerned about the safety of pets and people in the household, so I thought it could be a good idea not to trust this Class I rating at face value, but to double-check the emission intensity with some equipment.
Can this measuring be done and if so with what equipment? Does it make sense to do it?
(I tried posting this question in the robotics site, no answer after a week there, so I'm trying here as well).
Thanks!
AI: You can't simply measure whether a device is in a particular laser class.
Because to be class 1, you have to not only operate the laser below a specified level (around 1 mW for visible and near IR wavelengths), you also have to design the laser so that under any foreseeable single-fault condition the power will remain below that level.
Demonstrating this requirement isn't done with a measurement, but either by destructive testing of several laser samples, or a complete knowledge of the laser and power supply design so that the likely failure modes and their consequences can be analyzed.
I thought it could be a good idea not to trust this Class I rating at face value, but to double-check the emission intensity with some equipment. ... Can this measuring be done and if so with what equipment? Does it make sense to do it?
You can measure the laser power with an appropriate optical power meter. These are available from several companies. The exact model you should get depends on the wavelength of your laser and the diameter of the beam.
Expect the cost for an off-the-shelf instrument to be in the range of $500 - $5000. |
H: For a MOSFET, does capacitive gate current only flow through to the source?
Everywhere states that a MOSFET has no current at the gate. However, this is not true since, during the time the gate capacitance charges up to reach the certain voltage threshold, the current is entering the gate capacitance.
My actual question:
During the charge-up time of the "gate capacitance", does the current only go to the source? (Meaning current enters the gate capacitance while charging and then leaves through the source for an N-channel type?)
AI: Remember, there are two (major) gate capacitances in a power MOSFET: the gate-source capacitance Cgs, and the gate-drain capacitance, Cgd. If the drain voltage never changes, then yes, all gate current goes into the gate-source capacitance.
However, in power MOSFETs, the drain voltage almost always changes. During turn-on, at the point of switching, the drain voltage in an NFET will start to fall. This discharging of Cgd causes current to flow from the gate to the drain. Depending on drain-source voltage Vds, transistor and driver parameters, the gate-source voltage Vgs may have a small inflection point, a Miller plateau, or wild oscillation.
For a better discussion on the Miller plateau, see this answer |
H: Barrier voltage and knee voltage are give same meaning?
Is there any difference between barrier voltage and knee voltage? Could somebody please let me know, if any.
AI: Barrier voltage is the minimum voltage required for a diode to begin conducting, and knee voltage is another name for the same thing.
H: How to control optocoupler and relay by using microcontroller
I have a question about controlling relays by using optocoupler. I have some optocouplers to isolate MCU's power and Power supply for the relays (to control LEDs, DC and AC drivers).
1) Can I control the relay according to the schematic below but without FET? Do I have to use FET.
2) I have a 24V DC and 220V AC to control relays, can I use the same schematic for AC connections?
Thank you so much.
AI: You can replace the FET with the optocoupler if its maximum rated collector current is higher than the current drawn by the relay's coil and if its collector-emitter breakdown voltage is higher than the voltage you are using to power the coil. But remember to keep the MCU ground and relay ground separate if you use optocouplers, otherwise it defeats the purpose of the isolation.
You cannot use 220V AC to power a dc coil relay. If you want to use AC you can use a triac instead of a relay and a triac-output optocoupler, but then the load has to be AC too, otherwise the triac will stay "on".
NOTE: If you use a relay, remember to add a flyback/free-wheeling diode reverse biased toward the 24V source to avoid inductive voltage spikes that will fry the opto. |
H: Does the LT Spice monte carlo simulation definitely output the max and min voltage for any number of simulation runs?
I have the following circuit, in which I need to calculate the maximum and minimum voltages possible (considering resistor tolerance).
Does running a Monte Carlo simulation cover the absolute max and min values of the output voltage obtainable at every point in the circuit, or is it purely dependent on the number of iterations in the .step command? Basically, if I do only two rounds of simulation, will I get the absolute max and min voltages at every point in the circuit? Is this method the right one for my needs? If not, please provide a step-by-step solution for my case.
AI: A brute force solution can be implemented using the following number system.
This described method will cover the minimum and maximum result.
For example, using a decimal number system, you can specify the tolerance by 10 values.
The number of digits is the number of components you want to vary.
ABCD
0000 all components have their minimum value
0001
0002
....
0009 components A,B and C have their minimum value and component D its maximum value
....
9999 all components have their maximum value
Now, using the LTspice directive .step param run 0 9999 1, you can use the run parameter to define the value of each component.
Using floor() you can select the component, e.g. component C is selected by floor(run/10)-floor(run/100)*10.
(Note that the divisors are powers of 10, because a decimal number system is used. For other number systems, corresponding divisors must be used).
In the decimal number system, we have 10 tolerance values. To spread them evenly over -1 to +1, subtract (10-1)/2 = 4.5 from the digit and then divide by 4.5. So digit value "0" gives -1 and digit value "9" gives +1.
With these notes, if the value for component B is e.g. 680Ω 1% the LTspice resistor value becomes R={ 680 * (1 + 0.01*( floor(run/10)- floor(run/100)*10 -4.5)/4.5 )}
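The floor() digit-extraction arithmetic is easy to sanity-check outside LTspice; here is a Python stand-in for the same expressions (digit position 1 corresponds to the text's floor(run/10) - floor(run/100)*10 expression, and the 680Ω 1% example value is the one from the text):

```python
import math

# Extract decimal digit d (0 = least significant) from the run counter,
# then map digit 0..9 onto a -1..+1 tolerance factor, as in the text.
def digit(run, d):
    return math.floor(run / 10**d) - math.floor(run / 10**(d + 1)) * 10

def resistor(nominal, tol, run, d):
    return nominal * (1 + tol * (digit(run, d) - 4.5) / 4.5)

# A 680 ohm 1% part assigned to digit position 1:
print(resistor(680, 0.01, run=0,    d=1))   # 673.2 -> all-minimum corner
print(resistor(680, 0.01, run=9999, d=1))   # 686.8 -> all-maximum corner
```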
Below is a comparative result. I didn't have the diode, so I left it out.
The first waveform is the total result, the second waveform zoomed in at the maximum values, the third waveform zoomed in at the minimum values, the last waveform is showing the values of the resistors. |
H: How to run a hair trimmer's motor normally (without pulsing) using a 12V/1A charger? Its original cell battery uses 1.34V and 0.34A
I want to run a small DC motor (normally powered by a 1.2V 600 mAh cell battery) from a 12V charger. The problem is that when I connect the motor to 12V, 5V or 3.3V, the motor starts pulsing (it runs for about 0.5 seconds, stops for 0.5 seconds, and the cycle repeats), but when I connected a 3.7V 18650 cell battery directly to it, the motor ran at full speed. Furthermore, I connected the motor to a 5V power supply, but again the power supply turned off and on repeatedly. Last but not least, when I brush the power supply terminal against one of the motor terminals so that it sparks, the motor starts rotating normally; without the sparking it keeps pulsing.
Kindly tell me how to run the motor smoothly and what components should be used. Basically I'm trying to power a Kemei Hair Trimmer using a 12V charger.
AI: Your power-supplies are not capable of putting out enough power to run the motor, as a result, when you connect the motor, the power-supply goes into shutdown.
Maybe add a DC-DC buck converter with an input current limit between the supply and the motor, and set the current limit such that the power supply is happy.
Search "CC-CV DC-DC Buck" for some examples, prices start at around $1.50 |
H: Can I use the power from a computer to run my board?
I have an industrial PC which has some USB ports, and I want to use one of those ports to get 5V and convert it to 3.3V (with decoupling caps). But I also need to use that 5V to drive 12 relays on my board.
My question is: can I directly use that 5V, or should I add some filters or something? I worry about the computer: when I connect them, can any current flow back and destroy the computer? Here is the schematic of my power port and 3.3V output.
AI: Officially, a USB device gets 1 power unit (100mA) when nothing more has been negotiated. It's slightly more complicated if you add standby power requirements to the equation, since this power is time limited, but I have yet to see a host that enforces this.
After the 1 power unit (100mA) you can ask for more. This can be rejected.
However, many PC USB 2.0 ports are always capable of 500mA unless otherwise specified. A USB 3.0 port can negotiate up to 900 mA; most likely this will be available without special enumeration.
Higher voltages are only available through USB PD over Type-C; you must use a USB PD controller for that.
One other consideration is the maximum inrush specification. I recall for USB 2.0 this being 4.7uF, which is quite low.
Enough of the XY problem. You are talking about an industrial PC. This is most likely fed from a 24V uninterruptible supply bus which you can also use for this. That is a much more suitable and standard way of doing things.
H: Memory capability and powers of 2
Why are computer memory capacities so often powers of two, such as 2^10 = 1024 bytes?
I think it is something related to the binary logic, but I do not understand the precise reason for that.
AI: A 1024 x 1 memory chip requires 10 address lines and you get full utilisation of all addresses. Now, if someone brought out a 600 x 1 memory chip, it would still need 10 address lines. It can’t use 9 because that could only uniquely define 512 memory positions.
Then think of what would happen if someone wanted to use two of the 600 x 1 memory chips to give a combined memory size of 1200. How would the address lines (plus 1 more) cope with numerically embracing each address slot uniquely and, if there is an MCU incrementing through memory in order to store contiguous data, that MCU would need special knowledge about the binary address numbers that are unused. |
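A small sketch makes the address-line argument concrete (the only assumption is that the minimum number of address lines for N locations is ceil(log2(N))):

```python
import math

def address_lines(locations):
    """Minimum number of address lines needed to address `locations` slots."""
    return math.ceil(math.log2(locations))

print(address_lines(1024))  # 10 lines, and all 2**10 = 1024 addresses are used
print(address_lines(600))   # also 10 lines, but 1024 - 600 = 424 addresses wasted
print(address_lines(512))   # 9 lines, fully used
```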
H: Replacing Sukam fusion 3.5kva Mosfet IRF2807
The mosfets on my Sukam Fusion 3.5kva inverter have blown, I want to replace them.
they have what looks like the following written on them
IRF2807
RP526Y
I can find IRF2807 but not IRF2807 RP526Y, would I need to replace the blown ones with the exact same mosfet?
P.S -This is a home DIY so, I'm not well versed with electronics per se.
AI: You can replace them with any IRF2807, but be sure to check the gate drive of each MOSFET bank, or they might blow again.
First make sure that the drive to them is still proper.
H: How to determine cross section of cable
I'm looking at this table that suggests a 0.5mm2 cable could withstand 720W at 3A.
At the same time I used a cable size calculator online to determine the size of cable I would need to use for a 12V60A which is also 720W and it gave me 8mm2 as the result.
And thinking about it, it does make sense as current is a measure of electrons per unit time, being physical objects (not quite but I come from a programming background so that also makes sense to me) the more they are the more space they would need to move through. But where is the voltage in the equation, if we imagine voltage being the strength with which the electrons are moving it would also make sense to need a stronger conductor the higher the voltage and it doesn't seem to be the case? Is conductor cross section determined solely by amperage? Could a 0.5mm2 cable carry 3A at any voltage?
Quite a bunch of questions I posed there but I hope you get where my curiosity is coming from.
AI: The required wire size is determined by the current it is expected to carry, not by voltage or power. The voltage will determine the required insulation thickness or type.
The required wire size for a given current is determined by both resistance heating and the voltage drop across the wire's resistance.
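A short sketch makes the point concrete. The 10 m run length is an assumed example (not from the question), and the resistivity is for copper; the same 720 W becomes an easy or impossible load for a 0.5 mm² wire depending only on the current.

```python
# Rough check of why current, not power, sets the conductor size.
RHO_CU = 1.72e-8  # ohm*m, resistivity of copper
LENGTH = 10.0     # m, assumed example run length

def wire_drop(current_a: float, area_mm2: float) -> tuple[float, float]:
    """Return (resistance in ohms, voltage drop in volts) for one conductor."""
    r = RHO_CU * LENGTH / (area_mm2 * 1e-6)
    return r, r * current_a

r1, v1 = wire_drop(3, 0.5)    # 720 W at 240 V: only 3 A flows
r2, v2 = wire_drop(60, 8)     # 720 W at 12 V: 60 A, hence the 8 mm^2 result
r3, v3 = wire_drop(60, 0.5)   # the same 0.5 mm^2 wire at 60 A would be hopeless
print(f"0.5 mm^2 @  3 A: {v1:6.2f} V drop")
print(f"8 mm^2   @ 60 A: {v2:6.2f} V drop")
print(f"0.5 mm^2 @ 60 A: {v3:6.2f} V drop")
```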
H: LTC6991: unexpected output signal
I'm trying to make a 50% oscillator with a Tout=10ms period. Instead, I get a 5.7ms period signal, and an off pulse (low level) of 150µs !
I chose POL=0 and RESET is grounded. From table 1 of the datasheet, I pick the 2nd line : DIVCODE=1, NDIV=8, R1=1Meg, R2=100K. Vdd=3.3V, Vdiv=0.29V => Vdiv/Vdd=0.088 (OK, should be 0.091). Then, I calculate Rreset = 50k/1.024ms*Tout/NDIV=61K, I pick 68K.
What's wrong ?
AI: Although for both schematics the 1 MΩ resistor is between the right middle pin and right lower pin, you have swapped the pins in your schematic.
In your schematic resistor R7 is connected between OUT (pin 6) and DIV (pin 4) instead of V+ (pin 5) and DIV. |
H: What does the minus sign mean in measurements in datasheet footprint drawings?
I have seen this more than once where there is a measurement such as "4-0.7". In this context, it clearly does not mean 4 to 0.7 mm so what does it mean. See the attached picture:
Link to datasheet: https://www.ckswitches.com/media/2780/pts526.pdf
AI: It looks to me that it means four 0.7 mm wide pads.
Another way of writing it would be 0.7 (4 places) |
H: Why does the max current capacity decrease with the number of cores for the same AWG?
I couldn’t find a similar or duplicate question yet. Regarding this table, for instance, a 24 AWG wire can carry a max of 3.5 A if it is single-core, but if it is seven-core it can only carry a max of 1.4 A.
Am I interpreting the information wrong?
Is this table valid for DC current as well? What really creates such big difference with different number of cores of the same AWG cable?
AI: There are two points of confusion here:
The difference between a CABLE vs a WIRE
The difference between a CORE, STRAND, and CONDUCTOR
A wire is just the conductor (either solid or multi-stranded) and some insulation whereas a cable is the entire assembly of conductor or conductors, the insulation, shielding, jacket, armour, and tensile cord, etc.
Note, that wire and cable sometimes get used interchangeably.
A strand unambiguously refers to the components of conductive metal in a wire that come together to conduct a single electrical current (whether one big strand or many strands twisted together).
A conductor unambiguously refers to the conglomeration of strands and each conductor conducts just a single electrical signal.
A core has ambiguous usage and may refer to a conductor or a strand. Core and strand also get used interchangeably sometimes, so it can get confusing. For example, I always say solid-core to refer to single-stranded, but I never say multi-core to refer to multi-stranded; I just say multi-stranded. I also say multi-conductor when referring to the number of wires in a cable, and I had never heard the word core used until today, but I knew what it was when I saw it. So yeah, it can get confusing if you're not familiar.
When someone says a cable is xAWG, it means that each conductor/wire in the cable is x AWG. It does not mean that the cable as a whole is that AWG.
This is single-strand (or solid-core), single-conductor cable:
This is multi-stranded (I have never heard this referred to as multi-core), single-conductor cable:
This is solid-core, multi-conductor cable:
This is multi-stranded, multi-conductor cable:
Therefore, a multi-core cable is not the same as multi-stranded wire. A multi-core cable is the conglomeration of multiple wires into the same cable so that multiple signals/currents can be conducted over the same cable.
Now the definitions are out of the way...you might ask: "Why does the ampacity decrease with more conductors/wires in a cable? After all, the resistance of each additional conductor/wire remains the same. If you double the number of conductors/wires, that halves the resistance of the cable. If you double the number of conductors, you double the ampacity of the cable."
Yes, adding more conductors/wires to a cable does increase the ampacity of the cable as a whole. And yes, doubling the number of conductors/wires also halves the resistance of the cable as a whole. And yes, the resistance of each additional conductor/wire added remains the same.
However, resistance is not the only thing that determines ampacity. Heat dissipation plays a role too, and with more conductors/wires bundled close together, there is not only less airflow and cooling, but there are also extra heat sources around each individual conductor/wire. As a result, there are diminishing returns as you add more and more conductors/wires to a cable.
For example, a lone conductor/wire might carry 1A. But adding a second conductor/wire added might only add 0.9A of extra ampacity for a total of 1.8A. So the overall ampacity of the cable as a whole has increased from 1A to 1.8A, but the ampacity per conductor/wire has decreased from 1A to 0.9A.
From that perspective, it should make perfect sense why more conductors/wires in a cable reduces ampacity per conductor/wire, even as it increases the ampacity of the cable as a whole. |
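As a purely illustrative model (the 0.9 derating factor per added conductor is invented for this sketch, not taken from any standard or table), the diminishing returns look like this:

```python
# Toy derating model: each added current-carrying conductor is assumed to
# shrink the per-conductor ampacity by a fixed factor, because each one is
# both less cooled and heated by its neighbours. Numbers are illustrative.
def per_conductor_amps(base_amps: float, n: int, derate: float = 0.9) -> float:
    """Per-conductor ampacity after bundling n conductors together."""
    return base_amps * derate ** (n - 1)

def cable_amps(base_amps: float, n: int, derate: float = 0.9) -> float:
    """Total ampacity of the whole cable."""
    return n * per_conductor_amps(base_amps, n, derate)

for n in (1, 2, 4, 7):
    print(f"{n} conductors: {per_conductor_amps(3.5, n):.2f} A each, "
          f"{cable_amps(3.5, n):.2f} A total")
```

The cable's total ampacity keeps rising with n, while the per-conductor rating keeps falling, which is exactly the pattern in the table from the question.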
H: LDO voltage regulator, no output
I'm making my own board based on the Arduino. To power it I use one LIR2032 cell (a 3.6 V li-ion button cell) driven through a 3.3 V LDO voltage regulator. So far I have tried two different regulators:
MIC5504-3.3YM5-TR
LD59015C33R
and neither of them worked; both gave no voltage at the output.
So far I have tried:
connecting the ENABLE pin to VIN (on the second one it could be left floating; the PCB has a design error with EN disconnected, but I've already corrected it!)
replacing part (in case of ESD)
correcting joints (in case of cold joints, multimeter shows proper connection)
The voltage regulator is wired the same way as shown in the datasheet
.
The board works when powered via other source of power so it's not the case of it just not working. Trying different batteries and higher voltages did not solve the problem, output still shows 0V.
I have no idea what could be wrong, I hope you can help me with that!
If you have any other regulators to recommend please comment, board needs no more than 100mA.
(Yes component is bigger than the pads :) MIC5504 is slightly larger than LD59015 but I managed to solder it properly)
AI: Pad 3 is not connected to BP+, unlike shown in the schematic. This means EN is disconnected and the LDO won't power up.
I am not certain that the schematic shows a proper connection to the pin, as there is a 90° angle. This may confuse the software enough to not make a connection here. |
H: Clamp a D/A to prevent output below 0V
I am using a USB D/A converter to drive a high-voltage op-amp with a signal between 0 and 3 V at 10 kHz. The D/A I am using is capable of outputting between -10 V and +10 V and unfortunately seems to always output a 1 ms long glitch of -10.7 V around 20-30 ms before the signal generation begins.
Although I am trying to get the vendor to supply a software fix, I want a hardware solution to ensure the output can't drop below 0V even in the event of a software glitch.
I am aware of diode clamping circuits but the lowest voltage Schottky I can find is 220 mV with the exception of a very strange diode that seems to have a built-in FET (SM74611), which I'm guessing is not meant for this application.
I would like to keep the output above -100 mV and preferably above -10 mV. Are there any other "passive" solutions I'm overlooking that could accomplish this or do I have to go with something active like an op-amp?
If active is the only way, can someone suggest a suitable circuit?
AI: You could use a precision rectifier, something like the below circuit (you can modify the op-amp type as required, note the kink in the time domain where the op-amp recovers from saturation).
simulate this circuit – Schematic created using CircuitLab
This will get down to within µV of zero depending on the op-amp. If you need to go a bit lower, bias the top end of R4 with a suitable resistor to a negative source, e.g. a couple of megohms to -5 V will allow it to go to -10 mV.
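As a rough sanity check of that bias trick (a sketch only: R4's value is not given in the schematic here, so the 4.7 kΩ below is purely an assumed example, and the expression is the usual small-offset approximation for a bias resistor much larger than R4):

```python
# Hypothetical numbers: R4 = 4.7 kOhm is an ASSUMED value, not from the
# schematic above. The added resistor injects current from -5 V into the
# summing node, shifting the clamp level by roughly V_neg * R4 / R_bias.
R4 = 4.7e3     # ohms (assumption)
V_NEG = -5.0   # volts

def bias_offset(r_bias: float) -> float:
    """Approximate output shift for a bias resistor r_bias to V_NEG."""
    return V_NEG * R4 / r_bias

print(f"{bias_offset(2.2e6) * 1000:.1f} mV")  # a couple of megohms -> about -10 mV
```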
If it's actually going to go below zero in normal operation and trigger the recovery from saturation you could find op-amps that recover faster (very few are actually rated for this operation) and/or you could use a different circuit such as this one:
simulate this circuit
There will still be a bit of distortion at the zero crossing because the op-amp output has to slew from -0.7V to +0.7V and that can't happen instantly, but there is no saturation. |
H: Bare minimum pins for VME64 Bus
I have a board on which core for VME64 bus is implemented and it communicates with other boards on VME64 bus using only 1 connector (P1, 5 rows, 32 pins on each row, total 160 pins) instead of 2 connectors (P1 and P2). I am trying to understand its working in both Bus controller and Remote terminal modes.
I have to modify the connector pinouts to make room for free pins on the P1 160-pin connector. There is a lot of info available about the bus, but what I could not find is the minimum number of pins required to use the VME64 bus.
I can live with only 24-address lines on the bus. But I don't know which other lines I can compromise on to make room for free pins on the P1 connector?
AI: I don't know what else is in the system and using the bus, but I would look at the bus arbitration lines: e.g. BR3, BG3IN, BG3OUT.
If there are not other bus masters (Processors or DMA controllers) they should be unused. |
H: Use a dropping resistor or a zener diode?
I have a situation where I need to power an Arduino Uno R3 from a ~16V power supply.
The problem is I have no idea of the implications of voltage dropping. I think I just need a 100 Ohm resistor.
So, in case I am right, my actual question turns into: When is it better to complicate a circuit with a Zener diode?
I will also attach a photo since I understand it is easier to read from it.
Edit 1: A neighbor suggested a 4V LED that would also indicate it's working.
AI: If you need to power it, a series resistor is not a good idea. What happens if your Arduino suddenly draws more current? The voltage drop across the resistor increases, so the voltage on your Arduino's supply decreases. The same applies to a Zener diode.
Use a simple linear regulator or a switching DC/DC converter module. |
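A quick sketch with the numbers from the question (a 16 V source and the proposed 100 Ω series resistor) shows how badly the board's supply voltage swings with load current:

```python
# Why a series resistor is a poor "regulator": the Arduino's supply voltage
# depends directly on how much current it happens to draw at that moment.
V_IN = 16.0     # volts, from the question
R_DROP = 100.0  # ohms, the proposed dropping resistor

def arduino_voltage(load_ma: float) -> float:
    """Voltage left for the board after the series resistor's drop."""
    return V_IN - (load_ma / 1000.0) * R_DROP

for i_ma in (20, 50, 100, 150):
    print(f"{i_ma:3d} mA load -> {arduino_voltage(i_ma):5.1f} V at the board")
```

At 20 mA the board sees 14 V (too high), at 100 mA only 6 V; a regulator holds the output fixed regardless of load, which is why the answer recommends one.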
H: Common Mode Choke question
I have one question which I encountered during usage of common mode choke.
Question : "If a common mode choke is used on the main power / ground input, ensure that no I/O circuits effectively bypass the ground side of the common mode choke. For example, the shield of screened cables or I/O signal grounds may present a better AC ground to the module than the main power ground.
Common mode chokes may be needed on some I/O signals to prevent this."
Answer : "AC ground current may find an easier path to ground in the vehicle through IO circuits etc. rather than through the ground side of the common mode choke.
If this occurs, the function of the common mode choke is effectively neutralised, potentially causing conducted emissions concerns.
Normally signal grounds, cable shields would be connected to the "vehicle side" ground of the common mode choke, unless this causes a functional problem (e.g. accuracy of analog signals), in which case a common mode choke on analog / ground signals may also be needed."
I have the answer to the question also. But I am unable to understand it clearly. Can someone explain the question and answer in simple terms please.
What the question is asking me to check and how will I check in my design?
Thank you.
AI: For a common mode choke to be effective, all the supply current that passes through one limb of the two coils must return through the other limb. If your load also makes a connection to ground, then you have, in effect, short-circuited the second limb of the choke, and the common mode choke is no longer effective.
For this reason, it is better to isolate the load circuit from ground either by using a DC-DC converter or other methods such as opto-isolation. |
H: Validity of AWADDR once AWREADY is asserted in AXI4-Lite interface
This question is a follow-up to Relationship between AWVALID and WVALID in AXI4-Lite interface, from which I learned that AWVALID and WVALID have no specific timing relationship.
Now I have a slightly different question. If AWREADY is asserted in response to AWVALID, is there any guarantee that AWADDR will stay asserted until WREADY, BVALID, or BREADY is asserted? I'm only asking because I have seen some code that looks at WVALID alone to complete the write operation, totally neglecting AWVALID and keeping AWREADY=1 at all times, with the assumption that AWADDR will still be valid. Is this perfectly fine to do?
AI: No. When AWREADY is asserted with AWVALID high, AWVALID must be deasserted on the next clock edge, or the other signals must be updated to reflect the next transaction. With AWVALID deasserted, AWADDR can take on any value. You cannot rely on the value of any of the AW* signals after completing the handshake by asserting AWREADY.
A module that assumes differently will then depend on the specific implementation of whatever component is upstream, something which could lead to hard to find bugs and compatibility issues. I also think it is a violation of the spec to do something like that.
You have two options here: either store the value you need and complete the transfer immediately by asserting AWREADY, or hold off on asserting AWREADY until you no longer need the value. |
H: Question about differential signals and feedback
Let's consider, for example, this circuit:
During lessons, our professor always assumed perfect differential input (two signals with the same dc value and with equal and opposite amplitudes). As a consequence node 1 will be an ac ground due to symmetry and the small signal differential gain can be easily found:
Now the question: when I close this circuit (or in general every circuit with a differential pair stage, which is the input block of an op-amp) with negative feedback, I will not have a perfect differential input, thus I am not allowed to use the previous differential gain (which was actually found under the assumption of differential input). Let's consider for example this basic circuit:
You can see that the non-inverting terminal is fixed to the analog ground, thus it can not change in a differential way with respect to the inverting terminal.
In a similar question I wrote, I've been answered that actually you can always write a couple of signals as the sum of a common mode signal and a differential signal, and since a well-designed op-amp has a common mode gain wihich is much smaller than the differential gain, we can neglect the common mode gain (and thus use only the previous expression for the differential gain). Now I would like to have some hints on how to proceed with the analysis in this case. For example, considering the previous inverting configuration, I tried to decompose the input of the op-amp:
where vx is the voltage at the inverting terminal. Is it correct? How to proceed with the analysis?
Thank you
Edit for the comment:
For the telescopic configuration, the differential gain was found under the hypothesis of differential input signals:
When we close the feedback around it we get:
AI: Brief Background
Suppose you have a linear network which has two input ports with input voltages \$V_1\$ and \$V_2\$ as shown in figure below:
Then, since \$V_1 = \frac{V_1-V_2}{2}+\frac{V_1+V_2}{2}\$ and \$V_2=\frac{V_2-V_1}{2}+\frac{V_1+V_2}{2}\$, we have:
Then you can transform the circuit as shown below:
Here the common mode voltage is: \$V_{cm} = \frac{V_1+V_2}{2}\$ and the differential voltage is: \$\frac{V_{diff}}{2} = \frac{V_1-V_2}{2}\$.
Since the circuit is linear, superposition is valid. So we can say that the total response will be sum of these two.
The first one is the common-mode circuit and the second one is the differential circuit. Here you can use all the tricks for the differential half and the common-mode half which you may know.
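The decomposition above can be verified numerically; this small sketch (not part of the original answer) just confirms that any pair of port voltages splits into a common-mode and a differential part and reassembles exactly:

```python
def decompose(v1: float, v2: float) -> tuple[float, float]:
    """Split a pair of port voltages into (common-mode, differential) parts."""
    vcm = (v1 + v2) / 2
    vdiff = v1 - v2
    return vcm, vdiff

def recompose(vcm: float, vdiff: float) -> tuple[float, float]:
    """Rebuild (V1, V2) from the common-mode and differential components."""
    return vcm + vdiff / 2, vcm - vdiff / 2

vcm, vd = decompose(1.25, 0.75)
print(vcm, vd)                 # 1.0 0.5
print(recompose(vcm, vd))      # (1.25, 0.75) -- superposition recovers the inputs
```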
Your Example
The complete circuit for the example you provided will be:
Here the two inputs are: \$V_1=V_{cm}+V_{in}\$ and \$V_2 = V_{cm}\$.
If you use superposition here with \$V_{cm}=0\$, you get the circuit which you have shown in your question. This is the differential part of the circuit.
If you instead make \$V_{in}=0\$, you get the common-mode circuit:
I leave it to you now to analyze it. |
H: Building a byte-addressable memory
I am building a memory module:
32 bits wide,
parallel, and,
byte-addressable.
I did some research and I could not find a memory IC that suits my needs.
It must be able to:
StoreWord,
StoreHalf and,
StoreByte (RISC-V), with or without an offset.
One suggestion was to build it from four 8-bit RAMs. I simulated it in Logisim-Evolution, masking out unwanted data as suggested, but even in its incomplete state it seems a little too large.
I plan to build it in real life, so it must have as few components as possible, but I just cannot think of a way to build it. How is it done in industry?
Thanks for any help
AI: ... even in the uncomplete state, it seems a little too large.
Nope, looks about right to me. I used to build motherboards for engineering workstations (overgrown PCs) in the 1980s, and this is exactly the sort of thing we had to do.
The right side of your drawing is the actual memory (note that one of your modules has a different size from the others). The left and center sections are what we call "byte steering" logic. This is what you need if your CPU isn't doing this for you internally.
You should see how much fun it is to interface a Motorola 680x0 CPU bus to an Intel (PC/AT) peripheral bus. This ended up being one of the first places we used an FPGA, because of the complexity. |
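The byte-steering logic the answer describes can be sketched in software. This hypothetical helper (little-endian byte-lane numbering is assumed; it is not from the original question) shows which of the four 8-bit RAMs would get a write enable for each RISC-V store size:

```python
# Byte-enable generation for a 32-bit memory built from four 8-bit RAMs.
# Lane 0 is the least significant byte (little-endian assumption).
def byte_enables(addr: int, size: int) -> list[bool]:
    """size in bytes: 1 (StoreByte), 2 (StoreHalf) or 4 (StoreWord).
    Returns one write-enable flag per byte lane."""
    offset = addr & 0b11
    assert size in (1, 2, 4) and offset % size == 0, "misaligned access"
    return [offset <= lane < offset + size for lane in range(4)]

print(byte_enables(0x1003, 1))  # [False, False, False, True]
print(byte_enables(0x1002, 2))  # [False, False, True, True]
print(byte_enables(0x1000, 4))  # [True, True, True, True]
```

In hardware this is a small decoder on the low two address bits plus the size signal, which is exactly the left/center "byte steering" section of the drawing.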
H: MOSFET voltage drop and LED flickering
I've build a LED strip controller using NodeMCU and a transistor STP16NF06FP according to the schematic below:
The LEDs are powered from a 24 V power supply and I want to regulate their brightness using PWM on the MCU. For simplicity's sake I output 255 on PWM pin D5, but the voltage output is ~16 V instead of ~24 V, and the LEDs are flickering.
When I power the NodeMCU from USB instead of the voltage stabiliser there is no flickering, but the voltages remain the same.
How do I get rid of the flickering and get a higher voltage? The max output from the power supply is 0.5 A, which is enough to power the LEDs.
When I mount an IRFZ44N instead of the STP I get ~16 V but the LEDs barely light.
With a TIP120 there is no flickering but the output is ~16.7 V (so still a voltage drop; I wanted to move to a MOSFET as these should have less voltage drop than an NPN).
Here is the code for NodeMCU https://bitbucket.org/mdczaplicki/smarthome/src/master/node_mcu/node_mcu.ino
AI: The STP16NF06FP requires 10V Vgs to drive it reliably fully on. You are giving it 3.3V approximately.
You either need to find a logic-level MOSFET that is rated for 3.3V or lower Vgs or make a gate driver circuit to give more voltage to the gate. |
H: Pins for VME64 Bus arbitration
I have a board that has VME64 core implemented in it and this board communicates with other boards via VME64 bus. All boards share the same backplane motherboard which has J1, J2 connectors on it.
If the following pins on the board are connected as follows:
BG0IN is directly connected with BG0OUT (onboard).
BG1IN is directly connected with BG1OUT (onboard).
BG2IN is directly connected with BG2OUT (onboard).
BG3IN is directly connected with BG3OUT (onboard).
BR0 to BR3: no connection.
IRQ1 to IRQ7: No connection.
I could not understand how will the bus arbitration work if the above pins connections are in place on the board? Is it some fix mode for 'Bus master' or 'Remote terminal' mode?
AI: If a VME card never requests the bus, it would not need to monitor or control the bus-grant daisy-chain.
Thus in that case it can always pass BGin direct to BGout.
A card with only memory or peripherals would be such a card. |
H: KiCad where is the priority setting of filled zones?
The documentation shows a dialog for filled zones in which you can set up the priority of the zone.
I'm using KiCad 5.1.2 and I can't find this setting anywhere. Here's a screenshot of the filled zone dialog with 5.1.2:
It seems that the documentation is from 2017 which might be a bit old.
Where can I find this info?
AI: If it were a snake, it would have bitten you :) |
H: Finite State Machine and Reset Signal
let's consider a certain finite state machine, for instance a Mealy Machine:
I was told that it cannot work properly in absence of a reset signal (for the State Register), since we would not know the initial state of the device at the moment in which it is switched on.
But I do not understand clearly this concept. In fact, a Mealy machine may be described by an ASM diagram, for instance this one:
Let's consider the instant in which it is switched on: why cannot it work properly without a reset button?
If it is not present, I think that the machine will follow anyway the related ASM diagram correctly, and the difference is simply that it may start from another state (for instance from s1 instead of s0).
AI: You need to remember that the FSM needs to be built with actual physical components (even if it is in an FPGA). An FSM needs memory to keep track of its states, and most of the time these memory elements are flip-flops. Upon power-up, a flip-flop can come up in one of three states: high, low, or in-between (metastability, which means both transistors are on, and is a really bad state to be in).
The first reason the state machine needs to be reset is to prevent metastability. Many FPGAs do this with an asynchronous reset.
The second reason is to get the state machine into a pre-determined state. For many applications, having a state machine starting at a random point is unacceptable. |
H: How to simulate simple on/off, Single-Pole-Single-Throw switch?
I am trying to simulate ON, OFF switch in this simple circuit:
V1 0 1 1.5v
Rfan1 3 4 1
S1 1 4 SW ON
Rwire1 6 5 0.00001
VV1 0 2 dc 0
VRfan1 3 5 dc 0
VRwire1 2 6 dc 0
.dc V1 1.5 1.5 1
.print dc i(VV1) i(VRfan1) i(VRwire1)
.end
but I got this error:
ngspice stopped due to error, no simulation run!
the circuit I am trying to simulate is like this one:
if I replace the switch with a resistor, the circuit will work normally.
any idea?
EDIT
I have fixed my switch as the following:
S1 1 4 98 99 mySwitch ON
.model mySwitch SW vt=0 vh=1 ron=1n roff=10k
AI: I get the error:
Error on line 3 :
s1 1 4 sw on
Unable to find definition of model - default assumed
The definition of your switch is incorrect.
It should be of the type SXXXXXXX N+ N- NC+ NC- MODEL <ON><OFF>.
You should:
define the controls NC+ and NC-
define the model (which has the reserved name SW in your case) by using either .model mySwitch SW(...) for a voltage-controlled switch or .model mySwitch CSW(...) for a current-controlled one
not use the name SW, as it is reserved as said above (I think that's why it crashes).
So, using the fictional nodes 98 and 99 that control the switch:
s1 1 4 98 99 mySwitch on
.model mySwitch SW vt=1 vh=0.2 ron=10u roff=1G
Do read the manual, section 3.2.14 Switches and 3.2.15 Switch Model (SW/CSW).
In addition to Sorenp's answer, you can also implement the switch by making the value of Rwire1 conditional:
Rwire1 6 5 R={if(time<2,1G,if(time>3,1G,10u))}
Between t = 2 s and t = 3 s, Rwire1 has its original value. Before and after these time stamps the resistance is 1 GΩ.
H: SystemVerilog FSM not working correctly
So, I was trying to create an FSM for my module, which is supposed to control five sensors. I thought I did it, but when I synthesized it, I saw that the code was synthesized as a bunch of buffers and nothing else.
I think I do not understand what an FSM can or can't do, but I really couldn't find the problem.
module RX_to_TX(
input logic i_clk, i_reset,
input logic[7:0] i_from_Rx ,
input logic[7:0] i_Dev1 , i_distance_Dev1,
input logic[7:0] i_Dev2 , i_distance_Dev2,
input logic[7:0] i_Dev3 , i_distance_Dev3,
input logic[7:0] i_Dev4 , i_distance_Dev4,
input logic[7:0] i_Dev5 , i_distance_Dev5,
output logic[7:0] o_Tx ,
output logic o_Tx_DV
);
logic[7:0] next_Tx;
logic[1:0] curState , next_State , DistanceState , StopState;
always_ff @(posedge i_clk )
begin
if ( i_reset == 1'b1)
begin
o_Tx <= 0;
curState <= DistanceState;
end
else
begin
o_Tx <= next_Tx;
curState <= next_State;
end
end
always_comb
begin
next_Tx = o_Tx;
case(curState)
DistanceState:
begin
if( i_Dev1 == i_from_Rx )
o_Tx = i_distance_Dev1;
if( i_Dev2 == i_from_Rx )
o_Tx = i_distance_Dev2;
if( i_Dev3 == i_from_Rx)
o_Tx = i_distance_Dev3;
if( i_Dev4 == i_from_Rx )
o_Tx = i_distance_Dev4;
if (i_Dev5 == i_from_Rx)
o_Tx = i_distance_Dev5;
next_State = StopState;
o_Tx_DV = 1'b1;
end
StopState:
begin
o_Tx = 8'b11111111;
o_Tx_DV = 1'b0;
end
endcase
end
endmodule
AI: Ok, I figured out my stupid mistake,
I changed the o_Tx to next_Tx in the nested case statement and it worked. |
H: If you wire together two TTL outputs, which one wins?
If you connect together two outputs from 7400 NAND gates, which output will dominate - high or low?
Background: I'm reverse-engineering a 1969 circuit board, and in multiple cases they tie together two outputs from 7400 NAND gates, presumably to make a wired-OR. This would be reasonable if they used open-collector chips, but these are SN7400N chips, so it seems a bit sketchy. (In other places, they leave an input floating, presumably to high, which also seems sketchy.)
AI: The low one wins; TTL outputs have a stronger pull-down than their pull-up.
TTL inputs are current sources so they float high naturally.
When high, there's 130 Ω and a diode between VCC and the output pin.
That's going to limit the high-state output current to about 25 mA.
Combining outputs in this way will hurt performance: the device will produce more heat and probably be slower than if you used actual AND gates to combine their outputs, but if the main goal is to use fewer parts this can be a win.
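The contention current can be ball-parked from the totem-pole pull-up. The individual device drops below are assumed typical values for illustration, not datasheet numbers:

```python
# Rough estimate of the current when a totem-pole high output fights a low
# output. ASSUMED drops: ~0.7 V for the pull-up diode, ~0.3 V for the
# saturated upper transistor, ~0.2 V at the winning low-side output.
VCC = 5.0
R_PULLUP = 130.0              # ohms, from the 74xx totem-pole output stage
V_DROPS = 0.7 + 0.3 + 0.2    # diode + saturated transistor + low output

i_contention = (VCC - V_DROPS) / R_PULLUP
print(f"~{i_contention * 1000:.0f} mA")
```

That lands around 29 mA, the same ballpark as the ~25 mA figure in the answer; the exact value depends on the assumed junction drops.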
H: Moving Door on Electrical Box Alters Signal
I have an experimental setup that I am building to, among other things, measure the flow rate of water through a tube. The flow rate is measured by a sensor that is powered by a 24 VDC source and provides a 4-20 mA DC signal. I am currently seeing an interesting issue where there is some amount of noise in the signal, but when I open the door of the electrical box by about an inch (the point to which it opens naturally when unlocked), the amount of noise drops by about a factor of 2.
Possibly relevant information (I am not an electrical engineer, so some of this may be irrelevant):
The electrical box is made of painted stainless steel
The 24 VDC power supply has a maximum current of 2.1 A (so a power rating of 50 W)
The flow rate signal is generated by an FTI LinearLink LA-5-C-MAF
The current is being detected by a NI cDAQ 9154 with a NI 9203 current card
I am using DIN blocks to make my electrical connections
This is what the electrical box looks like
What could be causing the change in noise when the door on the electrical box is opened or closed?
AI: the door is a reflector, if there is electrical noise produced inside the box closing the door will somewhat prevent its escape and increase noise levels inside the box
Try fitting ferrite noise suppresors on the power wires running in and the DC wires running out of the power supply, (put both wires through each)
alternatively you could add a window in the side of the box to let the noise out. |
H: What is this schematic symbol? Transistor ? Variable Inductor
I was looking through an old schematic and found two symbols that I didn't recognize:
Is this a PNP transistor? looking up the model number doesn't give much information.
Is this some kind of variable resistor or variable inductor?
AI: Is this a PNP transistor? looking up the model number doesn't give much
Building on the comments of Harry Svensson and jonk, this is a mesa PNP transistor. The mesa technique, in the early days of the transistor, was developed to improve the (then poor) HF response of devices by removing those parts of the base region which, due to their geometric structure, do not improve the \$\beta\$ current gain but raise the stored base charge \$Q_{bb}\$ and the base-collector capacitance \$C_{bc}\$, increasing the switching time and lowering the cut-off frequency of the device, generally slowing it down. The technique consists of etching the semiconductor around the emitter and base contacts: this creates a sort of plateau with respect to the collector region on the wafer around these contacts, and the Spanish word for this is "mesa".
Is this some kind of variable resistor or variable inductor?
This is precisely an analog delay line: it is a network which, within given frequency range and reasonable waveform distortion, produces at its output(s) a delayed version of its input signal, i.e.
$$
v_o(t)=v_i(t-t_D)
$$
where \$t_D\$ is the characteristic delay of the line. The model shown seems to be a multiple-tap delay line, i.e. a delay line offering \$N\$ outputs delayed with respect to the input by increasing delay times:
$$
\begin{split}
v_{o1}(t)&=v_i(t-t_{D1})\\
v_{o2}(t)&=v_i(t-(t_{D1}+t_{D2}))\\
v_{o3}(t)&=v_i(t-(t_{D1}+t_{D2}+t_{D3}))\\
\vdots\quad & \qquad\qquad\qquad\vdots\\
v_{oN}(t)&=v_i\left(t-\sum_{i=1}^N t_{Di}\right)
\end{split}
$$
In the case under examination, \$DE1\$ seems to be a 4-tap delay line where each tap adds a \$50\mathrm{ns}\$ delay with respect to the preceding one.
H: How does an accelerometer interpret g as ( +9.8 or -9.8 )
I am working with MPU6050 (accelerometer + gyroscope). Concerning about the accelerometer;
Do these sensors (accelerometers in general) use g as 9.8 m/s^2 or -9.8 m/s^2? (Is there a standard/convention about this?)
When I accelerate it in the +x direction, for example, it gives me a positive multiple of g, so I conclude that g should have been taken as positive.
But wanted to be sure by asking.
You can also clarify for me that whether it measures gravitational acceleration or the acceleration because of the normal force while standing horizontal(xy plane is level) on a flat surface for example.
Thanks.
Edit: I don't think this is a duplicate of my other question; this
question focuses on one thing: whether the accelerometer outputs are designed
considering g = +9.8 m/s^2 or -9.8 m/s^2.
AI: Think of the accelerometer as a spring and the displacement as the acceleration value (this is actually how it is done, kind of).
When you accelerate there is a force on the spring and from the displacement you can measure the magnitude of the acceleration. You need three springs, one in each direction to have a 3D accelerometer. Now we just agree that when the acceleration is to +X direction the value is positive and the same for Y and Z. We also agree that +Z direction is towards the sky.
Now let's focus on the Z axis spring. What happens to it when it is in free fall? Only the gravitational force applies, but it has the same effect on each atom of the spring, so the spring is not displaced. What happens when it is at rest? The gravitational force and the counter-force, the earth pushing upwards, cause a displacement. The displacement is in the same direction as when you accelerate towards the sky, which we agreed to be positive. Thus you get a positive 1 g when standing still. In other words, the accelerometer is blind to the gravitational force itself: at rest it can only measure the counter-force, which points upwards. |
H: Negative Feedback Factor
As of now, most of the negative feedback systems I have encountered have a feedback factor that is less than 1 but positive. However, in some situations it seems the system can behave as positive feedback if the feedback factor is not negative. I just can't comprehend how this negative attenuation is accomplished using only resistive networks, the ones I have used until now in my studies.
AI: I just don't seem to comprehend how this negative attenuation is
accomplished just using resistive networks
There are some reasons why this can happen, and in fact happen all the time:
The output signal polarity may be reversed (or have enough phase shift) due to the plant transfer function. When you take a sample of this signal you can feed it back to the input directly because it already is negative feedback.
If the plant transfer function doesn't reverse polarity, then the injection point of the feedback at the input may reverse it instead, thus having the same effect. That's the reason why we use the negative input of opamps for feeding back the output, for example.
These situations are effectively equivalent to what you call "negative attenuation" and are used extensively in amplifier design, etc. |
H: Rotary switch to turn LEDs on cumulatively
Problem:
I have a 12 position rotary switch and 11 LEDs that I want to switch on one after the other until they are all on (on pos 12).
Limitations:
It would be easy to use a μC, but I wanted to keep this simple and the part count as low as possible
Things I tried:
In my head I planned on using a diode for each pin of the switch to connect it to the previous one (works for about 2 LEDs), but I forgot that the diodes' voltage drops add up.
And using a diode for every connection (about 66) is just too much of a mess…
@jonk explains this approach perfectly in his answer below.
Question:
Any elegant ideas for what I could do to achieve that?
I'm a beginner and probably miss something :)
Other ideas
I found cumulative rotary switches, but they seem to be quite rare and I did not find one with 12 positions
Using an incremental rotary encoder to feed two shift registers instead (would this work? / might be easier to cave in and use a μC)
AI: This answer was written before the OP commented that s/he is using a 3.2 V supply.
simulate this circuit – Schematic created using CircuitLab
Figure 1. The simplest option if a high enough voltage supply is available. With SW8 closed D1 to D7 will light.
simulate this circuit
Figure 2. For lower supply voltages the chain can be split. In this case SW8 being closed lights D7 but also provides a ground to light all the LEDs in the upper chain.
You can further refine Figure 2 for lower supply voltages, but it would require more and more diodes.
Figure 3. A single-pole 12-way switch will suffice.
Constant current sources (by me):
AL5809 constant current driver.
Simple constant current driver. (This is in the negative line.) |
H: Data Lines on VME64 Bus
I am learning about the VME64 bus. It has only 16-bits of data bus on its P1 connector while no data pins on P2 connector.
Does it mean that VME64 is actually a 16-bit bus?
AI: Not sure where you took this information from...
Connector P2 has additional address pins A24...A31 and additional data pins D16...D31.
That is, P1 and P2 together offer 32 address and 32 data lines.
So, is VME64 just a 32-bit bus?
No: it can address and transfer 64 bits by multiplexing the available 32+32=64 lines. First it uses all 64 lines to specify an address, then it uses the same 64 lines to transfer data. Of course, this makes sense only when transferring bulk data. |
H: 555 timer that does not output low when reset
I'm trying to design a circuit with the following goals:
When not active, output can be either Vcc or floating
When triggered by pressing a button, output pulses several times to GND at some frequency F for a duration of N seconds
After the N second duration, the pulsing stops
My first thought was to use two 555 timers: one configured as a monostable timer with period N, triggered by the button. The output of the monostable timer is connected to RESET pin on a second 555 configured as an oscillator with frequency F. The output of this timer would be the desired output.
This design achieves goals #2 and #3 above. When triggered, the output of the monostable timer drives the oscillator's RESET high, causing it to pulse for the duration N. However, before the circuit has been triggered and after it is finished, the output from the oscillator is driven to GND by the second 555. This is undesired, since in my application I am pulsing an active-low device. I would like to understand if there is some way to either pull the output of this circuit to Vcc or let it float when not pulsing.
AI: You just need to use the output of the 555 to drive the base (through a resistor) of an open-collector NPN transistor. Something like this:
simulate this circuit – Schematic created using CircuitLab
In this case, I've chosen R1 to drive about 1 mA into Q1's base, assuming a Vcc of 5 V, and the need to sink less than about 50 mA. If your values are different, adjust accordingly. |
H: Why/when is AC-DC-AC conversion superior to direct AC-AC conversion?
I am currently studying wind power and the power electronics used for it. In wind power a generator is driven by wind, thus the resulting power is of widely varying frequency and amplitude. The power grid, in turn, has strict requirements for the input power in terms of frequency, phaseshift and sinusoidal form. For this reason, power converters are today used routinely in wind power.
The predominant way to get the power into the grid is to use an AC-DC converter followed by a DC-DC converter and a DC-AC converter. This seems rather complicated instead of using a single direct AC-AC converter. Why is the indirect conversion via the DC "in-between" route preferable?
(This is actually a repost from Engineering, since I only found out later that there is a more active, thematically fitting, non-beta Electrical Engineering.)
AI: There is a type of converter which can do this: the matrix converter.
In theory it can take many phases in and produce many phases out over quite a wide range of frequencies. It also has the additional benefit of needing no large power passives (in theory): no large capacitors, no large inductors.
However, there are two golden rules with matrix converters
Thou shalt not short circuit the supply
Thou shalt not open-circuit the load
It is point #2 that makes the topology impractical as a simple loss of power will cause the inverter to blow up.
There is a variant of the matrix converter called the cycloconverter which uses thyristors and does not suffer the same issues as a full matrix converter. It is, however, limited to synthesising an output frequency of only around 1/10th of the input frequency. This limitation is fine for marine applications, which typically use 400 Hz electrical supplies, so generating 40 Hz isn't too limiting for propulsion
So why AC-DC-AC instead of direct AC-AC ... The complications and limitations. A six switch inverter is extremely versatile. |
H: Unable to access the module error while trying to simulate a simple PLC Lad program
I am learning PLC programming and trying to simulate a simple program in Schematic Manager. But I'm getting an error that says "Unable to access the module CPU314c-2DP via the online interface". How can I solve it?
AI: One "trick" that often works is to type some of the exact error message into a search engine (with quotes around it) and see what comes up. In your case I find this in the Siemens PLC forum:
"I go to Options->Set PG/PC interface and select PC Adapter(MPI)...After that, PLCSIM going to work... " |
H: uC-adjustable harmonic oscillator circuit @100W
I'm looking for a circuit that performs a harmonic oscillation while its resonance frequency can be controlled with a microcontroller (preferably via an analog signal). The circuit has to drive a set of ultrasonic transducers with a maximum power of 100 W at ~ 50 V and a maximum frequency of 100 kHz.
Till now I have been using an H-bridge circuit for my low-power setup (using the 0D24K2 transducers), which dissipates most of the power by (dis-)charging the transducers' capacitance; I can't find any datasheet for them. When I now increase power and frequency, power dissipation will increase as well. Another disadvantage is that driving the transducers with a square wave generates unwanted higher harmonics, but I'd like to have a signal with a pure spectrum.
Some solutions that came to my mind are using:
an LC resonant circuit, but right now I'm unsure how to control its frequency with a microcontroller.
an oscillating circuit with operational amplifiers. I guess it might be difficult to generate the needed power and frequency at acceptable cost.
AI: There’s a good reason why piezo type transducers are driven brutally with a square wave and that is because response and decay times are minimised. So, if you don’t care much about the attack and decay times of energy you are trying to transmit then incorporate the device into an RLC circuit where it (the device) forms the bulk of the C element.
If you want fast response and an unambiguous off state then be brutal and drive with a square wave. |
H: Why can't I get correct reactive and real power in what seems to be an easy task
As the title says: why can't I get the correct real and reactive power?
The task says:
Source with voltage \$U = 100\angle{30} \$ is plugged on to impedance \$ Z = 3 + 4j \$. Determine real, reactive and apparent power.
what I did was:
I find apparent power: \$ \frac{U^2}{Z} = 2000\angle{6.87 } W \$
And to find the real power I tried to multiply cos(6.87) by 2000, and I get \$ 1985.65 \$, but the answer is supposed to be \$ 1200 \$. What have I done wrong? Isn't the real power equal to cosφ * apparent power?
AI: The angle of 30 degrees associated with your voltage supply is a complete red herring. Your voltage supply is 100 volts period because your load cares not one bit about the phase angle. All the load sees is 100 volts RMS.
So, without the safety net of a calculator, I can see the load impedance magnitude is 5 ohms and this takes 20 amps. That 20 amps flows through the resistive element of the load, 3 ohms, and therefore dissipates \$20^2 \times 3 = 1200\$ watts. The power-factor angle you should use is the impedance angle, \$\arctan(4/3) \approx 53.13°\$, not the 6.87° you got by carrying the source's 30° into the calculation. |
H: Negative Voltage Output From Capacitor
Circuit Link
I am very confused about the negative voltage on the capacitor; it can even light an LED, as you can see in the circuit.
My questions are;
1-Why does the capacitor give ~-2 V at its negative terminal when we tie its positive terminal to ground?
2-How can that LED work by connecting its positive to the battery's negative pole and its negative to our negative voltage?
I have been searching this for a long time; what I found about the first question is this site's "Some Capacitor Theory". Why does the capacitor act like this?
Please explain it as simply as you can, because I am interested in electronics as a hobby, and I really don't understand clearly when it is explained with formulas, mathematics, etc.
Circuit Image:
Thank you very much.
AI: The effect you are seeing is used for 'flying capacitor' voltage converters. Basically, the capacitor stores charge in one polarity, then is reconnected in a different polarity to discharge.
This same principle is used in a now-obsolete LED flasher chip called the LM3909. At a high level, the LM3909 resembles your circuit, except it does something clever: it connects the LED from the (-) end of the flying cap to the (+) of the battery, doubling the voltage across the LED. Its ability to run for long stretches of time on a very low battery is the stuff of legend.
BONUS: an analysis of the LM3909 here: https://cdn.hackaday.io/files/291791248394336/Discrete%20Version%20Of%20The%20LM3909%20Oscillator%20IC.pdf |
H: stm32f103 ADC strange behaviour
I am using an stm32f103c8t6 discovery board and was practising with the ADC peripheral. While using the Keil debugger I noticed something very strange: if the pin is not connected to a voltage source or ground, the reading is around 1900-2200.
The ADC is configured in 12-bit read mode. This seemed very strange to me. I will leave my code below if you want to check it.
#include "main.h"
#include "stm32f1xx_hal.h"
ADC_HandleTypeDef hadc1;
void SystemClock_Config(void);
static void MX_GPIO_Init(void);
static void MX_ADC1_Init(void);
uint16_t adc_voltage;
int main(void)
{
HAL_Init();
SystemClock_Config();
MX_GPIO_Init();
MX_ADC1_Init();
while (1)
{
HAL_ADC_Start(&hadc1);
while(HAL_ADC_PollForConversion(&hadc1,500));
adc_voltage = HAL_ADC_GetValue(&hadc1);
HAL_ADC_Stop(&hadc1);
HAL_Delay(500);
}
}
void SystemClock_Config(void)
{
RCC_OscInitTypeDef RCC_OscInitStruct;
RCC_ClkInitTypeDef RCC_ClkInitStruct;
RCC_PeriphCLKInitTypeDef PeriphClkInit;
RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_HSI;
RCC_OscInitStruct.HSIState = RCC_HSI_ON;
RCC_OscInitStruct.HSICalibrationValue = 16;
RCC_OscInitStruct.PLL.PLLState = RCC_PLL_NONE;
if (HAL_RCC_OscConfig(&RCC_OscInitStruct) != HAL_OK)
{
_Error_Handler(__FILE__, __LINE__);
}
RCC_ClkInitStruct.ClockType = RCC_CLOCKTYPE_HCLK|RCC_CLOCKTYPE_SYSCLK|RCC_CLOCKTYPE_PCLK1|RCC_CLOCKTYPE_PCLK2;
RCC_ClkInitStruct.SYSCLKSource = RCC_SYSCLKSOURCE_HSI;
RCC_ClkInitStruct.AHBCLKDivider = RCC_SYSCLK_DIV8;
RCC_ClkInitStruct.APB1CLKDivider = RCC_HCLK_DIV1;
RCC_ClkInitStruct.APB2CLKDivider = RCC_HCLK_DIV1;
if (HAL_RCC_ClockConfig(&RCC_ClkInitStruct, FLASH_LATENCY_0) != HAL_OK)
{
_Error_Handler(__FILE__, __LINE__);
}
PeriphClkInit.PeriphClockSelection = RCC_PERIPHCLK_ADC;
PeriphClkInit.AdcClockSelection = RCC_ADCPCLK2_DIV8;
if (HAL_RCCEx_PeriphCLKConfig(&PeriphClkInit) != HAL_OK)
{
_Error_Handler(__FILE__, __LINE__);
}
HAL_SYSTICK_Config(HAL_RCC_GetHCLKFreq()/1000);
HAL_SYSTICK_CLKSourceConfig(SYSTICK_CLKSOURCE_HCLK);
HAL_NVIC_SetPriority(SysTick_IRQn, 0, 0);
}
static void MX_ADC1_Init(void)
{
ADC_ChannelConfTypeDef sConfig;
hadc1.Instance = ADC1;
hadc1.Init.ScanConvMode = ADC_SCAN_ENABLE;
hadc1.Init.ContinuousConvMode = ENABLE;
hadc1.Init.DiscontinuousConvMode = DISABLE;
hadc1.Init.ExternalTrigConv = ADC_SOFTWARE_START;
hadc1.Init.DataAlign = ADC_DATAALIGN_RIGHT;
hadc1.Init.NbrOfConversion = 1;
HAL_ADC_Init(&hadc1);
sConfig.Channel = ADC_CHANNEL_0;
sConfig.Rank = ADC_REGULAR_RANK_1;
sConfig.SamplingTime = ADC_SAMPLETIME_41CYCLES_5;
HAL_ADC_ConfigChannel(&hadc1, &sConfig);
}
static void MX_GPIO_Init(void)
{
GPIO_InitTypeDef GPIO_InitStruct;
__HAL_RCC_GPIOC_CLK_ENABLE();
__HAL_RCC_GPIOD_CLK_ENABLE();
__HAL_RCC_GPIOA_CLK_ENABLE();
HAL_GPIO_WritePin(GPIOC, GPIO_PIN_13, GPIO_PIN_RESET);
GPIO_InitStruct.Pin = GPIO_PIN_13;
GPIO_InitStruct.Mode = GPIO_MODE_OUTPUT_PP;
GPIO_InitStruct.Pull = GPIO_NOPULL;
GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
HAL_GPIO_Init(GPIOC, &GPIO_InitStruct);
}
void _Error_Handler(char *file, int line)
{
while(1)
{
}
}
#ifdef USE_FULL_ASSERT
void assert_failed(uint8_t* file, uint32_t line)
{
}
#endif
AI: I think you are describing a so-called floating input. When there is nothing connected to an input (either digital or analog), the value you read is indeterminate, and it is unstable (meaning it can change on every reading you make). Always make sure you have connected whatever you want to read to the input, AND in cases where the input is not driven by an output (e.g. an 'uninitialized' output connected to the input, or while the MCU starts up), use a pull-up/pull-down resistor (some MCUs have one or both already built in, and you can select it when you configure the pin properties). |
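On the STM32F1 in the question, a digital input can be given a defined idle level by selecting the internal pull in the same HAL init style the posted code already uses (a sketch with an arbitrary pin choice; note that in analog mode, as used for ADC pins on the F1, the internal pulls are not available, so an unused ADC input needs an external resistor or should simply not be read):

```c
/* Hypothetical example: PA1 as a digital input with the internal
   pull-down enabled, so it reads a defined 0 when left unconnected. */
GPIO_InitTypeDef GPIO_InitStruct;
__HAL_RCC_GPIOA_CLK_ENABLE();
GPIO_InitStruct.Pin  = GPIO_PIN_1;
GPIO_InitStruct.Mode = GPIO_MODE_INPUT;
GPIO_InitStruct.Pull = GPIO_PULLDOWN;
HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
```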
H: LTC3112: Limited Output Current
So for the past couple of days I was testing the LTC3112 buck-boost IC from Linear Tech. I'm using it for my heated glove project to step the voltage across the heating element up or down. Using the typical application schematic as my test schematic with a 5 V input (while changing the inductor from 4.7 uH to 10 uH), I tested the output on an LED with a 1K resistor, and it was outputting 5 V with the appropriate current (of course). But when I switched to a piece of my conductive fabric (at 14 Ohms) as a load, the IC got damaged somehow and now draws 300 mA - 400 mA on the input side of my power supply even with no load attached, and I am no longer getting a 5 V output, compared to a draw of 0.027 A when the IC was working properly.
I'm just trying to figure out what went wrong and why the IC failed. The PWM/SYNC, RUN, and Vin pins all received 5 V. Unless my load somehow shorted itself (which I don't believe to be the case), this IC should have been capable of supplying the appropriate current. I honestly don't know what the issue is at this point.
https://www.analog.com/media/en/technical-documentation/data-sheets/3112fd.pdf
The yellow wire I'm point to/touching is Vout with no load connected.
AI: Assuming that you went through all of the proper component selection (for a DC-DC converter, all the components must be selected per the guidelines listed in the datasheet; you can't use just any inductor or capacitor you want):
With a setup like that, the IC is probably suffering from electrostatic discharge. Carpet is no place to troubleshoot circuits. The IC can supply 1 A at 5 V, so it's most likely not the heater. Replace the IC, find a bench and preferably an ESD mat with a strap. Then slowly 'ramp' up the load with lower and lower value resistors.
Another problem you may be facing is the poor parasitics of the breadboard.
The ESR of the inductor will be much higher once the contact resistance of the breadboard is added. In many DC-DC designs, even traces tens of mils long can create problems (milliohms make a difference).
The resistance of the pins plus the contact resistance is probably causing issues as well. If you get this up and running again, check the voltage bounce on the ground pin of the regulator with an oscilloscope before going to full current.
Also, the loop will not be properly compensated: there is roughly 20 pF of capacitance between breadboard rows, which could cause problems with the compensation and lead to instability in the converter's feedback loop.
Don't use a breadboard with DC-DC converters; use perfboard with soldering or wire-wrapping, and follow the PCB layout guidelines (even when soldering) |
H: Mosfet on or off
When do we say that an nMOS or pMOS transistor is "on" or "off"? Are there particular voltages across its terminals we should be looking at?
AI: For MOS, the voltage from gate to source, Vgs, controls if it is on or off. PMOS turns on when gate is lower than source by the threshold voltage, Vgs(th).
Let's say Vgs(th) = -2V for a particular P-channel MOSFET. Then whenever the voltage from gate to source is -2V or below, the MOSFET is on.
For NMOS, Vgs(th) will be positive. Let's say it is +2V. Then whenever Vgs is > 2V, the N-channel MOSFET will be on.
Vgs(th) will vary and will be listed in the datasheet. Generally, in order to really turn the MOSFET on strongly, you will want to exceed Vgs(th) by a good amount to make sure you are solidly in the "on" region.
I would be remiss if I didn't mention the body diode at this point. There is an intrinsic diode built into MOSFETs. Even when it is off, a PMOS allows current to flow (with one diode drop) from drain to source, and an NMOS allows current to flow (with one diode drop) from source to drain. |
H: What is the purpose of the optocoupler in this circuit to drive the mosfet/relay?
I'm currently trying to understand a circuit for an ESP8266 relay board. As you can see in the schematics below, an optocoupler is used to drive the relay MOSFET. I do not understand why they are using an optocoupler for the control signal.
As far as I know, an optocoupler is used to galvanically isolate two circuits from each other, to make absolutely sure that no voltage spikes, EM noise, ... will be induced into the microcontroller.
But in this case it doesn't make sense, because the relay uses a common GND and 3.3 V Vcc, so why isolate the control signal?
Do I miss something, why the optocoupler is still a good idea?
Thank you very much!
AI: This circuit appears to be a modified version of other similar boards which have the option of an isolated relay power supply.
My guess is that the optocoupler was left in for compatibility. The FET could be driven by GPIO0 directly, but then the signal would have to be high to activate the relay. The optocoupler acts as an inverter, pulling the FET gate high when GPIO0 is low.
By not changing the circuit the designer can be assured that it will work the same as those other boards, and users don't need to load a special version of the driver software into the ESP8266. |
H: Mosfet Threshold Voltage
From MOS theory we know that, in the case of a P substrate, a gate-bulk voltage higher than a certain threshold value creates an inversion layer, in this case made of negative charges. This is also described on Wikipedia (https://en.m.wikipedia.org/wiki/MOSFET).
But we always take as the threshold voltage the one between gate and source, and this seems to be in contrast with the previous description. The answer cannot simply be (as someone told me) "source and bulk terminals are often connected together"; there must be a deeper explanation.
For instance, let's consider a pass-transistor circuit made with an N-channel MOSFET.
The input signal is applied at the drain and the output signal is taken at the source. Its maximum value is VDD - Vthreshold, since above that value Vgs would be lower than Vthreshold, as we know from theory. From this analysis we understand that the voltage that switches on an NMOSFET is the one between gate and source. But here the source is not connected to the bulk, which is at GND. This does not seem to agree with MOS theory.
So, which is the voltage that switches on a NMOSFET?
AI: You should know that the electrons for the inversion layer formed under the gate come from the source terminal of the MOSFET; consequently, it is \$V_{GS}\$ that turns an n-MOS on. But the potential barrier between the source and the bulk depends on the bulk potential, so the threshold voltage itself depends on the bulk potential; this is known as the body effect. You can read more about it here: https://en.wikipedia.org/wiki/Threshold_voltage |
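Quantitatively, the body effect is usually written as (a standard textbook expression, not taken from the answer above; \$\gamma\$ is the body-effect coefficient and \$\phi_F\$ the Fermi potential):
$$
V_{T} = V_{T0} + \gamma\left(\sqrt{2\phi_F + V_{SB}} - \sqrt{2\phi_F}\right)
$$
so a source sitting above the bulk (\$V_{SB} > 0\$, as in the pass-transistor example) raises the threshold that \$V_{GS}\$ must overcome.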