H: MAX5725 SPI mode
I'm reading the datasheet of MAX5725 but I cannot understand which SPI mode it uses.
On page 18 I read:
[...] the data is shifted in synchronously and latched into the input register on each falling edge of the serial clock input (SCLK).
This limits the SPI mode to 1 (clock idle low, sampling on second edge) or 2 (clock idle high, sampling on first edge).
I also looked at Figure 1, where the SPI signals are shown, but unfortunately the idle state of the clock signal is not clear there. Going by this figure, both modes would fit.
From which other parameter should I understand the actual SPI mode it uses?
AI: It can use either mode. When writing to the chip, it does not matter if clock idles low or high, the falling edges still load data in. For reading, the chip allows you to specify which clock edge you want to use for reading. |
H: SMPS load frequency to switching frequency
If the frequency of the load exceeds the SMPS switching frequency, what are the results?
AI: If what you mean is the frequency of the current drawn by the load...
Like any regulator, your SMPS controls its output impedance via feedback up to a certain frequency that is lower than the switching frequency. How much lower depends on the design. Above that, its output impedance is determined by the output capacitors.
So the regulator/SMPS handles the load current up to the frequency where output caps take over. |
H: SparkFun Power Driver Shield Kit for controlling 60 V input
I just bought the SparkFun Power Driver Shield Kit. I read in the specifications that it is possible to control up to 60 V using PWM. I have a project where I need to control 60 V with PWM, so I bought this card, but got a bit confused when I received it. I thought it would simply be possible to connect my 60 V supply to the card and control the output with PWM, but I got confused where it says 12 V and 5 V.
Is it not possible to use a voltage higher than 12 V with the card? My idea was to use it basically as a dimmer, controlling the voltage between 0 and 60 V with the help of PWM.
Did I misunderstand something about the concept?
AI: The "5V" and "12V" notation refers to using a standard PC power supply to run the project, via the 24-pin ATX connector. See the schematic here.
If you omit D1, LED1 and LED2 from your assembly, you can apply whatever voltage you like to the "5V" and "12V" pins of the 24-pin ATX connector. |
H: solder/join single copper filament to contacts (speaker tweeter)
As a result of the careful shipping of UPS, my speakers arrived with broken tweeters. One tweeter in particular has had its connections severed when the plastic support broke. I need to re-connect the filaments.
In this picture the two single filaments coming out of the tweeter are visible.
Those filaments need to be connected back to the contacts' plate that broke off (the plastic plate itself has since been reattached with glue):
There is about 1 cm gap from when the filaments are broken to the metal connection on the plate.
I have been told that the filaments were covered in enamel, and to scrape it off with a scalpel. I tried, as best as I could; hopefully I managed to remove the enamel but I am not sure. The filaments are so small I am worried they would break if I scrape more.
I have a small soldering iron, like this: https://www.clasohlson.com/no/Cocraft-loddepenn-HS-30L/p/41-1364
What would be the best way to do this now:
Bridge the filaments to the metal connections using a filament from electrical wire
Bridge it by making a strip of soldering tin
other suggestions?
Thank you
AI: Other: Rebuild the broken terminal piece, but place it closer to the existing wire ends. I guess you can fasten it well with epoxy. Solder the existing wire ends. Add a couple of drops of some acid-free, permanently soft silicone material which dampens vibrations, so that your new solder joints are not subjected to any bending that could travel along the wires.
The thickness of the dampening material around the filaments should grow slowly and gradually towards the soldered area to be effective. |
H: In semiconductor physics, how can we know which units to use for Boltzmann's constant?
I'm working through Example 3.9 from Sima Dimitrijev's Principles of Semiconductor Devices textbook, and I'm not sure how to know which units to use for k in solving for \$v_{th}\$ in part a. The result there is obtainable using the Boltzmann constant k with units in J/K. So far in this textbook we've been using k with units in eV/K. I need insight into knowing which version of k to use, as both versions lead to a final answer with the same units.
When I divide eV/J in Wolfram|Alpha I get a unitless number \$1.602 \times 10^{-19}\$, which I recognize as the value of electron charge with units in Coulombs. I feel like this is key to understanding my question but I'm not quite there. Any help is greatly appreciated!
AI: As I said in the comments, I hate that they don't teach you how to properly handle units in schools...
You know how 1 kJ is 1000 J, or 1 foot is 12 inches? In the same way, 1 J is 6.242·10¹⁸ eV, and 1 eV is 1.602·10⁻¹⁹ (= 1/6.242·10¹⁸) J. The fact that you can convert one to another means they measure the same dimension, but the fact that they aren't equal, that that conversion factor isn't 1, says that they're different units. |
H: How to calculate "mean instant power"?
I have a homework assignment that gives me a headache. I have a circuit with a 2000 W heater under all-or-nothing regulation using a thermostat.
The heater runs for 1/2 hour and then stops for 1 h, from 6 pm to 6 am the next day, so I calculated the duty cycle ratio (time powered / (time powered + time not powered)) = (8*30)/((8*30)+(9*60)) ≈ 0.31.
I need to calculate the mean instant power of that circuit. I came to the conclusion that this value would be nominal power * duty cycle ratio, but I'm not at all sure about that.
Am I right with my calculation?
AI: "Mean instant" is an oxymoron.
It is 2kW at the instant it is ON and 0 at the instant it is OFF.
But 0.5 h ON and 1 h OFF in a 1.5h cycle for any duration is 1/3 duty cycle so your mean power is 2/3 kW. |
H: Circuit amps vs device amps
I've always struggled with some really basic questions around amperage. Like, if you're plugging a 2 amp device into a wall socket, but that circuit is a 20 amp circuit, why doesn't it blow up?
Then I realized, and want to ask this question:
In reference to circuits in a building, like a standard 20 A 120 V circuit, is that referring to amperage capacity, while the amperage on my phone charger refers to the actual amperage drawn? If so, why is it never explained? I know this is really simple, but I've never understood it and wish somebody had said it to me years ago.
Basically, a 20a circuit can provide UP TO 20 amp, and a device draws what it says it requires.
AI: Basically, a 20a circuit can provide UP TO 20 amp, and a device draws what it says it requires.
That's it. You've answered your own question.
To add on an extra layer - that is true for any constant-voltage distribution system; AC, DC, whatever. The way to analyze it is that in such a system, the output voltage of a theoretically perfect voltage source never changes no matter what the load is, and the output impedance of the source is zero ohms, or a very low number. The output impedance is essentially a very small resistor in series with the output of the perfect power source. The complete circuit is the voltage source, its output impedance, and the effective impedance of the load device (light bulb, vacuum cleaner, whatever), all in series to GND.
Thus, the source's output impedance and the load form a voltage divider; the midpoint of the divider is the effective output voltage of the source as seen by the load. With Ohm's Law, you can see that the load impedance must be very low before the voltage it sees begins to sag. This means that the current through the circuit is almost completely dependent on the impedance of the load, from a cell phone charger all the way up to an electric clothes dryer.
There are other types of systems, such as constant current and constant power, but those are not nearly as common, and more complex to analyze. |
H: USB3.x and AC-Coupling Caps on the RX lines
In the USB3.2 Specification (Sept. 22, 2017) Table 6-18 and Table 6-22 give some characteristics for the TX and RX lines.
Basically, I understand that the TX lines have AC-coupling capacitors to block DC and let just the AC signals pass. How is the min/max (75nF to 265nF) decided? What if my design has 10nF? Could this somehow affect USB3 enumeration?
The spec also mentions that having AC-Coupling on the RX lines is optional (because the device-side already has series capacitors on its Tx lines, which connect to the Host's Rx lines, so it isn't needed for DC blocking).
But it still states you can have up to ~330nF at the RX side... I don't really get where this comes from.
This note states:
We recommend 330nF capacitors in order to meet the minimum capacitance
requirement (75nF to 265nF) for RX detection.
I don't understand this note, and how having this capacitance can help with Rx detection...(I am working on a USB3 design where we are seeing some enumeration/fallback issues, and wondering if it might have something to do with these capacitance values)
AI: The minimum and maximum capacitance are decided by the characteristics of the transmitted data, whether considered in the time or frequency domain.
At least 75 nF of capacitance is necessary to pass low enough frequencies, or long strings of the same bit with no transitions, so that there is not too much droop. A smaller capacitance will droop faster and will track common-mode or DC bias voltage changes faster.
At most 265 nF is enough; there is no reason to have higher capacitance, since it does not need to pass lower frequencies or longer transition-free bit strings. Higher capacitance also takes longer to charge from the idle state. 265 nF is also large enough that common-mode or DC bias voltage changes happen slowly enough not to be misinterpreted.
The reason why a 330nF capacitor is suggested is that capacitors have tolerance. A cheap ceramic X7R capacitor typically used for high speed AC coupling might typically have +/- 20% tolerance. As it happens, 80% of 330nF is 264 nF, so they specify a large enough capacitor to include the tolerance of the capacitor.
If you have 10nF capacitors in your design, that is way out of specification, almost a factor of 10 below the minimum specified capacitance and it could easily cause the problems you are seeing. |
H: I like to know the details of how capacitors work?
After all these years I am back at the starting position of how capacitors work. I don't know whether to laugh or cry.
Okay even before that. When I hook up two wires, one on each terminal of a 12V DC battery and I measure the voltage difference at the end of the wires, it reads 12V. That goes without saying.
Now we have learned, at least I seem to have, that no current of charges moves at all in an open circuit. So what moved from the battery to the ends of the wires?
If I place an ammeter in any of the two wires also before I test the voltage at the end of the two wires, I must not read any current in the ammeter but I assume I would read volts in the voltmeter.
Since I certainly read volts, it must imply that energy ( I had to throw in that word) moves without any charge movement, unless charges actually do move in an open circuit.
Capacitors are by design an open circuit, no matter how close the plates are. To my understanding, that is the whole idea: not to allow charges to move through.
So you can see what my mind is looking at here.
And capacitors are not like a battery where the plates are connected by the fluid/electrolyte in some respect.
( I realize also that Antennas are basically open circuits....)
Furthermore, the current of charges is at a maximum in a capacitor, and it is described as a short circuit, assuming no resistors are there.
Can anyone explain this without using formulas?
Thanks in advance.
AI: Since I certainly read volts, it must imply that energy ( I had to
throw in that word) moves without any charge movement, unless charges
actually do move in an open circuit.
I don't see how the reading of volts implies that energy has moved. (Unless you take the finite impedance of the voltmeter into account, but then you must acknowledge that charge has moved.)
Capacitors are by design an open circuit, no matter how close
the plates are. To my understanding, that is the whole idea: not to allow
charges to move through.
In a static situation, capacitors are indeed an open circuit. No doubt about that.
( I realize also that Antennas are basically open circuits....)
Like capacitors, they are open circuits in a static situation. Antennas do not transmit DC.
I think you are confusing static and dynamic behaviour. Consider a capacitor as a reservoir of charge. Each conductor has this property, but capacitors are designed to have more of this property. Likewise, inductors are a reservoir of current (much like a moving object has momentum), but in a static situation an inductor behaves just as a wire.
To put it into different words: a capacitor is something that hinders change of voltage (over the capacitor), and an inductor is something that hinders change of current (through the inductor). |
H: STM32 HAL VS dual 595 driven 7 segment display
I got this arduino shield with four 7 segment display what I'm trying to make work with STM32 nucleo. The displays are driven with two 74HC595 shift registers.
I wrote the following code to control it, but somehow it doesn't work, always shows the same pattern whatever I modify in the array.
int digits[16] = {1,1,1,1,1,0,1,0,1,0,1,1,0,1,1,0};
while (1) {
    HAL_GPIO_WritePin(STCP_GPIO_Port, STCP_Pin, GPIO_PIN_RESET);
    for (int i = 0; i >= 15; i--) {
        HAL_GPIO_WritePin(SHCP_GPIO_Port, SHCP_Pin, GPIO_PIN_RESET);
        if (digits[i] == 1) {
            HAL_GPIO_WritePin(DS_GPIO_Port, DS_Pin, GPIO_PIN_SET);
        } else {
            HAL_GPIO_WritePin(DS_GPIO_Port, DS_Pin, GPIO_PIN_RESET);
        }
        HAL_GPIO_WritePin(SHCP_GPIO_Port, SHCP_Pin, GPIO_PIN_SET);
    }
    HAL_GPIO_WritePin(STCP_GPIO_Port, STCP_Pin, GPIO_PIN_SET);
}
SHCP: Shift register clock
STCP: Storage register Clock
DS: Data
What could be the problem?
AI: Your for loop has an error: it initializes i to 0, but the condition i >= 15 is false on the very first test, so the loop body never executes and no data bits are ever shifted out; only the latch line toggles, which is why the display never changes no matter what you put in the array. You almost certainly meant to count down from 15: for (int i = 15; i >= 0; i--).
H: Will my deep cycle battery protect itself from full depletion?
I have a 12 V, 55 Ah lead acid deep cycle battery that I am using to power a small cooler/fridge that will hold medicine for a community project that I'm working on. The goal is to provide continuous power to the fridge despite any power outages caused by natural disasters, and there will eventually be a solar system connected to lessen the load on the battery and increase the uptime. The full system will also have a switching module that will automatically detect when the grid (wall outlet) stops working and will switch to this battery (it will also do this in the other direction), like a UPS but much cheaper due to budget constraints.
Now, I am going to do an experiment where I run the 12 V, ~48 Wh cooler directly from the battery and log the voltage vs. time across the terminals so that I can tell how long the battery can power the cooler before it turns off and compare that to the expected result.
I have found plenty of tables on Google showing depth-of-discharge percentage vs. battery voltage, and they say the voltage shouldn't drop below 12 V until the battery is about 60% depleted. This is desirable because I do NOT want to completely deplete the battery and reduce its lifespan.
Here is what I am unsure of, the fridge operating manual does not provide any information about the lowest operating voltage. Can I assume that the lowest operating voltage is close to 12 V, and that the fridge will shut off once the battery reaches, say, 11.5 V? If yes, then I am good to go. If not, then what is an average operating voltage range for a system that is designed for 12 V?
(I know that it is likely very dependent on the system, but I am looking for more of an anecdotal answer. If an answer is not obvious, then I can attempt to use a variable power supply in the lab to find out the voltage at which the fridge stops drawing current, though I am asking here because I'm in a bit of a time crunch to assemble the first prototype and would like to be able to run this experiment as soon as possible)
Edit: for reference, here is the only information I could find about the cooler https://secure.img1-fg.wfcdn.com/docresources/0/156/1564815.pdf
AI: Yes, deep-cycle batteries can handle many more deep-discharge cycles than the ~50 of ordinary flooded-cell lead-acid batteries (some of which barely survive a few).
But under-voltage still oxidizes the calcium-lead plates with a thin coating over time.
This can be reversed with very fast rise-time pulse charging, but otherwise they become high-ESR and unable to provide the amps you need. |
H: Input Impedance of the Inverting Op Amp
I'm having a hard time understanding how to calculate input impedance of the inverting op amp.
First, which two points do I need to calculate the impedance between? Is it between the two points indicated by the red line on the picture above?
In this case I don't understand why the impedance is equal to Rin. It seems to me that it should be infinite because the impedance between the op amp inputs is infinite. What am I missing here?
Thank you!
AI: First, which two points do I need to calculate the impedance between? Is it between the two points indicated by the red line on the picture above?
Yes, this is the circuit input impedance (between the left Rin end and ground) that is different from the op-amp input impedance (between its two inputs).
It seems to me that it should be infinite because the impedance between the op amp inputs is infinite.
Really, the op-amp input impedance is infinite... but only if the op-amp were standalone. Note that here a network consisting of two elements in series (Rf and the op-amp output) shunts the op-amp differential input (ie, it is connected between the inputs). So this network determines the resistance between the op-amp inputs. Let's see what its resistance is...
The input current flows through Rf and creates a voltage drop VRf = Iin.Rf. To keep the voltage at the inverting input equal to zero (obeying the so-called "golden rule"), the op-amp adjusts its output voltage equal to the voltage drop across Rf (Vout = -Iin.Rf) and adds it in series. The op-amp output serves as a variable voltage source that copies the voltage drop across Rf and removes it. As a result, the differential voltage between the op-amp inputs is zero (Iin.Rf - Iin.Rf = 0). Thus there is current flowing but there is no voltage... so the resistance of this network (and between the op-amp inputs) is zero... virtual zero. Figuratively speaking, the inputs are short connected by something like a "piece of wire".
So, the conclusion is that the circuit input impedance is determined only by Rin.
The conceptual picture below illustrates my explanations. Pay attention to something very important for understanding the circuit - the four elements (two voltage sources and two resistors) are connected in a loop and the same current flows through them (its trajectory is drawn in green). Also note another very important property of this configuration - the two voltages Vin and Vout have the same polarity when travelling the loop; so they are summed according to KVL.
Try to grasp the idea; if you have any questions, I will be happy to answer. I know it will be a little difficult for you to understand my slightly unconventional explanations... but if you succeed, the benefits will be great for you... You will know what the secret of op-amp inverting circuits is. For example, you can easily answer a similar question.
In the edit below, I have exposed some basics of my philosophy about negative feedback circuits as a response to AnalogKid's updates.
UPDATE 1
… the opamp does whatever it takes to keep its two inputs at the same voltage.
Undisturbed follower. Although it is possible for an op-amp to change the voltages of both its inputs (for example, in an NIC), in most cases it only changes the voltage of its inverting input so that it (always) follows the voltage of its non-inverting input. The latter is permanently zero (in the case of the inverting amplifier) or is initially zero (in the case of the non-inverting amplifier). So, by its nature, the op-amp circuit with negative feedback is a zero voltage follower. Its simplest implementation consists of only one op-amp whose output is connected to its inverting input.
Disturbed follower. From now on, each new element inserted (resistor, capacitor, diode, transistor, etc.) or voltage or current applied (Vin, Iin) acts as a "disturbance" for this initial follower since it tries to change the zero voltage reference. The op-amp reacts to the disturbance to overcome it and we take its reaction as an output. In this way, all possible op amp circuits with negative feedback can be obtained by intentionally disturbing them.
So whatever the current is through Rin, the opamp drives Rf such that an equal but opposite (because it is the ((inverting)) input) current flows into the node, and the voltage at the Rin-Rf node just sits there at 0 V.
I would say: Since the input voltage source pushes a current through R1 into the node but the op-amp draws the same current through R2 from the node (or the input source draws a current via R1 from the node but the op-amp pushes the same current via R2 to the node), the voltage of the node does not change.
From another point of view, this 4-element configuration can be seen as a balanced bridge.
UPDATE 2
Because there is a real, physical delay between the inputs and output of an opamp...
Exactly! Strange as it may seem, it is this delay that makes it possible to explain the circuit operation. If we consider the op-amp as a device without delay (Vout = k.Vin), we fall into a vicious circle.
The input changes; let's say it goes a bit negative...
So, when Vin changes (eg, decreases), in the first moment, Vout does not change… and the voltage divider R1-R2 is driven from the left side by Vin. After a while, the op-amp responds to the change by starting to increase its output voltage… and now the voltage divider R1-R2 is driven from the right side by Vout.
UPDATE 3
But a more pedantically correct term would be "virtual reference potential".
Exactly! "Virtual ground" is a voltage source whose voltage is a "copy" of another (reference) voltage (that can be zero). Figuratively speaking, virtual ground is a clone of another but real ground.
In 2007, I put a lot of effort into finding out what virtual ground really is and telling it on Wikipedia. Here is an old revision of the page and a heated discussion on the talk page. In the end, the page was trimmed and is now in a miserable state (Wikipedia EE is another place where there are terrible people; the only strange thing is how they are allowed to run wild).
COMMENT 1
If the op amp has an open loop gain of 1 million, and your circuit has a gain of 10, then the input signal is effectively attenuated by a factor of 100,000 at the input, and then amplified by 1 million by the device, for a circuit gain of 10.
Original representation of the negative feedback arrangement that I have never seen! I use two approaches to presenting it:
Amplifier. If the amplifier used to build the negative feedback circuit has a relatively small gain, I consider it as an amplifier with some moderate gain; then its input voltage cannot be ignored. This is the case in transistor circuits.
Integrator. If the amplifier used to build the negative feedback circuit has an extremely high gain, I simply consider it as an integrator… and I do not talk about gain at all. This is the case in op-amp circuits. |
H: How can I create feedback measurement with an operational amplifier? 0-20mA measurement
I want to build something like this.
This circuit controls 0-20 mA, but it has one big problem: if the load changes, e.g. from 100 Ohm to 200 Ohm, less current will flow. To compensate, I need to re-tune the feedback for the operational amplifier by turning the potentiometer.
So I have an idea! What if I measure the current through a small resistor, e.g. 25 Ohm, and feed the voltage across it back as the feedback signal? Then with +3.3 V as the reference value, I should have 20 mA of current flowing in the circuit.
Like this. But even if I tune the current to 20 mA when I have +3.3 V as reference, the current in the circuit WILL still change if I change the load.
Question:
How should I build up this feedback circuit so when I have +3.3V as reference input signal, then 20mA will flow through the 25 ohm resistor, no matter what arbitrary load we have?
AI: If your load resistor Rload can be "floated" so that neither end is grounded, the circuit becomes very simple.
As you say, a current-sense resistor (120 ohm) is used to sample the transistor's emitter current; the collector current is very slightly smaller. The opamp must be a "rail-to-rail" type, because output and input voltages can range anywhere from 0 to 3.3 V:
simulate this circuit – Schematic created using CircuitLab
However, if one end of your load must be grounded, the circuit becomes a little more complex. |
H: How do I invoke pymcuprog?
I have successfully created an ATtiny406 binary using avr-gcc in Linux, and I can program it to the device via Atmel-ICE under Atmel Studio in Windows, and it behaves correctly.
I would prefer to be able to program the device using my Atmel-ICE in Linux. So I did pip3 install pymcuprog-3.7.4.95-py3-none-any.whl and it says:
Successfully installed appdirs cython hidapi intelhex
pyedbglib pymcuprog pyserial pyyaml setuptools-20.7.0
It seems that the packages were installed... but how do I run the utility? As suggested on the pymcuprog website, I tried pymcuprog --help but that said
pymcuprog: Command not found.
I also tried python3.5 -m pymcuprog --help but that said:
/usr/bin/python3.5: No module named pymcuprog.__main__;
'pymcuprog' is a package and cannot be directly executed
I also tried python3.6 -m pymcuprog --help but this is even worse:
/usr/bin/python3.6: No module named pymcuprog
I don't think Ubuntu16 supports Pip in Python 3.6 (I am at the latest python-pip3, and it seems to be Python3.5). In my experience, it's best not to go beyond versions tested by Ubuntu.
Regardless, it seems that pip3 installed the pymcuprog modules in Python3.5 without complaining. I thought pip3 would install the main module as well? (it has worked for me in the past, on other software). If not, then how do I get the main module and run it?
AI: Protip: never run pip3 install without --user when you're not working in a special prefix! Otherwise it installs packages in system-wide locations, which differ between systems and are sometimes (as is the case here) not even on the default search path.
So, uninstall what you've installed using pip3 again, then set up a Python virtual env:
python3 -m venv cloudsenvironment
Then activate it:
source cloudsenvironment/bin/activate
Then you can run pip3 without danger of polluting or breaking your system. Activation adjusts a few paths, so that in the shell you ran that command from, commands and Python modules are now found in the folder created with the venv. Try it out! |
H: Electrical Circuits Analysis Help
Am I correct? Or does D= V1-20, which allows the value of V1 to be positive?
AI: Yep, V1 minus 20 in the equation makes more sense.
If V1 were negative, that would mean current flows into the node from every branch, which is impossible by KCL. |
H: ADC is giving fault codes when I touch thermistor attached to it
I am working on writing a control program for my pellet smoker. I am having an issue with the meat thermometer that came with the smoker.
I have a MAXIM 31865 ADC that is connected to a Raspberry Pi. Attached to the ADC I have a thermistor that is used as a meat thermometer. This thermistor (unknown type/manufacturer) is inside of a metal sheath and has 3 wires that were previously connected to the smoker. There are 2 internal wires that connect to the thermistor and then the metal cable sheath itself was also connected to the original control board on the smoker.
I only have the 2 internal wires that actually connect to the thermistor connected to my ADC. The third wire (thermistor sheath/cable sheath) is not connected to anything. If the thermometer is sitting on the grill or on my desk I am able to get an accurate measurement from the ADC. However, if I pick up the thermistor or touch it more than just lightly brushing it I instantly get an overvoltage/undervoltage error on the MAX31865 ADC that doesn't go away until I put down the probe.
I assume I need to ground that third wire by clipping it to a metal surface on the grill or back through the power supply, but I am trying to understand what is happening here. I tried measuring for a voltage drop/increase while touching the thermometer and I couldn't see anything happening when measuring with a multimeter, and I don't understand why it would be a problem anyway.
(I am not an electrical engineer, just a hobbyist.)
Image of how it is wired I put together before I started.
The black triangles go to a strip of copper on the solderable breadboard. The empty triangles go to a different strip of copper, and then the grounds are merged to a single ground pin on the Raspberry Pi. The small black boxes are ferrite beads. I pretty much just followed exactly what the Max31865 docs said, but since I had a single power source I split it with a ferrite bead between the source and the analog voltage input.
I used a 350k Ohm reference resistor because the thermistor has a resistance of around 300k Ohms at 0 degrees celsius.
AI: You need to ground the shielding (sheath). This is what keeps you from capacitively coupling noise into your high impedance inputs. Ground reference to the ADC is important. If you don't ground the ADC circuit board, at least put a 1 MOhm resistor to ground to provide some reference.
Edit: There may not be a good reason to keep your digital and analog grounds separate since the analog and digital VDD are the same. The shielding of your probe should be connected directly to the analog ground. Still, if you want the isolation, put the 1 MOhm between the analog and digital grounds. |
H: Beginner problem: Debugging power supply
I am debugging my power supply using an oscilloscope. I have uploaded pictures of my setup, including oscilloscope reading, wiring (just power and ground with jumpers), and the power supply specs. The power supply is supposed to output 5V 10A DC, but the oscilloscope at 1X is reading ~480mV. Is this power supply bad, is there something wrong with my setup, am I reading the oscilloscope incorrectly, or is it something else?
AI: Agree about the probe.
Also, switching power supplies, especially low-cost ones like yours, usually have a minimum load requirement. 10% is a common requirement "to stabilize the magnetics" (ooooold app note phrase). Without this, the supply is not guaranteed to maintain regulation. |
H: Solar panel charging issue
I have a 160W solar panel that has a built in regulator that I have hooked up to a 100Ah 4 cell Lithium Ferro Phosphate battery.
When I connect them, the panel starts charging at anywhere between 5 and 7 amps and gets the battery to full without issues. The problem I have is that once night has passed, the panel will not charge the battery unless I unplug it from the battery first. The battery box voltage meter sits at 13.4 V when fully charged, and with the 12 V fridge running on it through the night it got down to 13.1 V. I waited until midday and the voltage remained at 13.1 V even though the panel was in full sun by that time, and my inline amp meter displayed 0 amps the few times I checked it throughout the morning. Strangely, with the fridge still running, the battery voltage did not drop further either, though it had only dropped 0.3 V over the whole prior evening and night. When I disconnected the panel and reconnected it immediately, it started charging at 6 amps. While it was charging, I then covered the panel and it went to 0 amps as expected. When I removed the cover, it started charging just fine. I am at a loss as to why it won't start charging by itself in the morning.
EDIT
After a comment about the potentially non standard battery voltage got me a little worried, I have added the information I could find.
The battery spec sheet
AI: As described, this is normal operation. But I think the charging profile is not suitable for lithium batteries.
In fact, there are different charging profiles.
When you first apply the solar power the battery will be charged to a threshold value which is 13.4V in your case. Then the controller will wait till the voltage drops below another threshold (which is below 13.1V in your case).
Charging profiles change depending on the type of the battery. The following is for instance the profile used by Deepsea chargers (not designed for lithium batteries):
The float voltage definition fits your 13.1 V.
To find out if it is really working as needed, apply a continuous load and wait to check below which threshold it will restart charging.
The following graph is an example of a charging profile for a lithium battery that is not voltage float-charged; the current is kept constant instead of the voltage (here the voltages are different than in your case):
Some battery chargers are profile configurable either by software or simply by a button like the following:
Choosing the right profile increases the lifetime of the battery.
Since you edited your post to add the battery type. Here is one of the suitable profiles included also in a Deepsea charger for Lithium Phosphate:
In fact, in this case the float charge is terminated when the charging current is too low; then, after either the charge-termination timer elapses or the battery is discharged, the cycle restarts.
Concerning your controller, the manual clearly states that the battery type is "any 12V DC battery that is used for cars, boats, motor cycles, etc...", so it mainly targets lead-acid batteries and the charging profile suits that chemistry. That is not to say it won't work in your case, but it is not the optimal profile for the lifetime of your battery.
H: Can we program our smartphone camera to detect the frequency and time period of IR light?
My use case is this: I am emitting morse code using a sole IR light source, and I intend to detect the frequency and the time period of the light source using my smartphone camera. Can this be done?
AI: It depends on whether the smartphone camera actually sees IR. Most seem to these days.
More here: https://petapixel.com/2019/08/22/how-i-shoot-infrared-photography-with-a-smartphone/
This article shows the Samsung S10E, which apparently works. I looked into whether iPhones can see IR; it seems only the front camera can, but not the back, due to the back camera having an IR filter as part of its optics.
At any rate, you can test this on your own phone with just an IR TV remote and smartphone video: if the camera can see IR you will see flashing when the remote is pressed. |
H: Purpose of resistor to ground in video-amp circuit
I was watching how to make a nintoaster, that is, an NES (Nintendo Entertainment System) in the form of a toaster. The video is here. One of the things he does in the video is building a video-amp circuit for the video signals from the NES. The circuit looks like this (13:40 in the video).
I believe I understand somewhat what this circuit does (but feel free to point out if I'm wrong). The "VIDEO IN" signal controls the state of the transistor, that is, how much current flows through it. The \$33\Omega\$ resistor is there to limit the current, and "VIDEO OUT" is the output signal.
But I'm not sure why there is a \$220\Omega \$ connected to ground. What is the purpose of that resistor, if it even has any?
AI: But I'm not sure why there is a 220Ω connected to ground. What is the
purpose of that resistor, if it even has any?
For any BJT amplifier you have to have a standing (or quiescent) DC current that passes from the positive supply (in the case of an NPN BJT), through the collector to the emitter and then to ground; in your case through the 33 Ω and 220 Ω resistors. It won't work without that standing BJT current. That current biases the transistor into correct operation. |
H: SAW resonator oscillator operating principle
I can't for the life of me figure out how the following SAW oscillator works. I can't see how the oscillation would be maintained since it doesn't use a feedback mechanism I'm familiar with. This post could prove useful.
So here is what I was thinking:
L2 blocks high frequency components from coupling back to the power supply, though I'm not sure why this would happen (150nH from dimensions given).
C3 is a supply bypass cap, which likely has something to do with L2 though I'm not sure what. On some boards this doesn't seem to be populated (unknown value).
L1 presents a high impedance at the oscillation frequency (35nH from dimensions given).
T1 is an RF transistor in a Class C amplifier configuration.
R1 sets the transistor bias and possibly the transmit power.
R2 and T2 enables the circuit.
C2 presents a low impedance path to the antenna to remove the DC component (unknown value).
I'm not sure what C1 is for, and it seems to be unpopulated on some of the boards. For the SAW resonator, the only thing I can think is that when turned on it rings for a bit then dies down, so continuous transmission wouldn't be possible.
I'm hoping someone could shed some light on my assumptions listed above, and maybe on how the component values were selected.
AI: C1, C3 and the SAW filter form a pi filter that, in conjunction with the output impedance of T1's collector, produces a perfect 180° phase shift from T1's collector to its base at one particular frequency. Given that T1 is configured as a common emitter amplifier, it inherently produces signal inversion (another 180° of phase shift); together, you get 360° of phase shift at one particular frequency and the circuit oscillates when T2 (the modulation transistor) is activated.
360° of phase shift is of course the same as 0° of phase shift and that is part of the criteria needed to make a successful oscillator.
The SAW filter very much behaves like a proper crystal and here is a full tear-down of how a crystal oscillator works. Your circuit is quite similar to the Pierce Oscillator that is commonly used in crystal oscillators. It's basically a spin-off from the Colpitts Oscillator. |
H: Digital output of the STM32L152RE: toggling between +/- V?
Is it possible to get the digital output of the STM32L152RE microcontroller to toggle between +/- volts, where +V represents logic 1 and -V represents logic 0, instead of the conventional 0 V as logic 0 and +V as logic 1?
AI: No. An MCU can't output voltages other than the supply rails it has, so it will only output 0 V and 3.3 V. Generating ±V levels requires external circuitry, such as an RS-232 style transceiver or another level shifter.
H: Ultrasonic HC-SR04 sensor, why does it even have a dead-zone of 2cm?
So I have a hard time understanding why the HC-SR04 has a dead-zone of 2 cm.
I understand why single-transducer ultrasonic distance sensors have a dead zone, as they need time to switch from transmitting to receiving. But this is not the case with the HC-SR04, as it has both a transmitting and a receiving transducer.
Anyway, if this is due to the fact that the wavelength at 40 kHz is around 0.86 cm and that it emits a burst of 8 full cycles (see the datasheet), shouldn't it then be 0.86 × 8 / 2 ≈ 3.4 cm?
AI: Probably the physical layout and size of the transducers. Sound gets more directional as frequency rises, and the receiver needs to be able to catch the echo. The transducers look to be about 3 cm apart (judging by the 0.1" connector). The transducers have a polar pattern, which is almost certainly the cause of the range restriction.
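For what it's worth, the burst-length arithmetic from the question roughly holds up too; a quick check (assuming ~343 m/s speed of sound in air at room temperature):

```python
# Checking the question's dead-zone arithmetic for the HC-SR04.
SPEED_OF_SOUND = 343.0   # m/s, assumed room-temperature value
FREQ = 40_000            # Hz, transducer frequency
BURST_CYCLES = 8         # cycles per burst, per the datasheet

wavelength_cm = SPEED_OF_SOUND / FREQ * 100        # ~0.86 cm per cycle
burst_length_cm = wavelength_cm * BURST_CYCLES     # ~6.9 cm of sound in flight
min_range_cm = burst_length_cm / 2                 # round trip halves it: ~3.4 cm
```

So burst geometry alone predicts a floor of roughly 3.4 cm, on top of whatever the transducer polar patterns impose.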
H: LTspice error: No AC stimulus found
I am using LTspice to simulate the AC analysis of a circuit. When I open the netlist of the circuit in LTspice and run the simulation, I get the following error:
An example of the netlist that I'm using is:
I have placed an independent current source between nodes 5 and 1 in the first line of the netlist: I1 5 1 AC 1. I want to know why am I getting this error and how to correct it?
AI: I think the first line in a netlist must be a comment: LTspice, like classic SPICE, treats the first line of a netlist as a title and ignores it. Your I1 5 1 AC 1 source on the first line is therefore never parsed, so the simulator finds no AC stimulus. Add a title line above it.
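For illustration, a minimal netlist with a proper title line (the RC circuit and values here are arbitrary placeholders, not your circuit):

```
* AC analysis example - LTspice ignores this first line as the title
I1 0 1 AC 1
R1 1 0 1k
C1 1 0 1u
.ac dec 10 1 100k
.end
```

If the I1 line were first, LTspice would swallow it as the title and the only AC source in the netlist would disappear.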
H: Why for setup check AND gates use rising edge, while OR gates use falling edge and vice versa for hold check in clock gating?
I have two questions on set_clock_gating_check SDC command.
Why, for the setup check, do AND and NAND gates use the rising edge, while OR and NOR gates use the falling edge?
Why, for the hold check, do AND and NAND gates use the falling edge, while OR and NOR gates use the rising edge?
AI: For an AND or NAND gate, the output is known if any one of the inputs is 0, so an input that is 0 can be said to control the output; that input becomes non-controlling when it changes to 1. The rising edge of the clock therefore changes the clock signal from a controlling input to a non-controlling input.
The setup check is the duration for which the inputs must remain stable before the active edge of the clock, while the hold check is the duration for which the inputs must remain stable after the active edge. Here, when the clock signal is 0 it controls the AND/NAND gate, and when it is 1 control passes to the other inputs. So when the clock is 1 the AND/NAND gate can be said to be enabled, and when the clock is 0 it is disabled.
For the setup check we therefore consider the rising edge of the clock. For the hold check we consider the duration for which the inputs must be stable while the gate is disabled, and for the AND/NAND gate that begins on the falling edge of the clock.
Similarly, for OR and NOR gates the output is known if any one of the inputs is 1, so the same reasoning applies with the edges reversed.
H: 4-20mA sensor to a microcontroller
I need to connect a sensor with a 4-20 mA loop to a microcontroller.
The MCU operates at 3.3V and the following image is from the sensor's datasheet.
I'm planning on converting the 4-20mA to a 0.6-3V range to be measured with the internal ADC.
I'm trying to figure out if the setup would work if I power the sensor via a 12V boost.
From what I understand it should be OK, as the drops would be 3 V (at max output) plus the 6 V required for the sensor's operation. Or would providing just 6 V be enough?
UPDATE
Adding sensor's datasheet
https://www.calex.co.uk/site/wp-content/uploads/2015/07/pyrocouple-infrared-temperature-sensor-data-sheet.pdf
AI: Your thinking is correct that you need 3 V on top of the 6 V, so providing just 6 V wouldn't work out as the voltage for the sensor in the loop would drop below 6 V.
The maximum loop impedance is given for a supply of 24 V. It's actually \$R_{max} = \frac{V_{supply}-6~\text{V}}{20~\text{mA}}\$.
You can turn this around to get the minimum supply: \${V_{supply}}_{min} = 6~\text{V} + 20~\text{mA} * R_{loop}\$. In your case that is 150 ohm, so 9 V.
You might also want to leave some space for error values like > 20.7 mA or something, depending on the sensor.
This is different for devices with an active output (3-wire, 4 wire). |
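As a quick numeric check (the 150 Ω sense resistor is implied by the question's 0.6-3 V range at 4-20 mA):

```python
# Minimum loop supply for a 2-wire 4-20 mA sensor that needs 6 V across itself.
SENSOR_MIN_V = 6.0   # V required by the sensor (datasheet)
I_MIN = 0.004        # A, zero-scale loop current
I_MAX = 0.020        # A, full-scale loop current
R_SENSE = 150.0      # ohm: maps 4-20 mA onto 0.6-3.0 V for the 3.3 V ADC

v_adc_min = I_MIN * R_SENSE               # 0.6 V at 4 mA
v_adc_max = I_MAX * R_SENSE               # 3.0 V at 20 mA
v_supply_min = SENSOR_MIN_V + v_adc_max   # 9.0 V, so a 12 V boost has margin
```

The 12 V boost therefore leaves about 3 V of headroom above the 9 V minimum, useful for covering over-range error currents.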
H: Memory row driver does not have enough driving power
I am building the row driver of a piece of RRAM. Unlike traditional memory, RRAM's cells are composed of resistor-like elements. I tried using a single inverter and a back-to-back inverter design (shown below), but they don't have enough driving power: the row line voltage is only about 25 µV when connected to a grounded 200 Ω resistor. Ideally, the output voltage should equal the pumped voltage (VDDP). How can I achieve this?
Thanks in advance for any advice!
AI: This is a common problem in VLSI design. You just need to add pairs of inverters in series, increasing the W/L ratios of the transistors by a factor of about 4X in each subsequent inverter as you get closer to the row line load. Each pair of inverters acts as a non-inverting buffer, but the drive strength increases by up to 16X.
The transistors in the inverter will start to get large, so you will probably need to use multiple parallel transistors (really just fingers of gate poly) to make the big inverters. |
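As a rough sketch of the tapering, assuming a fanout of 4 per stage as suggested above (the unit and load capacitances below are made-up illustrative numbers, not derived from the 200 Ω row load):

```python
import math

# Tapered-buffer sizing sketch: each inverter is ~4x wider than the previous.
FANOUT = 4
c_unit = 1.0     # input capacitance of a minimum-size inverter (arbitrary unit)
c_load = 200.0   # effective row-line load in the same unit (illustrative)

# Number of stages so each stage drives roughly FANOUT times its own input cap
stages = math.ceil(math.log(c_load / c_unit, FANOUT))
sizes = [FANOUT ** i for i in range(stages)]   # relative W/L of each inverter
```

With these numbers you get four stages of relative size 1, 4, 16, 64; an even stage count keeps the chain non-inverting, matching the inverter-pair idea in the answer.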
H: What does this multimeter tempco spec mean?
This is the accuracy spec for an Agilent 34401A 6-1/2 digit multimeter. What exactly does the "temperature coefficient" mean here? Does it mean that the reading will not change by more than 0.0005% (5 ppm) over the whole temperature range 0 to 18C? Or does it mean 5 ppm/C? And why are there two separate temperature ranges 0-18 and 28-55? (I understand that it gives % of reading plus % of range.)
AI: If you look at the manual for the 34401A multimeter, you will find that the specification you quoted in your question is missing some information. In the manual the same specs appear, except that in the last column the term "temperature coefficient" is directly followed by "/°C", which indicates that the numbers are to be taken as per °C. This makes sense, since the numbers apply over a range of 0-18 °C (a span of 18 °C) and a range of 28-55 °C (a span of 27 °C). A drift of 5 ppm per °C is a far more plausible interpretation than a total drift of 5 ppm over a 27 °C span (which would be only about 0.19 ppm per °C).
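To make the "/°C" reading concrete: the 5 ppm/°C figure is from the question, but the ambient temperature and reading below are illustrative values, not datasheet numbers.

```python
# Extra error contributed by the temperature coefficient outside the 18-28 C band.
TC_READING_PPM = 5.0    # ppm of reading per degC (the question's 0.0005 %)
CAL_BAND_HIGH = 28.0    # degC: tempco applies above this (and below 18 degC)

t_ambient = 45.0        # degC, illustrative operating temperature
reading = 10.0          # V, illustrative measured value

extra_ppm = TC_READING_PPM * (t_ambient - CAL_BAND_HIGH)   # 85 ppm of reading
extra_error_v = reading * extra_ppm * 1e-6                  # 0.85 mV on 10 V
```

This tempco error adds on top of the base accuracy spec, which applies only inside the 18-28 °C band.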
H: Why is the mechanical power of a DC brushed motor at a maximum at around 50% of the stall torque?
I don't currently fully understand how a DC brushed motor reaches maximum mechanical power at around 50% of its stall torque. I know that a decrease in the speed of the rotor will decrease back-EMF and in turn increase current, and thus increase power.
However, I don't exactly understand why the maximum power stops at around 50% of the stall torque. What determines the stop at 50%?
AI: It's because, for a DC motor, the torque curve and speed curve are both linear with slopes of opposite polarity, and both curves start and end at the same places: 100% torque = 0% speed, and 100% speed = 0% torque.
Those two facts mean that adding them up will always equal 100%. In particular, 50% of one corresponds to 50% of the other, and since power = torque x RPM, that is exactly where the product of the two is at its maximum.
At one extreme end, you have 100% torque but 0% speed, so the multiplication gives you zero output power.
At the other extreme end you have 100% speed but 0% torque, so the multiplication again gives you zero output power.
Taken from:
https://www.vexforum.com/t/motor-torque-speed-curves/21602
Geometrically, it is identical to why a square is the rectangle with the most area if the sum of all sides is a fixed number. If one side is speed and one side is torque with power being the area, and the sum of length and width must be 100, a square of 50 x 50 gives the most area. |
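The same result can be shown numerically with normalized linear torque-speed curves:

```python
# DC motor: speed falls linearly with torque; power = torque * speed.
# Peak power lands at half the stall torque.
T_STALL = 1.0   # N*m, normalized stall torque
W_FREE = 1.0    # rad/s, normalized free-run speed

def power(t):
    speed = W_FREE * (1 - t / T_STALL)   # linear speed-torque line
    return t * speed

# Sweep torque from 0 to stall and find where power peaks
samples = [i / 1000 * T_STALL for i in range(1001)]
t_peak = max(samples, key=power)   # 0.5, i.e. 50% of stall torque
```

With these normalized units the peak power is 0.25, exactly a quarter of (stall torque × free speed), which is the familiar DC-motor result.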
H: Creating a footprint for a component with plastic "bumps"
I think that I can best describe my question by giving two examples:
This is a TRS audio jack: https://uk.farnell.com/lumberg/1503-03/connector-receptacle-3-5mm-phono/dp/1216980. It has two plastic "bumps" (I don't know how those things are really called) and the datasheet clearly shows their location and dimensions.
This is a card edge connector: https://nz.rs-online.com/web/p/edge-connectors/6812238. It has four plastic "bumps" at the corners, and the datasheet says almost nothing about them.
So I assume that in the first case, the plastic things are for mechanical stability, they need to fit inside the pcb, and when I'm creating the footprint for the component I need to add (possibly NPTH?) pads for them.
And in the second case, I assume that the plastic things are for something else (spacing? I don't know?) and I don't need to include them in the footprint.
Is my line of reasoning correct?
AI: Yes, on both counts.
Pegs, like the ones on the audio jack, are common on surface-mount parts that undergo stress, like switches, pots, and jacks. Basically, you don't want to count on the solder pads to hold the thing in place; the pegs give it that much more mechanical stability.
Standoffs, like the ones on the edge-card connector, are not uncommon. They're there to give the solder room to wick up the legs of the connector a bit.
It's unfortunate that these things don't come with layout suggestions -- it's common for parts like this to come with an example footprint, including any guide holes. I'd be inclined to do a bit of digging for other sources -- that Lumberg part may have a footprint on the next page of the catalog or in a different document; the Tyco datasheet looks like part of a larger document that RS Components just hasn't bothered to show you.
H: Simulated bode plot in Simplis is different from the result of Mathcad
I built an open-loop-controlled buck converter in Simplis.
The parameters are listed below:
\$V_g=12V,V_{out}=5V\$
\$D=0.52\$
\$C=100\mu F,ESR=5m\Omega; L=30\mu H,ESR=5m\Omega\$
I used SIMPLIS to simulate the Bode plot of \$G_{v_d}\$, and Mathcad to analyze the same circuit, but the results differ: at low frequency the peak gain in SIMPLIS is less than 0 dB, while in Mathcad the gain is about 20 dB.
Here are the results I got.
the circuit in Simplis
the Gain bode plot in Simplis
the code in Mathcad
the curve in Mathcad
I don't know what mistake leads to this result. If someone knows the reason, would you please tell me? I am a freshman in power electronics and I appreciate your guidance.
AI: The control-to-outout transfer function of the CCM buck operated in voltage-mode control is the following:
Several comments regarding your results:
in Mathcad, you have missed the pulse-width modulator (PWM) gain, which is \$G_{PWM}=\frac{1}{V_p}\$ with \$V_p\$ the peak of the modulation ramp. This is your source \$V_3\$ in the SIMPLIS file. It must obviously appear in the control-to-output transfer function that you will use for analytical results.
the quality factor includes many contributors as you can see. In the formulas, these are values of passive elements such as the ohmic losses of the capacitor and inductor. In reality, other losses affect \$Q\$, such as the MOSFET \$r_{DS(on)}\$, the diode recovery losses and even magnetic losses. This is why you may see differences between your Mathcad analysis and the SIMPLIS one or a bench prototype. To make sure you have very close results between simulations and Mathcad, choose perfect switches in SIMPLIS (and not a MOSFET), 0-\$V_f\$ diodes, etc. Finally, you may be interested in importing SIMPLIS simulation results directly into Mathcad to superimpose magnitude and phase curves. See here for a tutorial.
Finally, you can have a look at the 60+ simulation templates I posted here for my next book on small-signal modeling of switching converters (TOC is here). Most of the examples work on Elements, the demo version and you can explore many structures. |
H: STM32 - Getting HardFault when assigning uint32_t value
Getting a hard fault when assigning this 32-bit value to the shown variable (val_32):
uint32_t val_32 = *((uint32_t *)( &buffer[0] + length - 4));
or
uint32_t val_32 = *((uint32_t *)( &buffer[length - 4] ));
Same thing but tried:
uint32_t val_32 = *((uint32_t *)(buffer + length - 4));
Declarations of the variables used;
uint8_t buffer[200];
uint8_t length;
System:
MCU = STM32F072C8T6
Keil v5.23
When I'm debugging, the hard fault occurs when stepping over this code. However, without stepping over it, if I put that expression in the watch window I get the result I expected. I don't know what my mistake is or how to fix it. Thanks for the help.
AI: This looks like an unaligned access, which causes hard faults on Cortex M0 cores. M3 & M4 cores can handle this with some performance penalties.
Basically, when you try to access a memory location using a uint32_t pointer, the compiler generates a 32 bit word access instruction. This word address must be aligned to a 4 byte address boundary. In other words, the address must be divisible by 4.
Your uint8_t buffer may or may not be aligned to 4 byte boundary. Actually, unless they are in structs, compilers seem to align all variables on word boundaries. Of course you can't count on that, so it's better to demand it explicitly. In GCC, you can align variables like below:
uint8_t buffer[200] __attribute__ ((aligned (4)));
Unfortunately, this kind of alignment won't solve your problem in your situation. Because even if the start of the buffer is aligned, you are using uint8_t pointers to access it. So unless the length is divisible by 4, you will still cause unaligned access faults.
You may need to reconsider your memory layout. Or you can manually extract uint32_t from the buffer. memcpy() function is a well known and easy to use solution in these situations.
#include <string.h>
uint32_t val32;
memcpy(&val32, buffer + length - 4, sizeof(val32)); |
H: Finding the kVA loading after improving the power factor
This is a homework question I've been attempting, but the answer I get doesn't match what's given and I don't understand why my method is wrong.
This is the question with answers.
Here's what I've done.
I'm still new to these calculations. Thanks in advance.
AI: Basically you have (a) calculated the kVA loading correctly when the power factor is 0.65. In simple terms it's 55 kW ÷ 0.65 = 84.615 kVA.
You have also (b) calculated the current correctly i.e. 84.615 kVA ÷ 415 = 203.89 amps.
And, for (c) it looks like the set question is wrong because, when you have unity power factor, the power consumed by the load equals the VA hence, the answer is 55 kVA. |
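The arithmetic, spelled out (single-phase, as treated in the answer):

```python
# Parts (a)-(c) of the answer, checked numerically.
P_KW = 55.0      # kW load
PF = 0.65        # initial power factor
V_LINE = 415.0   # V supply

kva = P_KW / PF                 # (a) ~84.6 kVA apparent power
amps = kva * 1000 / V_LINE      # (b) ~203.9 A
kva_unity = P_KW / 1.0          # (c) 55 kVA at unity power factor
```

At unity power factor the real power and apparent power coincide, which is why (c) reduces to 55 kVA rather than the figure given in the set question.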
H: How to Create Clocks on FPGA Board
I am using an A7-100T FPGA development board used for a project. Up to this point, I have only been doing simulations on Vivado with simulated clocks or when I have been using the board, I have not had to create any clock signals due to only having basic combinational logic designs.
What is the simplest way to create clock signals? I have read tutorials and watched videos and it seems you have to create an IP package and use the HDL code as a component within the top-level file on the design?
I need to produce two clock signals at the same frequency with one of them having a 90-degree phase shift.
AI: The A7-100T has an on-board clock and the Xilinx XC7A100TCSG324-1 FPGA itself has a very comprehensive clock generation system built-in.
The datasheet: Xilinx 7 Series Datasheet
Highlights it as: |
H: large inductance compensation methodology in chip power delivery network
I am trying to learn about compensation network design.
Suppose our power delivery network from VDD to the chip has very high impedance at a certain frequency.
I know that we can use a Smith chart to design such a network.
I don't know the exact terminology or whether a Smith chart is the best way.
I'll be glad for a recommendation on a methodology for these purposes.
Thanks.
AI: Suppose our power delivery network from VDD to the chip has very high
impedance at a certain frequency.
That will likely be the root cause of power supply problems, and this is what needs to be fixed. Fixing it can mean a series of distributed decoupling capacitors at each node where the chip's VDD pins are. It can also mean using proper ground planes for 0 volts.
I know that we can use a Smith chart to design such a network.
Well, you could use a smith chart but, these days, you'd use a simulator to mimic the disposition of Vdd and ground and see where the problem frequencies are. Fix with capacitance or increased/improved copper thicknesses and ground plane.
I'll be glad for a recommendation on a methodology for these purposes.
Model the problematic power feed within a simulator. Simulators are good and free and will eat this problem up with ease. |
H: STM32 - Reading EEPROM via I2C Delay Problem
I'm trying to read/write the EEPROM byte by byte, but if I don't put a long enough delay (~1 ms) between read/write operations, I read or write wrong values. This delay takes significant time when there are many bytes to read/write, and 400 kHz loses its meaning. Am I missing something, or is byte-by-byte access inherently slow? Thanks for your time and help.
MCU = STM32F072C8Tx
EEPROM = 24LC64
I2C Settings:
Speed = 400 kHz
Rise Time = 300 ns
Fall Time = 300 ns
Analog Filter = Disable
Digital Filter Coefficent = 0
Basic R/W Code Sample:
HAL_I2C_Mem_Write(&hi2c1, device_addr, mem_addr, I2C_MEMADD_SIZE_16BIT, data, 1, 500);
HAL_Delay(1);
HAL_I2C_Mem_Read(&hi2c1, device_addr, mem_addr, I2C_MEMADD_SIZE_16BIT, databuffr, 1, 500);
AI: Yes, you are missing the time it takes to complete a write, even though the bus runs at 400 kHz.
The first page of the 24LC64 datasheet says a page write takes at most 5 ms. You can write one byte or up to 32 bytes (a full page) at once, but either way it can take up to 5 ms.
So after a write operation, the EEPROM will be busy writing the data and will not respond to any operation until the write is finished.
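Rather than a fixed worst-case delay, a common technique is acknowledge polling: the 24LC64 NACKs its own address while a write cycle is in progress, and ACKs again the moment it finishes, so you can retry until it responds. In HAL this maps onto HAL_I2C_IsDeviceReady(). The sketch below reuses the handle and variable names from the question and is an untested illustration, not a drop-in implementation:

```c
/* Sketch: ACK-poll the EEPROM after a write instead of a fixed HAL_Delay().
   Returns as soon as the internal write cycle ends (typically well under 5 ms). */
HAL_I2C_Mem_Write(&hi2c1, device_addr, mem_addr, I2C_MEMADD_SIZE_16BIT,
                  data, 1, 500);

/* The EEPROM only ACKs its address again once the write cycle is complete.
   Trials = 10, per-trial timeout = 5 ms; production code should bound
   this loop to avoid hanging on a hardware fault. */
while (HAL_I2C_IsDeviceReady(&hi2c1, device_addr, 10, 5) != HAL_OK) {
    /* still busy writing */
}

HAL_I2C_Mem_Read(&hi2c1, device_addr, mem_addr, I2C_MEMADD_SIZE_16BIT,
                 databuffr, 1, 500);
```

Combining this with 32-byte page writes, as mentioned above, cuts the per-byte overhead dramatically.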
H: How can 24 V (two 12 V batteries) x 100 A power a 5 kVA (or 4000 W) inverter?
I am a beginner at this. I know that energy cannot be created nor destroyed. I know you can't use a 3 kVA inverter to run a 5 kVA dynamo (assuming I want it that way). Why then is 2400 W (12 V x 2 x 100 A) able to power some 5 kVA (4000 W at 0.8 power factor) inverters? Is there anything I don't understand?
AI: Most batteries are able to supply far more output current than the "nominal" value you find in data sheets. In fact some will supply nearly "infinite" current until they destroy themselves.
As far as your 100 A batteries supplying enough input to an inverter to generate 5 kVA goes, two possibilities come to mind:
The inverter has the CAPACITY to generate 5 kVA but in this application it's only producing 2.4 kVA, so the batteries are able to supply the demand.
The batteries are, in fact, supplying MORE than 100 A each, and the inverter is able to produce its 5 kVA output, for a while at least.
But the fundamental question that you seem to be asking is: "How can I get MORE than 2400W out of a 24V 100A battery system?"
The answer to that is YOU CANNOT, at least not in this universe. A 24V 100A supply to an inverter, even with 100% efficiency will NEVER supply more than 2400W of output power. |
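The numbers behind that, spelled out (ignoring inverter losses):

```python
# Input power cap at rated battery current vs. what full inverter load demands.
V_BATT = 24.0     # V: two 12 V batteries in series
I_RATED = 100.0   # A, nominal battery rating
PF = 0.8

p_in_max = V_BATT * I_RATED      # 2400 W available at the rated current
p_out_full = 5000 * PF           # 4000 W for 5 kVA at 0.8 power factor
i_needed = p_out_full / V_BATT   # ~167 A the batteries must actually deliver
```

So running the inverter at full output forces the batteries well past their nominal 100 A rating, which is exactly the second possibility listed above.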
H: Capacitance per centimeter
I need to design a 1 pF interdigital capacitor for a MMIC design, so I am using an online calculator to find the dimensions ( https://www.rfwireless-world.com/calculators/interdigital-capacitor-calculator.html); there is also an easy formula for this. Everything is straightforward, but the final capacitance from the formula and the online calculator is in picofarads per cm, whereas I am asked for a 1 pF capacitor. So, I wonder: do I need to account for the per-cm factor when designing it, or should I assume that designing a 1 pF capacitor means 1 pF/cm?
AI: So, I wonder do I need to consider that per cm here when I am
designing it and get rid of it or should I assume to design a 1pf
capacitor means 1pf/cm.
It's likely that the math behind the calculator is approximating the interdigital capacitance by straightening out the interleaved fingers and treating the "problem" as a 2D problem i.e. it is working out the capacitance between two parallel planar electrodes such as in this QuickField article on calculating interdigital capacitance: -
So, when the wireless world calculator talks about capacitance per cm, the "per cm" refers to the length of the line snaking through the mid position between the two electrodes in the picture above. |
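In other words, you divide the target capacitance by the per-cm figure to get the required length; the per-cm value below is a hypothetical placeholder, so substitute whatever your calculator reports:

```python
# Converting a pF/cm figure into the finger length needed for a target value.
c_target_pf = 1.0   # pF, the capacitor you are asked to design
c_per_cm = 2.5      # pF/cm, hypothetical calculator output (use your own value)

finger_length_cm = c_target_pf / c_per_cm   # 0.4 cm of line length needed
```

The "cm" is therefore a design variable you solve for, not something to wave away.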
H: Can an inverter through a battery charger charge its own batteries?
I understand some of these laws of thermodynamics. An alternative to this question, being inventive, is 'Can an inverter power a dynamo (which is driven by an electric motor) to charge the same inverter's batteries?' It may seem redundant, but I am pushing the limits with a beginner's knowledge.
AI: can an inverter power a dynamo
If the dynamo has field windings then, yes it could power the dynamo's stator windings and the main power would come from the electric motor and, that arrangement could top-up charge to the batteries. But, you might just as well not use the inverter and instead, directly attach the battery (via the appropriate circuit) to the dynamo field windings. The battery power into the dynamo field windings will be a small fraction of what could be extracted from the dynamo rotor winding to re-charge the battery.
But this logically leads on to the dynamo being excited by its own rotor output, which is perfectly feasible, so a battery connection or an inverter connection is then redundant. The power from the dynamo that is left after exciting its own windings can then charge the battery that feeds the inverter.
However, if you believe that the electric motor driving the dynamo can also be powered via the inverter from the same battery then that won't work. It can only work if there is a different power source for the motor. |
H: TLP383 ltspice XVII model not working because of encryption?
I tried to test the model of the TLP383 in ltspice which I downloaded from toshiba. TLP383
I added it as I always did with my models downloaded from the internet.
But this time ltspice gives me the error:
Unknown SPICE device type "1" in "17f62b..."
So I wanted to ask if somebody else has experienced something like this, or if anybody has a working model of the TLP383?
Thank you for your help.
Best regards.
AI: You can't open a Pspice model in LTspice. I don't see any SPICE models other than the Pspice one in the linked site.
To elaborate, yes it's true that SPICE is SPICE and LTspice is perfectly fine with unencrypted models made in Pspice, but every Pspice model I've ever encountered has been encrypted. As is the case with this one--note how line 14 in the .lib reads **$ENCRYPTED_LIB. |
H: In STM32, How to check the UART Serial Buffer If They Are Available using HAL Library
I am using an STM32F429 in my project, which connects to an ATmega2560 through UART. In Arduino there is a function, Serial.available(), to check whether the serial buffer is empty or not.
Is there a function in STM32 that is similar to Serial.available() in Arduino?
BTW, I am using HAL libraries in STM32CubeIDE and HAL_UART_Receive(huart, pData, Size, Timeout) to read the contents of UART buffer.
Please help. Thanks!
PS. I've read the HAL library documentation but I'm not sure there is one. I've looked into HAL_UART_GetState but I'm not sure that's it.
AI: AFAIK there is no similar function in HAL. The common way to do it (with circular DMA reception) is:
// index of first unread byte in the RX buffer
static uint32_t rxpos = 0;
// find end-of-data position in the RX buffer; the F429's DMA streams use
// NDTR (CNDTR is the channel register on F0/F1/L1 parts), so use the
// portable HAL macro rather than touching the register directly
uint32_t pos = rxsize - __HAL_DMA_GET_COUNTER(puart->hdmarx);
// pass accumulated bytes to listener
if (pos != rxpos) {
if (pos > rxpos) {
// Process data directly by subtracting indexes
processData(rxbuffer + rxpos, pos - rxpos);
} else {
// First process data to the end of the buffer
processData(rxbuffer + rxpos, rxsize - rxpos);
// Continue with beginning of the buffer
processData(rxbuffer, pos);
}
}
// update starting index
rxpos = (pos == rxsize)? 0 : pos;
You can call the above code from HAL_UART_RxHalfCpltCallback, HAL_UART_RxCpltCallback and, if you have code for processing either IDLE or RTO events, from corresponding handlers as well.
Note that current HAL implementation treats RTO as an error, eliminating the possibility to handle it properly. You'd have to bypass that if you want to get circular DMA transfers handled without added latency.
UPDATE
To answer your question in the comments, note that checking for available bytes in a buffer and using the blocking HAL_UART_Receive call are two different approaches. You did not specify the baud rate or whether you have a variable message length; these two really define the correct way of doing things on an STM32.
But in general, you either need low latency communication, or you don't care how long it takes and your code has nothing to do meanwhile.
In the former case you have to use either interrupt or circular DMA versions of Receive function. With DMA you can check for available data length as described above.
In the latter you can do a loop calling blocking Receive which will either give you expected data or will time out. At low speeds you may even be better handling incoming data byte by byte in UART interrupt handler.
The methods available to you are quite different from Arduino approach. I would suggest to follow the examples available for HAL instead of trying to fit HAL functions into Arduino perspective. |
H: MESFET in LTSpice
It is really confusing to me. For various reasons I am migrating from Microcap SPICE3 to LTSpice. While Microcap had no issue with the Statz model for GaAs MESFETs, I can't figure out how to incorporate the Statz model into LTSpice.
I tried almost every possible combination, but not the right one yet :)
According to LTWiki, LTSpice is also capable of simulations with Statz model of MESFET, see link.
What am I missing here? Can you suggest me step-by-step reference applied specifically for Z type devices?
Edit: SPICE3 model in Microcap is given by
.MODEL T1 GASFET (LEVEL=1 AF=1 ALPHA=2 B=300m BETA=0.08 BETATCE=0 CDS=0.15P
CGD=0.05P CGS=0.3P DELTA=0.2 EG=1.11 FC=500m GAMMA=0 IS=10f KF=0 LAMBDA=0 M=500m
N=1 Q=2 RD=0.62 RG=0.98 RS=1.38 TAU=7P TRD1=0 TRG1=0 TRS1=0 VBI=1 VDELTA=200m
VMAX=500m VTO=-1 VTOTC=0 XTI=0)
AI: You forgot to add the model name, or the type; difficult to say if you're using the default NMF name:
.model n nmf beta=80m vto=-1
+ cds=150f cgd=50f cgs=300f
+ rd=0.62 rg=0.98 rs=1.38
With this change (above), some help for an oscillator circuit (supply starting from zero), and a minor tweak (precharged output cap), the schematic will run:
Not all parameters will be recognized so they will be ignored. If you want to avoid pop-ups from the error log (see picture), simply delete them. |
H: JDS6600 signal generator - broken or normal?
I recently treated myself to a JDS6600-60M signal generator. It's a (very) cheap one but that's because electronics is just a hobby and not something I make money with or anything.
The other day I noticed that at very low frequencies (1-10Hz) the pulse wave looks like this:
10Hz
1Hz
I'm not quite sure if it did this before and when I increase the frequency the pulse gets more and more 'square' as you'd expect. From about 100Hz it looks pretty ok:
Still a bit 'round' but better and from about 200-250Hz upwards it looks fine.
Now I got to thinking and suddenly I realized I had used the signal generator about a week earlier to toggle a relay on and off (just for test purposes) at about 1Hz. And then it dawned on me that this could've caused some pretty serious flyback which may have damaged the signal generator. I had hooked the relay coil straight up to the leads of the signal generator; no flyback diode or anything else.
So my question(s) is/are:
Did I break my signal generator? Or is this "normal" (for cheap signal generators)?
Maybe someone with a JDS6600 can confirm this?
The manual has this to say about it:
From what I gather it should be able to generate a signal from 0 to 15MHz, but I don't know if it's supposed to be guaranteed* to produce a square signal over the entire range. Also: it lets me adjust up to 60MHz just fine for a square wave and I can confirm this on my scope (though at the higher frequencies the square starts to look like a sine, which is pretty normal I think for a 110MHz, 1GSa/s (also cheap) scope).
* As far as cheap devices make, let alone meet, these 'guarantees' of course.
AI: It's quite possible that your oscilloscope is measuring with the coupling feature set to AC. This will block DC levels and produce exactly what you show in your pictures. This symbol might be the clue: -
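You can estimate how much "sag" AC coupling would cause. Through a first-order high-pass, the flat top of a square wave decays exponentially during each half-period; a quick sketch, assuming a coupling cutoff of around 2 Hz (the actual value depends on the scope):

```python
import math

def square_wave_droop(f_signal, f_cutoff):
    """Fractional droop of a square wave's flat top over one half-period
    through a first-order high-pass with the given cutoff frequency."""
    tau = 1 / (2 * math.pi * f_cutoff)   # RC time constant of the coupling
    half_period = 1 / (2 * f_signal)
    return 1 - math.exp(-half_period / tau)
```

At 1 Hz nearly the whole top collapses into spikes, while at 100 Hz only a few percent of tilt remains, which matches what the screenshots show.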
H: LED Driver Pin function
I have this LED Driver - A80604
I need to generate 30V for my LED Driver String for generating 300mA.
In a previous question, I received an answer that I need to fix the current using the LED Driver to 300mA and the output voltage will take care of itself.
My question 1 - So, in that case, I want to understand whether this is a synchronous boost LED Driver circuit ? I am asking because what should I connect to the GATE and GDRV pins to?
My question 2 - What would be the voltage at the pins LED1-LED4 ? Would there be 0V after all the forward voltages of the LED or would there be some residual voltage after the forward voltages of the LEDs?
AI: My question 1 - So, in that case, I want to understand whether this is
a synchronous boot LED Driver circuit ?
If you mean "boost" and not "boot" then no, it's not a synchronous boost circuit because it uses a standard diode here: -
I am asking because what should I connect to the GATE and GDRV pins
to?
You connect GDRV as per the above circuit. The gate pin needn't be used if you don't need the optional circuit shown above.
My question 2 - What would be the voltage at the pins LED1-LED4 ?
It depends entirely on the dimming levels you have set on ADIM (pin 9) AND the natural forward volt drops of the strings of LEDs. Hard to be precise of course because this circuit controls current into the LED strings and, without a detailed LED spec it's guesswork. |
H: ADUM5028-5BRIZ inductivity problem
I would like to build an isolated DC to DC converter for low current.
I have found this nice chip, but in the datasheet two inductances are listed, and not in uH or mH; they are given only in ohms. Can you help me please?
Datasheet: https://www.mouser.de/datasheet/2/609/adum5028-1378717.pdf
AI: This is a very capable and unusual DC to DC converter.
Operation frequency is set internally to 180 MHz (!).
Power is transferred via an internal air cored transformer.
The inductances mentioned in table 20 (as ScienceGeyser indicated) are ferrite beads on the Vss and Ground leads. Their function is to isolate IC generated noise from the power lines. The beads play no part in the frequency determination.
They are specified to have an IMPEDANCE of >= 1.8 kohms across the frequency range 100 MHz to 1 GHz.
Manufacturers of appropriate ferrite beads will specify impedance with frequency in their data sheets.
Two examples of suitable beads are listed in table 20
Manufacturer ............ Part No.
Taiyo Yuden ............. BKH1005LM182-T
Murata Electronics ... BLM15HD182SN1
By examining datasheets for these products you should be able to find equivalent products if these are unavailable to you.
___________________________________________
Digikey
BLM15HD182SN1D - 20 cents in 1's.
BKH1005LM182-T 10 cents in 1's
ADUM5020 5V version. $7.67 in 1's.
Datasheet
Evaluation boards and application notes here
Not quite to spec but reasonable
Taiyo Yuden FBMH3225HM202NT
from here |
H: Transformer overdraw behaviour
TLDR; -
I want to connect such a low-value resistive load to a transformer that it draws ~1.6x the rated output current but I'll only do so for short periods of time so the transformer won't overheat. Aside from a drop in output voltage, is there anything that could occur that I should be aware of or transformers don't care as long as they stay cool?
I'm DIY-ing a JBC soldering station similar to Unisolder and MarcoReps' design and I have a question about the transformer VA rating that I need. This all started when I designed my station around a 130VA, 20V transformer and it worked really well, but the transformer barely got warm to the touch (an IR thermometer clocked it at 36C) even when running at a high duty cycle while trying to heat a large copper bar. My iron works very similarly to JBC's and the above DIY designs, where it only passes half or full waves to the iron with no phase control, reducing noise and interference.
This analysis of the internals of one of the JBC stations reveals that it only has an ~80VA transformer despite the station being listed as a 130W station. Further, based on the Vp values measured, the transformer appears to only output ~18Vrms under load instead of the 23.5V listed on the product page. I do see this other post about the power rating, so I understand to an extent how the voltage numbers make sense, but the power numbers still don't.
Based on all this, how would my transformer behave when I exceed the output current limit? My choice of 80VA, 18V transformer has an output current rating of ~4.4Arms. But the 2.5ohm cold-resistance of the tip would result in a 7.2Arms draw at least when the tip is still cold. From my research, the VA rating for large grid transformer applications is based on heating effects which won't be an issue for me since I'll be operating at very low duty cycles for the most part and my time-averaged power will be <80VA. However, could anything else occur due to these larger current draws? I was thinking saturation of the magnetic field in the core could play a role here? Will this just manifest as a slightly lower peak voltage?
AI: As you mentioned, there are two main problems with overloading a transformer:
1 - Overheating
2 - Voltage drop
Apparently you already have an estimate on both issues, but a detail about the transformer is that when you draw 160% of rated load, the losses in the windings increase to 256% of the rated losses; that is, the transformer can heat up a lot and the lifetime of the insulation can be drastically reduced. Of course, the time it spends under low load can compensate for this excessive loss of life during the overload.
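The 256% figure is just the square-law scaling of copper loss; a one-line check:

```python
def copper_loss_ratio(load_fraction):
    # winding I^2 * R loss scales with the square of the load current
    return load_fraction ** 2
```

copper_loss_ratio(1.6) gives 2.56, i.e. 256% of the rated winding loss.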
About core saturation: in general, there are no problems. The flux densities in the core columns depend directly on the voltages, and in this case I understand you will apply the rated voltage. Due to the overload, the flux density can increase at some points of the core (since the stray field adds to the magnetization field), but the voltage drop, in this case, reduces the magnetization field, and generally things remain far from saturation.
H: Oscilloscope: trigger and multiple periods of the signal
I am trying to understands the basics of an analog oscilloscope trigger. As stated here,
Instead of collecting a single snapshot of the input signal, an
oscilloscope repeatedly updates the display with newly collected
snapshots. Each snapshot is called a trace.
The trigger is used to not overlap the successive traces of the signal on the screen.
Each trace begins when a trigger event occurs: that is, when the signal has a specific slope (positive or negative) and a specific voltage level. If I understood correctly, the trace begins each time a trigger event occurs. However, this would imply that the oscilloscope can show in its display just one period of the signal. Instead, there is plenty of images of oscilloscope displays, using the trigger, with multiple periods of the signal.
For example, from this page:
It seems that in all these examples the trigger is ignored for an arbitrary number of periods after the trigger event.
How is it possible? Isn't this a contradiction?
Edit:
The trigger is used to not overlap the successive traces of the signal
on the screen.
Sorry, I was meaning exactly the opposite. One of the answers points this out.
AI: Once triggered the trace runs to completion (at whatever speed you set the timebase to). Any trigger events occurring during the trace are ignored. After a trace has completed the next trigger event that occurs will start the next trace.
The trigger is used to not overlap the successive traces of the signal
on the screen.
Overlap can be interpreted in somewhat different ways. In this case we use the trigger so that each successive trace is drawn in the same place as its predecessor. So if we have a stable input signal we see an apparently static trace on the scope, even though it's actually being continuously redrawn from different samples of the input signal. |
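The "run to completion, ignore triggers meanwhile" rule described above can be sketched in a few lines (illustrative only, not actual scope firmware):

```python
def trace_starts(trigger_times, trace_duration):
    """Given sorted trigger event times, return the times at which traces
    actually start: each trace runs to completion, and trigger events
    arriving while a trace is in progress are ignored."""
    starts = []
    busy_until = float("-inf")
    for t in trigger_times:
        if t >= busy_until:
            starts.append(t)
            busy_until = t + trace_duration
    return starts
```

With a trace that lasts 2.5 time units and triggers every 1 unit, only the triggers at t = 0 and t = 3 start traces; the ones in between are ignored, which is why multiple signal periods can appear per trace.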
H: What does the flange do over that rectangular cavity antenna?
I'd like to know what this flange contributes to the rectangular cavity just below it.
I assume it's there to reduce edge effects; what other contributions does the flange make to the cavity?
What's meant by infinite flange and finite flange?
Thanks in advance
AI: The flange around a cavity-backed aperture antenna has a significant effect on the radiation pattern. If the flange dimensions are large compared to the wavelength, it can be approximated as extending to infinity, which makes mathematical analysis simpler. If the flange is around a wavelength or less, you need one of the numerical simulation programs. Roughly speaking though, the infinite flange gives a narrower beam, and the smaller flange gives a broader beam.
H: What causes the electromagnetic waves to prefer one dielectric over another?
It's known, and some devices exploit this phenomenon (see the references if you want), that if an electromagnetic source (antenna) is placed between two different dielectric layers, the waves will prefer one over the other according to the frequency.
Here you see an example from one of my simulations.
Half-wavelength dipole antenna: it radiates equally in all directions of ϑ = 90° plane.
Now I put two different dielectrics (yellow = eps 20, blue = eps 2). Note that the permittivity is assumed to be constant on frequency (non-dispersive medium).
Conclusion: at such a frequency (3GHz, in which the dipole resonates), the waves prefer going towards the low permittivity medium (greater amplitude).
At other frequency the situation changes and the role of the two dielectrics may be reversed.
What I'm looking for is not a mathematical explanation, but just an intuitive, physical view. If the antenna were a voltage source and the two materials were two parallel impedances, I'd say that the electron flow (current) would be higher in the easiest path (lowest impedance). But in this case there are no charges flowing, just a wave. What physically causes a wave to prefer a material? Why should its electric and magnetic field be higher in one of them, and why does it depend on frequency?
The electric permittivity is assumed to be constant over frequency, and the magnetic permeability is assumed to be 1. So we can't say that the frequency dependence is due to the material's electric/magnetic polarization varying with frequency.
Example at 2GHz: now the waves prefer the yellow dielectric.
References: here (page 5) and here (page 3), two antennas are showed and they have both a substrate (like ordinary planar antennas) and a superstrate. The last one is used to send more power on the upper direction instead of to the lower one (where there is the substrate).
AI: A dielectric wall reflects waves. The overall behaviour is complex because every time the wave meets a medium boundary it gets partially reflected. Depending on the wavelength, the plate thicknesses, their materials and placements, it is quite possible that the radiation in one direction is stronger than in the other. The radiation is strongest in the direction where most of the waves (incident, reflected, reflected multiple times) travel with the least phase cancellation.
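One way to put a number on "partially reflected" is the normal-incidence reflection coefficient at a single boundary; for lossless, non-magnetic media it depends only on the permittivity contrast (a single-boundary sketch, unlike the multi-layer situation in the question):

```python
def reflection_coefficient(eps1, eps2):
    """Normal-incidence amplitude reflection coefficient for a wave
    travelling from medium 1 into medium 2 (non-magnetic, lossless),
    using n = sqrt(eps_r): Gamma = (n1 - n2) / (n1 + n2)."""
    n1, n2 = eps1 ** 0.5, eps2 ** 0.5
    return (n1 - n2) / (n1 + n2)
```

Going from air into eps_r = 20 reflects roughly 40% of the power (|Gamma|^2), versus about 3% into eps_r = 2, so the high-permittivity wall bounces far more energy back, and the interference of these multiple bounces is what makes the net pattern frequency-dependent.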
H: IMU BNO055 Alternative?
For a robotic application, we bought Adafruit boards of the very popular BNO055 IMU. We can still buy plenty of these modules, but it seems the BNO055 chip itself is unavailable. It even disappeared from the Bosch website without any information.
Why is this chip unavailable and what could be a good alternative that includes sensor fusion?
Ultimately, I would plan to use it with ROS.
AI: The fate of the BNO055 is unknown at the moment. For quite a long time Bosch was pushing its own alternatives without pre-programmed fusion software, like the BMF055. On one hand this allowed developers to make their own changes; on the other you had to pay license fees for the BSX software, which I suppose was the whole point.
Now Bosch has partnered with CEVA Hillcrest Labs. CEVA programs Bosch parts with its own software and sells them as the BNO085. I guess this arrangement is more economically beneficial to Bosch, so the BNO055 started to disappear. Unfortunately, CEVA seems to be even hungrier than Bosch; you cannot even download the datasheet without registering with them.
You may try your luck with BNO085, of course. But in the long run you may be better using one of hundreds 6- or 9-DOF sensors and running your own fusion software separately. |
H: Confirmation on how to check microcontroller memory capacity with Atmel Studio
When compiling code in Atmel Studio, I get the following output showing the percentage of usage for the program and data memory:
Does this output represent the total data usage in the microcontroller?
AI: These figures will not include any memory used for the stack or any memory that you allocate dynamically at runtime. Also, if you have connected external memory devices to the microcontroller then the tools won't include that memory in the calculations.
Other than that, this should be your total memory usage. |
H: Transform TFT to AMOLED
I was wondering if it is possible to transform a TFT LCD into an AMOLED, as both are active matrices.
The composition seems to be similar.
AMOLED
TFT (LCD)
And if it wouldn't be possible, could I at least keep just the part that is very hard to manufacture (the TFT itself)? Maybe do the rest myself?
AI: A) no.
B) they're not really similar.
C) the other parts are also very hard. |
H: How to determine if a conventional electronics circuit is a better choice than an MCU?
I am a software engineer and have limited experience with conventional circuitry. Recently, I have built a prototype device that uses an Arduino Uno, a Bluetooth module, an array of temperature/humidity sensors, and an OLED readout display, which currently relies on I2C. The function of my device is to simply wait to receive a BT signal, which wakes it from low power, then enables data acquisition by the temp. sensors for a fixed length of time while averaging at each read interval, and then displaying the averaged temp. & humid values on the OLED.
Ultimately, I am trying to determine if I can move away from the ATmega328p chip and design an MCU-less circuit with conventional electronics for a production model. My project objectives motivating this idea are: achieving lower material and labor costs and eliminating the dependence on software and firmware. Both are equally important to me. However, I do not know if I am creating other substantial obstacles for myself in doing so. How can I determine 1) if what I'm asking is even possible to do given the functional requirements of my device and 2) if the MCU-less approach is the right choice based on my project objectives?
AI: Yes, you could reduce the part count by getting an IC that has Bluetooth and a microcontroller in one IC (there are some to look at here), which are becoming more prevalent on the market right now.
It wouldn't be a MCU-less circuit necessarily because the MCU and the bluetooth module would be combined. |
H: RFID antenna beam direction and how to control it
I've been working on some RFID stuff lately and I'm wondering if someone could help me understand how the beam direction works?
I'm having problems with my RFID reader picking up signals from tags that are behind it. From what I understand, the beam that powers the passive tags should be a cone and therefore it shouldn't be able to reach tags that are behind the reader, but this is being done by both of my RFID readers (the readers are Chafon UHF high performance integrated reader, model CF-RU5309). I did some more digging into my RFID reader and it's apparently omnidirectional but I'm not sure if this is possible. Does this mean it can read any tag within a circular area around it?
I've tried to put up some aluminum shielding on the wall behind where the reader is installed but that doesn't seem to have done much other than stopping tags that were really far back behind it from being read, while close ones can still be read. The shielding is a 3ft square placed at the center of the reader, with another rectangle of aluminum shielding that covers up to 2 ft above where the tags sit behind the reader. The tags being read are xerafy microX ii and are set up so that there's two tags on a large aluminum piece of metal. I've included a quick sketch of my setup and a link to the products I'm using. The tags being read are about 3 feet off the ground while the reader is about 10 feet above ground.
http://chafonrfid.com/productdetails.aspx?pid=692
https://www.atlasrfidstore.com/xerafy-microx-ii-rfid-tag/
The antenna is claimed to be a 9dBi circular antenna. I'm not sure if this means it can't be omnidirectional, but I'm pretty sure I found online that the reader is stated to be omnidirectional.
AI: The radiation pattern from an antenna this size at 900 MHz isn't going to have a sharp beam cutoff, like a cone. It's more like a soft, blurry bubble. Also radio waves will diffract (bend) at metallic edges, and scatter (reflect) from metallic surfaces.
You're relying too much on the RFID antenna's directivity. Use distance more. Three recommendations:
Mount the antennas as close as possible to the monitored zones. Maybe you can suspend them more directly over the cart path as it enters the door. The face of the device should point directly downward at the desired read zone. This gives the shortest possible distance to the target.
Turn down the power output. Remember 0 dBm is not zero power. Can you set it to -10 or -20 dBm?
The link indicates this device can report received signal strength (RSSI). This should be significantly lower when pings return from the stray device area. Use it to discriminate between desired and undesired readings. |
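For the third point, the RSSI discrimination can be as simple as a threshold on the reported reads (the data layout here is illustrative, not the Chafon API):

```python
def filter_reads(reads, rssi_floor_dbm):
    """Keep only tag reads whose signal strength suggests the near zone.
    `reads` is a list of (tag_id, rssi_dbm) tuples."""
    return [tag for tag, rssi in reads if rssi >= rssi_floor_dbm]
```

The threshold would be set empirically: log RSSI values for tags in the desired zone and in the stray area behind the reader, then pick a floor between the two distributions.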
H: Can you connect two LM317s either in series or parallel to help spread out heat?
I am designing an adjustable voltage regulator.
The regulator will be able to switch between maximum Vout settings of 9V, 12V, 15V and 18V and will have a variable resistor that can adjust the voltage from 1.5V up to its selected maximum.
The problem is the amount of heat dissipated.
The input will be 21V, to compensate for voltage drop, and the max current that needs to be handled is 450mA.
Using the power dissipated formula P=(Vin-Vout)*Iout.
The power dissipated will be P=(21-9)*.45=5.4 watts.
The maximum power that an LM317 can safely dissipate is 15 watts with a heatsink. 5.4 watts is well below that, which is good, but I'd still like to err on the side of caution and spread the heat across two LM317s if possible.
AI: A couple of suggestions.
Forget about the LM317. Consider a DC-DC converter instead. There are a lot of choices that can work.
Arrange your resistor selection to be in series. Your approach could set a higher voltage than you intend if more than one switch is on.
If you are concerned about DC-DC ripple, you can post-regulate with an LDO (not the 317 - it has a high overhead), which is also where you would do a fine adjust. |
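To put numbers on why a DC-DC converter is attractive, here is the linear pass-element dissipation at each setting using the question's figures (21 V in, 450 mA; assuming the full current can be drawn at every setting):

```python
def linear_dissipation(vin, vout, iout):
    # power burned in a linear regulator's pass element
    return (vin - vout) * iout

for vout in (1.5, 9, 12, 15, 18):
    print(f"{vout} V out: {linear_dissipation(21, vout, 0.45):.2f} W")
```

Note the worst case is not the 9 V setting but the 1.5 V bottom of the adjustment range, where a linear regulator would have to burn almost 8.8 W.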
H: Why does the resistance of a brushed Dc motor from an experiment kit vary?
I am currently performing an experiment in which I measure the current and voltage of a motor that experiences changes in load. I am using a 3V supply (a 3V battery), however the voltmeter's reading (connected to the terminals of the motor) changes from 3V at no load down to 1V at max load. I understand how the current changes, since less back-EMF is produced at lower speeds. I also understand how using V = IR could result in that change of volts. But wouldn't that require constant resistance? And that is not the case with the values I have. For example, I tried calculating R using the equation and each time I got a different answer, which itself increased with the load. Is this a case of simple experimental error or am I missing something rudimentary? And if the voltage is changing, is calculating power with
P = I*3(V) all through the experiment incorrect?
AI: A motor isn't just a resistor. It's not just a resistor and BEMF source either. You cannot use V=IR to try and calculate a motor winding resistance while it is spinning because a bunch of other variables are also present.
At the absolute minimum, your model should also include an inductance:
simulate this circuit – Schematic created using CircuitLab
You can then measure the current and voltage at the motor terminals at various supply voltages and RPM/loads to figure out what the kV (BEMF voltage per RPM), inductance, and winding resistance are.
But you will need to be able to measure the RPM for this since the impedance drop presented by the inductance varies with frequency which varies with RPM (and so too does the BEMF voltage).
But if you have a smart meter to measure things that is able to separate out the reactive voltage drop due to the inductance from the resistive voltage drop, it will still measure a higher resistance while running, and the faster it runs the higher this resistance will measure.
The rotating magnetic field induces eddy currents that circulate within the iron core itself. These only appear while the motor is running and increase with frequency (RPM). They manifest as a voltage drop, and at first glance you would think they appear to be just like the voltage drop due to the inductance, with all the characteristics I listed. But they are not. They have one critical difference: they are still resistive losses that permanently remove energy from the system, unlike the inductance, which absorbs energy only to release it later. So there is actually a fixed resistor in the motor due to winding resistance, but the eddy currents in the core also produce the equivalent of a second resistor which is frequency dependent.
If this is for a science fair, you can try to extract those parameters, and if you do it well, you should be able to roughly predict how the motor will behave.
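If you can also measure RPM, the simple steady-state model V = I*R + kE*w has two unknowns, so two operating points give a rough estimate of R and the BEMF constant. A sketch (this ignores the inductance and the eddy-current effect described above; the numbers in the test are made up):

```python
def estimate_motor_params(v1, i1, w1, v2, i2, w2):
    """Solve V = I*R + kE*w for winding resistance R (ohms) and BEMF
    constant kE (V per rad/s) from two steady-state operating points,
    by Cramer's rule on the resulting 2x2 linear system."""
    det = i1 * w2 - i2 * w1
    r = (v1 * w2 - v2 * w1) / det
    ke = (i1 * v2 - i2 * v1) / det
    return r, ke
```

Feeding in two (voltage, current, angular speed) measurements at different loads recovers both parameters; if R comes out wildly different between point pairs, that spread is itself a measure of the eddy-current and other unmodelled losses.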
H: simplifying RTD (resistance temperature detector) units
Using an RTD, the norms give you dissipation constants in ohms per ohm per degree C. I don't understand this, as it should simplify to simply degrees C? What is the meaning of this unit?
Thanks!
AI: That is not "dissipation constant", it is the temperature coefficient of resistance.
What they are saying is that if you have a 100 ohm RTD it will be 100 ohms at 0°C and 138.5 Ohms at 100°C (using the most common DIN 43760 standard with \$\alpha\$= 0.003850).
So the average resistance change per degree C from 0°C to 100°C is 0.385 ohms, and that is 0.00385\$\Omega/\Omega/°C\$ (base resistance taken at 0°C).
Similarly, a 1000\$\Omega\$ RTD with \$\alpha\$=0.00385 will change from 1000 ohms at 0°C to 1385.0 ohms at 100°C.
RTDs are nonlinear so different temperatures and ranges (other than the commonly used 0°C/100°C) will result in slightly different numbers for the same RTD.
The three different numbers you have represent slightly different platinum alloys or made with different construction (thin film vs. wirewound on pure alumina etc).
You can use \$°C^{-1}\$ as the unit, the dimensionless \$\Omega/\Omega\$ are to remind you of what it represents. |
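A short sketch of how \$\alpha\$ is applied over the standard 0°C to 100°C span (linear approximation; the real Callendar-Van Dusen curve deviates slightly between the endpoints):

```python
def rtd_resistance(r0, alpha, t_c):
    # linear approximation: R(T) = R0 * (1 + alpha * T), T in degrees C
    return r0 * (1 + alpha * t_c)
```

rtd_resistance(100, 0.00385, 100) gives the 138.5 ohms quoted above, and the same alpha scales directly to a 1000 ohm element (1385.0 ohms at 100°C).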
H: Does anyone know of a way to control two sets of switches at once?
I would like to control the two switches in the picture at the same time.
So for example when the left switch is in position 1 the second switch is also in position 1.
Is a 2-pole rotary switch what I'm looking for?
Also sorry if the way I used the schematic symbols for the switches doesn't make sense.
Basically I want the switches to select between different resistor branches.
AI: Yes, a 2P4T rotary switch will work. E-Switch and C&K are companies I have used.
The schematic does not show the resistor values. If they are above a few kilohms, then an equivalent CMOS analog switch could work. You would still need at least a 1P4T switch (or its electronic equivalent), but the switch now would handle control signals rather than the actual circuit signals. With this, the circuit performance might improve, with the wire runs to the switch being replaced by much shorter PC board traces.
What is the purpose of R7-R10? |
H: Proximity sensor to measure object presence, speed and orientation
As depicted in the image, I'm looking for sensors to incorporate in a project where I have to detect object presence in short range (10-100mm), speed (passing from one sensor location to another, 0.1-10m/s), distance and orientation (just roughly). Space is fairly restricted, and I only have the option to detect object from one side. After a fair bit of research, it seems the only option I have is an emitter-and-receiver type of proximity sensor.
LIDARs draw too much power and are only good at long range. They might be overkill for this purpose, and they also fall in a price range that I can't afford (USD 100+). Ultrasonic sensors are cheap but may fail to detect object distance if the orientation exceeds a certain angle, as shown in an experiment carried out in this video (VL53L0X vs HC-SR04 experiment).
VCSEL based Time-of-Flight sensor (VL53L1X ToF Distance Sensor) and IR Range Finder (Sharp GP2Y0A21YK0F IR Range Finder (Distance) Sensor) are affordable (USD 10-30) and accurate enough in short distance, and don't seem to be affected by color and orientation of the object. All is nice if the object only moves slow enough.
Detecting object presence at lower speed is not an issue, and neither is reading its distance. Orientation can be calculated from changes in distance over time. But the ToF sensor samples only up to 50-60Hz, and the IR Range Finder sensor has an update period of just 40ms (a 25Hz sampling rate). If the object is moving at its maximum speed of 10m/s (which is really not all that fast), a ToF sensor sampling at 50Hz (a 0.02s period) can only trigger a reading if the objects are at least 200mm apart.
Am I looking at the wrong sensors for my application? Or have I misinterpreted the specs? I would expect sensors taking advantage of the speed of light to operate at least a few orders of magnitude faster than ultrasonic sensors.
AI: I have had good success using IrDA receivers and recessed emitters with a binary code to filter out interference, achieving up to 1m transmission and 1/3m reflection at 100kbps with 7 parallel sequential emitters and 7 detectors. The trick for my app was to get a 1 deg beam width by choosing a 6 deg emitter and recessing it in a non-reflective hole to reduce the aperture window further.
But yours is a tricky optical problem that demands good eyesight, with cross-eyed emitters at various angles to detect the depth of an undefined object without distant walls reflecting more than the object, and with signal thresholding and AGC to capture the reflection fast enough to meet your acuity requirements.
Stereo vision cameras or fast rotating mirrors, such as those used by high-value autonomous vehicles, are the obvious choice, so I cannot give you any better solution than a plurality of narrow-beam bandpass modulated carriers with a suitable bandwidth and fast AGC design to capture your objects using an array of 1 deg apertures. We used a PIC CPU with 500 bytes of carefully expert-crafted code to detect moving objects with a distinctive shape signature at only 1m/s, with parts as small as a resistor lead and others as big as a shopping cart.
H: The relation between a zero-valued constant of integration and the DC component
I may be asking a really easy question.
Consider a circuit made of an AC source and a coil.
\$L:=\$self inductance of the coil.
\$v(t):=v_0*\sin(\omega t)\$ (The voltage between the terminals of the coil.)
\$i(t):=?\$ (The current which flows through the coil.)
We want to know the current which flows through the coil.
\$v_L(t):=-L\frac{di}{dt}\$ (The induced EMF which arises at the coil)
\$-v_0*\sin(\omega t)+L\frac{di}{dt}=0\$ (kirchhoff's voltage law)
\$L\frac{di}{dt}=v_0*\sin(\omega t)\$
\$\frac{di}{dt}=\frac{v_0*\sin(\omega t)}{L}\$
\$\int\frac{di}{dt}\,dt=\int\frac{v_0\sin(\omega t)}{L}\,dt\$
\$\int di=\frac{v_0}{L}\int\sin(\omega t)\,dt\$
\$i(t)=\frac{v_0}{L}\frac{(-\cos(\omega t))}{\omega}+C\$ (\$C\$ represents the constant of integration)
\$=\frac{-v_0}{\omega L}\sin(\frac{\pi}{2}-\omega t)+C\$
\$=\frac{v_0}{\omega L}\sin(\omega t -\frac{\pi}{2})+C\$
The below statement is the problem for me.
The textbook substituted \$0\$ for \$C\$ and states that the DC component is assumed to be \$0\$.
Currently I can't see why the value of the constant of integration is zero, or what the DC component means in this circuit.
Can anyone tell me some idea(s) or the website(s) which describe(s) of it?
AI: $$i(t)=\frac{v_0}{\omega L}\left(-\cos(\omega t) + C\right)$$
Current through an inductor cannot change instantly with an AC sinusoidal voltage supply, so we can use $$i(0)=0$$
and we get $$C=1$$, and hence the total response would be
$$i(t)=\frac{v_0}{L}\frac{(-\cos(\omega t)+1)}{\omega}$$
$$\frac{v_0}{L\omega}$$ is the DC component of the current.
Why do we ignore the DC component in steady-state analysis?
For a purely inductive circuit we cannot ignore the DC component even in steady-state analysis, but a practically pure inductive circuit is impossible; a small resistance is always present, so the constant-of-integration term for the R-L circuit is $$A e^{-(R/L)t}$$
At steady state it decays to zero, and that is why, for steady-state analysis of a purely inductive circuit, the constant of integration is assumed to be zero. |
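A quick numeric sketch (component values are assumed purely for illustration, not taken from the question) shows how fast that exponential term dies out:

```python
import math

# Assumed example values (not from the question): L = 0.1 H, R = 1 ohm
L, R = 0.1, 1.0
tau = L / R                      # R-L time constant, here 0.1 s

# The constant-of-integration term decays as exp(-t/tau); after a few
# time constants it is negligible, which is why steady-state analysis drops it.
for n in (0, 1, 5, 10):
    print(f"t = {n:2d}*tau: transient factor = {math.exp(-n):.6f}")
```

After five time constants the transient is below 1% of its initial value, so only the steady-state sinusoid remains.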
H: Instrumentation amplifier output attenuation in LTSpice
My task was to design an instrumentation amplifier to a given specification and simulate it in LTspiceXVII. We were told to use only one type of AC source, so to get the inverted signal I used a simple inverter with unity gain. I noticed that keeping the resistance values small gave me the desired output, but increasing the resistances to large values degraded the output drastically.
This is the circuit where I used 1kohm resistors in inverter.
And this is where I used 270kohm resistors.
I cannot figure out why the simulation is giving me such results. The op-amps are not totally ideal, but they should still behave ideally even with 270kohm resistors, since their actual input resistance is very large. Any help would be appreciated.
AI: Your amplifier is saturating. You can't expect sensible results from the AC analysis if the amplifier is not behaving more-or-less linearly.
Doing a transient analysis before the AC analysis will make this obvious.
Your expected DC gain is almost 600, so about 20mV of offset at the input will saturate the output even with zero signal. Calculate the DC offset from your 270K resistors and the typical input bias current and you will have your answer.
In a real design for a serious application you would have to consider the worst-case Ib which is about 6x worse than the typical. |
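To put rough numbers on it (the bias current here is an assumption for illustration, not from any particular datasheet):

```python
# Hypothetical numbers: Ib is an assumed typical input bias current.
Ib = 30e-9          # 30 nA, assumed
R = 270e3           # the inverter resistors from the question
gain = 600          # approximate DC gain mentioned above

v_offset_in = Ib * R              # offset developed across the 270k
v_offset_out = v_offset_in * gain
print(f"input offset = {v_offset_in*1e3:.2f} mV, at output = {v_offset_out:.2f} V")
```

With those assumed numbers the output offset alone exceeds the supply rails, which is exactly the saturation described in the answer.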
H: Instrumentation Amp design for breaking ground loops between systems
We're in the midst of designing a device for measuring analog signals and have been testing our initial prototype. We had a single-ended gain stage before the ADC, but due to the connections in the rest of the system we are experiencing ground loops through the signal cable. These are causing offsets and throwing a heap of noise onto the signal.
Our initial solution has been to use an instrumentation amp as a pseudo-diff amp with a pulldown on the inverting input, as shown below.
I understand R3 prevents the inputs from floating when no device is connected while allowing the grounds of the two devices to be at different potentials. I'm having a hard time figuring out why the value is so low, though (the design is recycled from an older device). The signal we're interested in goes down to a few mV, and offsets of 0.1mV are acceptable. With input bias currents in the nA range for in-amps, I would think R3 could be much larger.
This comes about because there is a use-case for having numerous (10-20) of these amplifiers in parallel. With 10-20 R3's in parallel the impedance between the two system grounds will drop to 5-10 Ohm and the 'break' in the ground loop will be much less effective.
What might be the reason for keeping R3 low? I'm hoping to optimize it for some larger value but first need to understand the trade-offs.
AI: Probably there are issues with the input common-mode range of the in-amp. They are very peculiar in that respect, depending on the internal architecture (often 2-3 pages of the datasheet are devoted to the CM range in various conditions of use).
The 100 ohm is often also a safety drain for remote ground lift: same issue but different purpose.
Maybe you need an isolation amplifier if you have these problems (they are not cheap). |
H: Understanding the curves of a MOSFET
I am trying to understand the curves of a MOSFET. Sorry if the question is very basic.
The red point is in the saturation zone of the MOSFET; therefore, the drain-source voltage must be 0 V, because at this point the MOSFET is in saturated conduction at maximum current — yet on the X axis of the graph (Vds) the red point marks 10 V.
AI: therefore the source drain voltage must be 0v because at this point
the mosfet is saturated conduction at maximum current
No, you have this wrong. Maybe you were perhaps thinking of the BJT saturation region (when the collector-emitter voltage is close to 0 volts)? If so, then you'd be correct but, it's the other way round for a MOSFET - the channel is saturated rather than the base/collector on a BJT.
From Wiki on MOSFETs: |
H: What exactly is electric potential (what's wrong with my picture of it)?
In my book, the potential at a point is defined as the work done on a charge to take it from infinity to that point against the field, whereas the gravitational potential of a ball is the work done on it to take it to a height against the gravitational field.
But the problem with this is that, according to it, the potential of an electron should depend only on its distance from the positive terminal (clarification at the end). So if an electron passes through a resistance, its potential drop should not increase, because that's like saying that if a falling ball encounters a wall and breaks through it, the drop in its potential energy will be higher than for the same distance (the wall's thickness) traveled through air, which is wrong.
But this is also wrong. So what exactly is electric potential?
(Clarification) I said that the potential of an electron depends only on its distance from the positive terminal because potential is the amount of potential energy stored in an electron due to the force from the holes (at the positive terminal). Comparing it to the earth and a ball will help, so let's do it: here the holes are like the earth attracting the ball (electron), and potential energy is stored when we lift the ball to a height against gravitational attraction (recharging the battery by taking electrons to the negative terminal). Now we can use that potential energy by letting the ball go, converting it into kinetic energy, and use the kinetic energy to do work (e.g. letting the ball hit a box and make it move). We could also use the potential energy stored in electrons to do work by letting the electrons flow and using the kinetic energy of the moving electrons, but they are too small and move too slowly (a few cm/second) to be useful that way, so we cleverly use relativity to make them do work: how relativity makes magnets work (there is a problem in the video; I will correct it below).
But the point is that electrons and holes only know how far they are from each other (just as a ball's potential energy depends only on its distance from the earth, not on the resistance of the medium through which it travels), so how can the potential energy of electrons depend on anything else, like the resistance of the wire?
Correction to the video: a likely question after watching it is why, when the positive charge (cat) is stationary, the moving negative charges don't contract and attract the positive charge (cat).
Answer to that question: when the cat is stationary, in its frame the only things moving are the electrons (and not the space between them). So the electrons get squished, but the distance between their centers is the same, and the charge density remains the same.
However, when the cat is moving, everything moves except the electrons in the cat's frame. So the protons and the space between them contract, effectively changing the charge density and creating an electric force in the cat's frame, or a magnetic force in the stationary frame.
AI: Not gonna read your entire question, but it seems the explanation in your book is overly complicated for no reason. When people cannot explain things simply, it usually means they don't understand them themselves.
If you relate to a cylinder filled with water that has a hole at the bottom:
The potential (Volts) is the "pressure" at that hole
The current (Amps) is how much water goes through that hole
The power is the multiplication of these two
The energy is the power over time.
I think this relates to your gravitational field but is handier to understand. |
H: Rated voltages for the Cage clamp connectors?
What does the rated voltage of a connector mean? How should we select a connector for a 230V AC application?
Let us say: can I use this connector for my 230V AC application?
https://www.phoenixcontact.com/online/portal/de/?uri=pxc-oc-itemdetail:pid=1904969&library=dede&pcck=P-11-01-05&tab=1&selectedCategory=ALL#Materialangaben-Geh%C3%A4use
AI: Since all of the ratings are above 230 V, you should be okay to use that connector. Each of the different voltages given is the connector's rating under those specific test conditions (nominal & surge ratings). |
H: Would a MOSFET driver be a suitable replacement for this circuitry and would it reduce power dissipation in the FET?
This is a follow on from a previous question here.
I asked if replacing a MOSFET with a lower RDSon would reduce the heat generated by the MOSFET and the answer I got was that FETs with lower RDSon have higher gate capacitance.
This would increase the time it takes for the FET to fully switch on and could potentially increase the power dissipation.
My follow on question is this:
Would replacing the circuitry before the FET with a dedicated high-side driver provide any benefit?
I would be looking to decrease the switching time if I were to use a FET with lower RDSon (and in turn higher gate capacitance?)
In order to control the MOSFET from a microcontroller pin, there is already a level shifter from 3.3V to 12V which in turn then goes to a push-pull transistor configuration. This seems a fairly solid design to me and I'm not sure if there is any benefit in changing this for a driver. A cost increase would not be an issue in this design. Any direction would be appreciated.
Full circuit details are in the previous question, but here is the schematic as requested:
AI: PWM switching purposes (VGS @ -12V, PWM is 20% @ 200us). The FET is switching in to a 200uH, 10ohm load
There is no freewheel diode, so when the driver brings the gate to +12V to turn off the FET, current in the inductive load keeps flowing and brings down the FET drain to whatever voltage is negative enough to exceed its Vds rating and make it avalanche. This means most of the energy stored in the inductor ends up dissipated in the FET.
Solution: add a freewheel diode, like a schottky or a fast rectifier. Either between pins 5 and 6 of the connector (lower losses, slower current drop at turnoff), or between drain and ground (higher losses in the 10R resistors, faster current drop).
At 10kHz there is no need for a special FET driver. If it doesn't switch fast enough, decrease value of R25 to give more base current to the BJTs. But don't make it switch too fast, you're not making a 300kHz switching converter, so there's no need to make more high frequency noise than necessary.
EDIT:
Since this is to generate a magnetic field for a metal detector, I wonder if the avalanche bit is actually part of the design, to get the current and magnetic field to fall very quickly. Here's a simulation:
In this case, a freewheel diode would completely screw things up by removing the sharp edge.
EDIT: How to get rid of the heatsink.
I've used a NMOS because better high voltage devices are available. You can use a PMOS, it will have a bit higher losses, just flip the schematic upside down. In fact you can use your existing circuit since the only change is a cap and a ferrite bead.
Here it goes:
The FET turns ON.
Current rises slowly in load inductor L2, until it reaches the desired value.
FET turns OFF. L2 is now in series with C1, and we have a LC series resonant circuit. It does exactly one half-period at its resonance frequency, and quickly flips polarity of current through L2, which makes the desired quick change in magnetic field.
At the end of the resonance half-period, voltage on C1 is back to zero and current in L2 has reversed direction. The FET's body diode turns on, and the energy stored in the inductor returns back to the power supply.
It returns about 90% of the energy stored in the inductor to the power supply, not that bad.
L3/R12 model a ferrite bead that could be necessary to tame a burst of ringing when the FET body diode turns off, because FET body diodes tend not to have the best fast/soft recovery.
The resonant frequency of L2/C1 is not important, but the peak voltage is, and peak voltage depends on the value of C1. So, C1 value should be just right to have the peak voltage as high as possible to get a sharp change in current, but not high enough to push the FET into avalanche, since the point was to avoid dissipating power in the FET. A proper capacitor type should be used (probably high voltage film or C0G).
Note increasing the power supply voltage will decrease total power used, since after implementing this energy recovery trick most of the power goes in heating the internal resistance of the 200µH coil. A higher supply voltage means it will take less time to reach desired current, so while instantaneous power dissipation will remain the same, it will be shorter, so lower total amount of energy wasted.
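As a rough sketch of picking C1 (the peak-current and voltage-limit numbers here are assumptions, not measured values):

```python
import math

# Assumed illustration values: 200 uH coil (from the load), 1 A peak
# current, and a hypothetical 200 V limit to stay out of avalanche.
L = 200e-6
I_pk = 1.0
V_max = 200.0

# Energy balance for the resonant flip: (1/2)*L*I^2 = (1/2)*C*V^2
C_min = L * I_pk**2 / V_max**2
f_res = 1 / (2 * math.pi * math.sqrt(L * C_min))
print(f"C1 >= {C_min*1e9:.1f} nF, resonant frequency = {f_res/1e3:.0f} kHz")
```

A smaller C1 gives a higher peak voltage and a sharper current edge; the energy balance just bounds it so the FET stays below its rating.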
Here is a simpler version, with a switch instead of the FET. There is no body diode, so the LC oscillation goes on, but if there was a body diode, it would stop when the FET drain voltage goes to zero, indicated by the arrow. |
H: Complex impedance calculation for resistors and coil
I have this circuit and I need to calculate the total complex impedance.
Note: the \$r\$ is internal resistance of the coil so how can we calculate the complex impedance?
The coil and the resistance \$R\$ are in parallel.
AI: Well, the impendance of this circuit is given by:
$$\underline{\text{Z}}_{\space\text{in}}=\text{R}\text{||}\left(\text{r}+\text{j}\omega\text{L}\right)=\frac{\text{R}\left(\text{r}+\text{j}\omega\text{L}\right)}{\text{R}+\text{r}+\text{j}\omega\text{L}}\tag1$$
Using the numbers, we get:
$$\underline{\text{Z}}_{\space\text{in}}=\frac{220\left(100+\text{j}\cdot2\pi\cdot50\cdot550\cdot10^{-3}\right)}{220+100+\text{j}\cdot2\pi\cdot50\cdot550\cdot10^{-3}}=$$
$$220-\frac{619520}{4096+121 \pi ^2}+\frac{106480 \pi \text{j}}{4096+121 \pi ^2}\approx102.893+63.233\text{j}\tag2$$ |
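This is easy to check numerically with Python's built-in complex arithmetic:

```python
import math

R, r, L, f = 220.0, 100.0, 550e-3, 50.0
w = 2 * math.pi * f
ZL = complex(r, w * L)       # coil: internal resistance plus j*omega*L
Zin = R * ZL / (R + ZL)      # parallel combination with R
print(f"Zin = {Zin.real:.3f} + {Zin.imag:.3f}j ohm")
```

which agrees with the closed-form result above.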
H: Trapezoidal wave rms value calculation
Does anybody know how to calculate this wave's RMS value?
The answer is
I use Mathcad to prove it but I still can not get the answer.
Hope someone can give me help.
AI: Well, the general formula for the RMS current is given by:
$$\overline{\text{I}}=\sqrt{\frac{1}{\text{T}}\int_0^\text{T}\left(\text{I}\left(t\right)\right)^2\space\text{d}t}\tag1$$
So, in your case:
$$\overline{\text{I}}=\sqrt{\frac{1}{\text{T}_\text{on}+\text{T}_\text{off}}\cdot\left\{\int_0^{\text{T}_\text{on}}\left(-\text{I}_0\right)^2\space\text{d}t+\int_0^{\text{T}_\text{off}}\left(\text{I}_\text{a}+\frac{\text{I}_\text{b}-\text{I}_\text{a}}{\text{T}_\text{off}}\cdot t\right)^2\space\text{d}t\right\}}\tag2$$ |
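A brute-force numeric check of that formula, with assumed example values for the currents and times (the question's actual numbers are in its figure):

```python
import math

# Assumed example values, for illustration only
I0, Ia, Ib = 2.0, 1.0, 3.0
Ton, Toff = 1e-3, 2e-3

# Closed form: the ramp segment integrates to Toff*(Ia^2 + Ia*Ib + Ib^2)/3
closed = math.sqrt((I0**2 * Ton + Toff * (Ia**2 + Ia*Ib + Ib**2) / 3)
                   / (Ton + Toff))

# Numeric check by midpoint sampling of the piecewise waveform
N = 200_000
T = Ton + Toff
acc = 0.0
for k in range(N):
    t = (k + 0.5) * T / N
    i = -I0 if t < Ton else Ia + (Ib - Ia) * (t - Ton) / Toff
    acc += i * i
numeric = math.sqrt(acc / N)
print(f"closed = {closed:.4f} A, numeric = {numeric:.4f} A")
```

The two agree to four decimal places, which is a handy sanity check when Mathcad disagrees with the book.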
H: How do these FET's work back to back?
Self-taught electronics hobbyist here; I might be missing/misunderstanding something obvious.
I'm designing a circuit and added a battery protection based on the Diodes Incorporated AP9101C (Product page).
I don't really understand how the FET's work in the "typical application circuit":
As I understand this (and since both sides are meant to be "ground", this is a bit confusing), these are two N-type enhancement FETs. So,
source should be at a lower potential than the drain,
Vgs should be above a certain voltage to conduct.
According to the pinout and the operation description, DO is used to control discharge and CO to control charging.
Looking at the functional diagram, it seems no major current is conducted through the chip itself, so whatever goes from/to P- from/to battery ground goes through the FETs.
However, whichever direction the current flows, there's always one FET that is "reverse biased" (I don't know the exact term). As far as I understand, FETs are not meant to be used bidirectionally.
So, in case the battery is discharging, (conventional) current is flowing from P- to GNDbat. In that case, Q1 is in the correct direction, but Q2 is not. In that case, is the current through Q2 flowing through the body diode (the diode from source to drain?). Or am I misunderstanding something?
AI: As far as I understood, FET's are not meant to be used bidirectional.
They sure can operate bidirectionally but, with one proviso; in the reverse direction, you cannot get the MOSFET to fully turn-off (because of the bulk diode) but, in all other respects, the MOSFET will turn-on to a low value \$R_{DS}\$ when the current is in either direction providing that the gate-source voltage (for N channels) is sufficiently positive: -
Picture above from here. |
H: How should I interpret this CAN bus protocol - J1939
I have been using J1939-21(Data link layer) many times, but now I'm confused.
First, I know that the J1939 protocol is built up from a data frame, and that data frame looks like this in practice:
<ID> <DATA OF 8 BYTES>
The rest is handled by the CAN-bus controller.
The ID describes what type of message it is, where it comes from, and where it wants to arrive.
The DATA describes 8 bytes of different 8-bit values from 0x00 to 0xFF. That's all! Super simple!
But how should I interpret this table with a J1939 protocol from this Scania site?
https://til.scania.com/w/bwm_0001161_99
If we begin with the ID: here we can see that the meaning of this message is 19FF, and the message goes from 4A and wants to arrive at 85. So the ID message is 0x19FF85FA in hex.
Now to the data. As I know, every CAN-bus message has 8 bytes.
So I assume that I would see something like this:
[Byte0, Byte1, Byte2, Byte3, Byte4, Byte5, Byte6, Byte7]
Example:
But it isn't! Instead, I'm seeing this. What does this mean?
First, I know that this message is going to the ECU - Electrical Control Unit, e.g. a PLC or controller. So this message is a Receiver message.
Then we have the bytes. Here I can see that we have different bytes such as byte 1 and byte 1. What??? I cannot understand this.
So here are some questions:
Is the Byte column only the index in [Byte0, Byte1, Byte2, Byte3, Byte4, Byte5, Byte6, Byte7], and is Byte1 actually Byte0 in this indexing?
What is Bit?
How can the message TelecomConnectionStatus (Telecom Connection Status) have length 4, while TelecomNetworkTechnologyInUse (Telecom Network Technology In Use) looks like it has a greater length because it has more rows?
I don't understand this. Do you?
AI: This is what happens when incompetent non-programmers are allowed to document raw binary data...
All programmers in the world, as well as the CAN standard, as well as the J1939 standard, all enumerate both bits and bytes starting with 0. But the incompetent person who wrote this document did not. They even managed floating point notation here and there.
Now how do we know that they name the MSB bit 7 (or 8) rather than 1? Since we already established that they aren't following standard or engineering practice, we don't...
As far as I remember, J1939 also requires little endian for word-sized chunks, but whether or not that applies here, we can't know either, since the document just writes 0xFE00 etc and we can't know if this is the raw byte dump from the bus or the numeric value after little endian has been applied.
So this so-called documentation will ultimately have to be accompanied with trial & error and a live bus listener.
We can only guess that they meant to say something like this:
Given the first byte of the payload, called byte 0 by the CAN standard and by engineers (1 in the document) lets say you have something like for example 0x02.
The 2 in that byte is called bits 0-3 by engineers (maybe 1-4 in the document? we can't know). A guess is that this would represent TelecomConnectionStatus = Connected.
Then the MS nibble, 0, is called bits 4-7 by engineers (bit 5 in the document, maybe?) and represents TelecomNetworkTechnologyInUse = gsm.
You have to verify that this is so by connecting to a live bus. Then hopefully the document is consistent across multiple packages. |
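A decoding sketch under those guesses (the enum tables and bit numbering here are hypothetical and must be verified against the document and a live bus):

```python
# Hypothetical enum tables; the real values must come from the document/bus.
STATUS = {0: "notConnected", 2: "connected"}
TECH = {0: "gsm", 1: "umts"}

def decode_byte0(b: int):
    status = b & 0x0F           # low nibble, bits 0-3 (engineering numbering)
    tech = (b >> 4) & 0x0F      # high nibble, bits 4-7
    return STATUS.get(status, "?"), TECH.get(tech, "?")

print(decode_byte0(0x02))       # -> ('connected', 'gsm')
```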
H: Output power of HEMT
Reading a paper regarding a power amplifier based on HEMT technology, the following statement is made:
'The circuit was designed in WIN 0.1 μm GaAs process on a 2 mil thick
substrate,has power density of 850 mW/mm and its breakdown voltage
9V. According the process power density, a 4×25 μm device should
generate a saturation power of 85mW (19.3 dBm), so aiming for 0.5 W
(27 dBm) requires a combination of 8 devices.'
I don't understand how the saturation power (85 mW) is found from the given power density and dimensions (which I suppose are 'channel length x channel width' of the transistor).
Thanks to whoever can give me a hint.
AI: 25 μm is the channel width, but 4 is not a 4-micron channel length. The phrase 4×25 μm ostensibly refers to four "fingers" of an interdigitated device(1), having a total effective channel width of 100 μm. Of course, since that's one-tenth of 1 mm, the power density and saturation power are consistent with each other now.
(1) - an interdigitated device is one where the structure is "folded", e.g. the following simplified structure with two effective fingers: |
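The arithmetic in the paper then works out directly:

```python
import math

# 4 fingers x 25 um each -> total gate width 100 um = 0.1 mm
total_width_mm = 4 * 25.0 / 1000.0
p_density = 850.0                     # mW/mm, from the paper
p_sat_mw = p_density * total_width_mm
p_sat_dbm = 10 * math.log10(p_sat_mw)
print(f"Psat = {p_sat_mw:.0f} mW = {p_sat_dbm:.1f} dBm")   # 85 mW = 19.3 dBm
```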
H: 2-Layer PCB Through-Hole Layout/Soldering
So I'm designing a 2-layer PCB consisting of a bunch of ICs (op-amps, OTAs) and passive components. I would like to have my decoupling caps (little 100nF axial) as close to my op-amps as possible, so I've had the following thoughts.
My op-amp sockets are mounted on the front-layer of the board and will be soldered from the back layer, as per normal. My first thought was to mount the decoupling caps directly under the op-amp socket, but on the back layer (with pins poking through the front-layer), then solder them from the front layer.
But after some more thinking I don't think this will work, since the soldered pins of the caps would be sticking up directly under the op-amp socket, and might not make the header sit flush or flat on the front-layer, depending on the final soldered pin height.
Here's my solution to this. Mount the caps on the back layer and solder them from the back layer, then just pre-cut the cap pins so they don't poke through into the front layer. Two questions about this however:
Do PCB makers (JLCPCB in my case) put soldering pads on both sides of a through-hole board or just the opposite side of where a component is mounted?
Is it good practice to solder passive components on the same side they are mounted?
For reference, here's a Kicad pic of what I was thinking, in terms of mounting (TL074 on front side, caps on back side):
AI: Plated through-holes have metal not only on the far side and near side, but also on the inner edge of the hole all the way through the board. Only on single-sided boards or boards made by unconventional methods (such as milling) will you see pads that aren't plated on the inside of the hole.
Of course, this is all assuming your drill files specify the holes as plated. If you specify a hole as non-plated, then they won't plate it. This is down to how you use your CAD software, but unless you're doing things very wrong with making footprints, all holes where electrical connections are made should be plated through.
From the image you gave, it looks like you're using KiCAD; in KiCAD every through-hole pad is either a PTH pad (standard through-hole with plating on it) or an NPTH pad (which is referred to as "NPTH (mechanical)" in the pad editor, I believe), and this is a property of the pad that you select when making the footprint. All the footprints included in a standard KiCAD installation have the pads appropriately selected; PTH for all the ones that need an electrical connection or where you solder in a mounting pin, and NPTH for all the ones used solely for mechanical mounting with rivets, screws, plastic bumps, or whatever else you may need for a given package. |
H: Meaning of the terminology "referring to primary" or "referring to secondary" in the context of transformers equivalent circuits
I'm studying the transformer equivalent circuit and encountered the term "referring" to primary or secondary, and I was confused by this terminology. I tried to look for a definition but couldn't find any. Is it implying that, for example, referring to primary just means looking into the circuit from the primary coil?
AI: "Referring" is a term used in simplifying the equivalent circuit. It is sometimes useful to view the transformer load and secondary impedance as if the ideal transformer is not part of the circuit and the load and impedance of the secondary winding are multiplied by the transformer ratio and shown as part of the primary circuit. There are also times when it is convenient to show the primary impedance as part of the secondary circuit. That process is called "referring." In a way, it does mean looking into the circuit from the primary side or the secondary side, but part of that is adjusting the impedance values so that you have an "accurate view." |
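As a small numeric illustration (the turns ratio and load are made-up example values): for an ideal turns ratio \$a = N_p/N_s\$, an impedance \$Z\$ on the secondary appears as \$a^2 Z\$ when referred to the primary.

```python
# Made-up example: a 10:1 transformer with a 4+3j ohm secondary load
a = 10.0
Z_load = complex(4.0, 3.0)       # ohms, on the secondary side
Z_ref = a**2 * Z_load            # same load as seen from the primary
print(f"Z referred to primary = {Z_ref.real:.0f} + {Z_ref.imag:.0f}j ohm")
```

The factor-of-100 scaling is why a small secondary impedance can look substantial from the primary side.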
H: Op amp output limited
I'm playing around with this op amp: https://www.st.com/resource/en/datasheet/ua741.pdf. My circuit looks like this:
It is hooked up to an Arduino Uno. I used analog read on the Arduino to read voltages at 2, 3, and 6. The pot allows me to vary the voltage at 3 from 0 to 5 V.
With R1 = R2 = 10K, I should have a gain of 2. I should be able to vary the voltage at 3 from 0 to 2.5 V and the output of the op amp should go from 0 to 5 V, right? I know I probably wont get quite to 0 or 5 V but pretty close.
However, the plot below is what I measure with the Arduino. As far as I can tell, the analog read value of 0 is 0 V and 1000 is about 5 V.
So the op amp appears to work, but only in a very narrow range. It looks like it's pretty close to the upper limit, but doesn't get anywhere close to outputting 0 V.
Also, if I replace R1 with a higher resistance or R2 with a lower resistance, giving me higher gain, I get no response at all at the output. A3 remains at a constant value of around 800 no matter what the input is.
I'm new to op amps so I'm probably missing something obvious... Thanks!
AI: You can't run a 741 on a 5 V supply.
That op-amp design is > 50 years old and was designed for ±12 to 18 V supplies. The output can't swing within about 2 V of either supply rail so you should expect to see the output vary between about 2 V and 3 V. You seem to be getting a little higher.
Figure 1. Internals of the ancient 741 opamp. Source: Wikipedia.
Q14 pulls the output towards V+ but there is a drop of a volt or two across it so it can't get you to V+. Similarly Q20 pulls the output to zero when required but has a minimum of couple of volts drop across it. That didn't matter for analog circuits with a ±12 V supply as there was still plenty of swing room. With a 5 V supply you have almost none.
Buy a modern op-amp with rail-to-rail inputs and outputs. |
H: How to operate a lower power thermistor power circuit?
I would like to create a temperature circuit that connects to a wireless module (e.g. Bluetooth) to transmit the signal. My challenge is that I have a limited power supply (200 mA - 3.5V) to power both. Therefore, I would like to use as little power as possible for my sensing circuit.
Sensing circuit:
For my circuit, I would like to use a thermistor. Since thermistor resistance changes with temperature, the sensing circuit can draw more current as the resistance drops. I was thinking of adding a large resistor in series to keep the current draw low, since the voltage will be regulated.
Is that a good approach? What problems might it cause? Is there a better alternative?
AI: Using a load resistor is not only a good idea, it's a simple way to measure the thermistor with an ADC. The thermistor and load resistor form a voltage divider, so you measure the reference feed voltage, then the divider, to derive the thermistor's resistance.
A way to save power is to only power up the reference feed voltage to the divider when you’re taking the measurement. Then there’s no standby draw.
Also you can use higher-value thermistors, such as 10K ohm. Then the required current is smaller. |
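A sketch of the divider math (topology and values are assumed for illustration: regulated feed on top, load resistor in series, thermistor to ground):

```python
def thermistor_ohms(v_ref: float, v_node: float, r_load: float) -> float:
    # v_node = v_ref * R_th / (R_load + R_th)  =>  solve for R_th
    return r_load * v_node / (v_ref - v_node)

# Example: 3.3 V feed, 10k load resistor, ADC reads 1.65 V at the node
print(thermistor_ohms(3.3, 1.65, 10e3))   # -> 10000.0
```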
H: Can I load an external file into table for LT spice?
I would like to load a table (for table instead of editing the table manually) into LT spice, but it's very large (180x2 items). Is there a way that I can load a table into LT spice from an external file?
(I know I can load a PWL file from a csv into a voltage source, that isn't what I want to do, I want to do interpolation from a table)
AI: As long as the contents of the file are time-value pairs separated by commas or spaces, with or without newlines (be it Linux/Unix, Mac, or Windows), it doesn't matter what extension it has. The pairs may or may not be enclosed within parentheses. The time values must be incremental.
Examples of valid files:
1,2
3,4
5,6
...
1 2
3 4
5 6
...
1 2, 3
4, 5 6
...
1 2 3 ,4 5 , 6 ...
(1 2) (3, 4),(+0.1 -20) ...
All these will work. The spaces and commas are mostly as delimiters, their position doesn't seem to matter much (there may be exceptions, some caution applies). Don't forget that if there are sharp edges, instead of writing (e.g.) 12m 3 12.000001m 4 it's easier to use the relative increment 12m 3 +1n 4. The minus (-) is also available, but only for the values. As far as I know, there is no limit to the file size except your memory (I've worked with tens of thousands of pairs, it was relatively slow, but it worked). Also, it's not limited to PWL(), FREQ() triplets are also welcome.
table() can also be used, but a different approach is needed. By itself, it has no option to load external data, but it can be used as a SPICE netlist:
B1 out 0 v=table( ; or VCVS, VCCS, etc
+ <data-data pairs> ; '+' is needed to concatenate the lines
+ ...
)
This can be placed in a new file, then included in the schematic with .inc /path/to/some.file, or the extra lines can be appended to the already existent file with data. The advantage is that table() is not restricted to strictly increasing increments for the first elements in the data pairs. The disadvantage is the more cumbersome way of loading it. table(), too, has only linear interpolation. |
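If the table lives elsewhere (a spreadsheet, a script), a few lines of Python can emit a file in the accepted format (the data and filename here are just examples):

```python
# Hypothetical data; time values must be strictly increasing.
pairs = [(0.0, 0.0), (1e-3, 1.0), (2e-3, 0.5)]

with open("table.pwl", "w") as f:
    for t, v in pairs:
        f.write(f"{t} {v}\n")   # "time value" per line, space-delimited
```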
H: Where to start on creating radio circuits
I'd like to interface with a legacy system that uses BPSK at ~5MHz to transmit data. I understand the basic idea of BPSK - that it's a 180 degree phase shift to represent data - but I'm having a hard time finding examples that take an Arduino or something and create a BPSK radio.
Most of the documentation I find online shows a basic block diagram like this:
I understand we're mixing sine phases based on bits from a processor, how would I implement this with OpAmps?
AI: Get a copy of the "ARRL Handbook 2021" from your local library, it is a Six-Volume Book Set. It should show what you are trying to decode but you will have to interpret it as I do not believe it shows a connection to an arduino. This link has a lot more information: https://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-91.pdf Here is a link they did it with an Arduino: http://www.kk5jy.net/psk-modem-v1/ I used this "decoding BPSK data with arduino" as a search term and found lots of data. It has been a long time since I have done anything with BPSK, I hope this answers your question. |
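Before reaching for op-amps, it can help to see how little BPSK modulation itself is: the carrier simply flips phase by 180 degrees per data bit. A sketch with made-up rates (not the legacy system's actual parameters):

```python
import math

fc, fs, bit_rate = 5e6, 50e6, 1e6    # carrier, sample rate, bit rate (assumed)
bits = [1, 0, 1, 1]
samples_per_bit = int(fs / bit_rate)

signal = []
for n, bit in enumerate(bits):
    phase = 0.0 if bit else math.pi   # 180-degree shift encodes the bit
    for k in range(samples_per_bit):
        t = (n * samples_per_bit + k) / fs
        signal.append(math.cos(2 * math.pi * fc * t + phase))

print(len(signal))   # 4 bits x 50 samples/bit = 200 samples
```

In analog hardware the same thing is done by switching the carrier between a unity and an inverting path, as in the block diagram of the question.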
H: Weird hysteresis behavior
I am an EE student and recently we worked with Schmitt triggers, positive feedback loops, and hysteresis. I was curious and hooked up an MCP601 without a feedback loop as such:
In the above schematic the pulse generator is behaving as a triangle generator. This is labeled Vin on the oscilloscope. Vout on the oscilloscope is the output voltage of the op-amp.
I was surprised to find that I was able to measure hysteresis even though there was no feedback loop:
Couple of questions here:
Is this due to the internal circuitry or is there a measurement error on my part?
If this is due to the behaviour of the op amp - is it an intended use?
AI: Your opamp stays clipped on one rail, then swings into clipping to the other rail. Most likely what you're seeing is its clipping behavior.
Ideal clipping behavior would be like this:
A real amplifier will never be this pretty. Here's an example of sticky clipping:
So it takes a while for any opamp to come out of clipping and return to linear behavior. On your scope shot this looks like hysteresis, but I bet that if you change the frequency of the triangle wave, you will notice that what you think is the "hysteresis voltage" changes, because it is actually a delay between the input crossing zero and the output reacting.
The cause of this delay is pretty simple, but difficult to eliminate. Here's a newbie CMOS opamp.
When it clips, FET T5 will be either fully on, or fully off, so its Vgs will be either 1) way above its normal operating point, or 2) close to zero. To come out of clipping Vgs has to be brought back to its operating point, but this takes a while because the FET gate is a capacitor and the only current available to charge or discharge it is the small output current delivered by the input stage. |
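A toy model makes the frequency dependence easy to see: assume the op-amp needs a fixed recovery time to come out of clipping, so the output reacts that long after the input crosses the threshold. The apparent "hysteresis" is then just how far the triangle input moves during that delay. All numbers below are invented for illustration, not measured from the MCP601:

```python
# Toy model: a fixed come-out-of-clipping delay t_delay looks like hysteresis
# whose width scales with the input slew rate (i.e. with triangle frequency).
# The 2 us delay and 2 Vpp amplitude are illustrative assumptions.

def apparent_hysteresis(vpp, freq_hz, t_delay):
    slew = 2 * vpp * freq_hz      # V/s slope of a symmetric triangle wave
    return slew * t_delay         # volts the input moves during the delay

t_d = 2e-6                        # assumed recovery time
h_1k  = apparent_hysteresis(2.0, 1e3,  t_d)
h_10k = apparent_hysteresis(2.0, 10e3, t_d)
print(h_1k, h_10k)                # the "hysteresis" grows 10x with frequency
```

Real hysteresis from positive feedback would not change with frequency; this delay-induced imitation does.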
H: Wilkinson Power Combiner Feasibility
I'm trying to design a low-power passive communication system. It receives its power wirelessly from a nearby transmitter and uses it to power a sensor, which digitally modulates a signal via backscattering. The power calculated at the antenna @ 915 MHz, with Tx and Rx gain around 9 dBi, was roughly 1.5 W, or 1.76 dBW. FCC regulations limit the directed power to 4 W maximum in the ISM band.
I was considering using multiple receive antennas and using a Wilkinson Power splitter as a combiner to increase the power received, however, I know the theoretical power loss is 3 dB, which is more than what I receive.
My question is can I use the Wilkinson Power combiner in low power environments such as my example? Is there an alternate way to increase the power received without altering the antenna design?
AI: You are confusing absolute power (dBW) with gain (dB). Your received power is +1.76 dBW. A Wilkinson has 3 dB insertion loss, so your signal would drop to -1.24 dBW (1.76 dBW - 3 dB).
When using a Wilkinson as a combiner, if both your signals are coherent (same magnitude and phase) you will actually get a 3 dB boost in received power. If you're receiving two +1.76 dBW signals, you'd see +4.76 dBW at the combined output of the Wilkinson.
However, this only applies if the signals are in phase. If they are completely out of phase, they would cancel and you'd receive 0 W (-inf dBW). |
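The unit bookkeeping can be checked numerically. This is an illustrative sketch using the 1.5 W figure from the question:

```python
import math

def watts_to_dbw(p_watts):
    """Absolute power in dBW (decibels relative to 1 W)."""
    return 10 * math.log10(p_watts)

p_rx = watts_to_dbw(1.5)          # ~ +1.76 dBW received per antenna
after_loss = p_rx - 3             # a Wilkinson used alone: ~3 dB insertion loss
coherent_sum = p_rx + 3           # two equal in-phase inputs combine: +3 dB
print(p_rx, after_loss, coherent_sum)
```

The key point: dB values add to dBW values, but dBW never adds to dBW directly; you would convert back to watts first.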
H: Why the pinout on the Raspberry Pi Pico development board, are labeled under the board?
It may seem a stupid question, because it is common for many careless manufacturers to print the pin markings under a development board; but not for a company which is known for its user friendly designs and a big audience in maker and hobbyist communities.
When installed on a breadboard (which is the intended use for dev boards in this form factor), the labels are underneath the board. Without a pinout card or picture at hand, we have to remove the board, find a pin, and put it back on the breadboard. Even for permanent use (soldering it onto a motherboard), it would make troubleshooting harder.
The PCB itself doesn't seem very tight. A little reordering of graphics or parts would make it possible:
Is there any specific reason for that? Or is it just because the designer didn't pay much attention to details?
AI: It’s a good design question.
Rookie designer maybe.
The only thing one could move a lot is the button, but I guess they wanted it close to the USB port.
Notice how far away L is from Data+- and the trace width for low impedance. That’s important.
Look how close the DCDC converter is to the input. That too is important not to share ground noise.
They could have gone to an under-sized font for the silkscreen print, but they'd risk smearing or gaps.
I would have gone for thinner Data tracks (lower L/C) with thin guard grounds on either side to make room for screened pin labels. Then the dielectric gap might have to decrease to raise C and match the increased L to maintain L/C ratio, which compromises stiffness. It’s possible with fine line coplanar capacitance to raise C but not as much as gap to parallel bottom side gnd.
So an extra mm or more to fit the screen print is possible for both sides and the logo is definitely oversized.
What would you do differently and maintain CMRR and trace impedance?
Also, a lot of users have longish, mismatched cables with ringing; with the 25 Ohm driver impedance, they could have put shorted pads for 0603 series resistors, so a cut-and-place 220 to 300 Ohm resistor could match the high cable impedance and reduce ringing.
H: Stackable PCB Sockets/Headers
I have two PCBs that I would like to connect/sandwich together via a 1x5 header on the bottom board and a 1x5 socket on the top board.
The components I have so far seem to work nicely, the only issue is the spacing between the boards. Due to some larger components on the bottom board, I need the spacing of the boards to be greater than 20mm.
I've seen a few board-to-board connectors out there, but they require you to solder both ends to each board, not ideal in my case since the boards need to be removable from one another.
What I need is some sort of "stackable spacer" header/socket combo, something that I can put between the header on the bottom board and socket on the top board.
Does anyone know if there is a product like this? I have thought of just using another socket like the one mounted on the top board, but after referencing the datasheet, the socket pins are too small to fit into the socket receptacles (by 0.1mm!)
AI: I see some 19mm headers , maybe you can find 25mm ones
Then for structural rigidity put a dummy header on the other side. Maybe you can find a lower profile part. Make sure you apply polyurethane to the body of the big part to avoid vibration fatigue to the solder joints.
https://www.mouser.com/datasheet/2/527/mtmm_th-1370249.pdf |
H: Can I run a VFD off a generator?
I want to power a three-phase 5hp well pump with a VFD. I want the VFD both to serve as a soft start, and to allow variable speed operation of the motor.
Two different pump salesmen have told me that running a VFD off a generator might hurt the VFD. Is this so? And if so, what is the mechanism by which generator power might hurt a VFD, but grid power does not?
EDIT: If the problem is variation in the generator's frequency or voltage, to what extent can that be mitigated by using an inverter-generator?
AI: Two different pump salesmen have told me that running a VFD off a generator might hurt the VFD.
It's not that simple. But I can understand why the salesman says it.
It often causes problems because the VFD is too big for the generator, is misconfigured, or is the only load on the generator.
This can cause severe harmonics, or instability (governor and voltage regulator) if the VFD is not running simple UF mode.
Sometimes the VFD caps blow up. Sometimes it errors mains overvoltage/undervoltage. Sometimes other components blow up. Potential for damage is high.
So, when you do this, make sure you pay attention to:
Sizing. Try to get the generator 2 times larger than the total VFD load. This is a simple rule of thumb; more specific calculations are possible taking the subtransient reactance into account. Talk to your generator supplier. Ask them for numbers.
Drive switching frequency. As high as possible considering drive derating.
Drive control algorithm. Simple V/F mode, no vector control algorithms.
Drive ramp times. No sudden speed changes, the engine is slow and can't give you instant kilowatts.
Use a braking resistor for braking, not 4-quadrant operation.
There is no cost-effective filtering available to correct a harmonics problem here. That's why the salesman just says you can't: it could become an expensive job, with lots of finger-pointing. I've seen it many times.
The frequencies you want to filter are 3rd to 20th harmonics, where normal mains filters work beyond the 200th harmonic.
I wouldn't even consider using a VFD on an inverter generator. Those are so small, recipe for disaster. |
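The sizing rule of thumb above can be turned into a quick numeric check. This is only a sketch; the drive efficiency and input power factor below are assumed typical values, not figures from the answer:

```python
# Rough generator sizing for a VFD load, using the "2x the VFD load" rule
# of thumb. Efficiency and input power factor are assumed example values.

def min_generator_kva(motor_hp, drive_efficiency=0.95, input_pf=0.9, margin=2.0):
    motor_kw = motor_hp * 0.746                      # shaft power in kW
    input_kva = motor_kw / (drive_efficiency * input_pf)
    return margin * input_kva

print(round(min_generator_kva(5), 1))                # 5 hp pump -> roughly 8.7 kVA
```

Treat this as a lower bound for the conversation with the generator supplier, not a substitute for the subtransient-reactance calculation.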
H: Small AC-DC conversion/amplification
So essentially I'm looking for the most efficient and effective way to take a 3 V, 50 Hz AC signal and convert it to an 11-33 V DC signal (any voltage between 11 and 33 V works as an input to the embedded PC). I have looked into a couple of different options, such as a bridge rectifier with a cap for small ripple feeding an amplifier, or an open-loop comparator with its rails set in the 11-33 V range (positive and negative) doing square-wave rectification (no cap, so no ripple), or even setting the high and low output voltages of the comparator to, say, 15 V low and 25 V high, both of which register as high for the embedded PC's digital input. (Not sure if this last one will work; I haven't played with op-amp comparators and I'm not sure the 'low' voltage can still be a positive DC value.)
AI: I think the premise that the output is 3V AC is incorrect. But looking at the datasheet I can see where the assumption is coming from:
Source: Datasheet
Notice the output configuration is a three-wire connection with a SPST solid-state ac switch. This might be easier to understand looking at the wiring diagram, also provided in the datasheet:
Wire 1 and 3 are for the supply voltage of 20V-250V ac. The output signal of the sensor is basically the state of the switch either being ON or OFF. This output can be used with wire 4 connected to the load or in other words, your circuit.
The most important rating is the maximum output current of 300mA. I believe the "ON-state saturation voltage" of 3V that is mentioned at 300mA is just the voltage drop across the switch.
So in practice, the output voltage is approximately equal to the supply voltage in ON-state. Depending on the supply voltage, you could just use a regular rectifier for your PC input then. |
H: Why shouldn't I use a Zobel network to improve the power factor of an electric motor?
This is a homework question, so I would prefer something that would make me think and maybe promote discussion.
I understand the use cases for a Zobel network in an audio setup, but I struggle to see the connection to electric motors.
My guess is that we use Zobel networks to improve our power factor, but sometimes we wouldn't want to do that for an electric motor on purpose. If improving the power factor for a motor means improving its horsepower, I can understand how we would want to have some control over the horsepower of a motor in some applications.
If my household is set to 110V 60Hz, I am limited to how I want to control the horsepower of my ceiling fans, furnace fan, bathroom fans, etc.
But, maybe I am getting this all wrong. Why would I not want to use a Zobel network to improve the power factor of an electric motor?
AI: and maybe promote discussion
That's something we don't encourage on this site. I'll try and stick to answering your questions.
I understand the use cases for a Zobel network in an audio setup, but
I struggle to see the connection to electric motors.
Zobel networks waste power but in the grand scheme of things, for audio amplifiers, it's not a big deal and anyway, Zobel networks are largely a thing of the past in modern amplifiers. Modern amplifiers cope without a Zobel network.
Because they waste power, if they were applied to electric motors then you'd have two problems; the first is that the resistor in the Zobel network would be wasting as much power as is delivered to the motor AND, because a motor has quite a variable mechanical load, you would need to change the resistor in the Zobel network every time the mechanical load changes. Of course, that pretty much rules them out as being effective unless the loading is constant.
So, we use power factor correction instead and, attempt to select a parallel capacitor value that roughly makes the apparent power delivery to the motor as close to the real power delivery as possible. Because of this, power factor correction equipment has the ability to alter the capacitance value. It's also simpler with most AC motor applications because the frequency is constant thus, power factor correction is the preferred choice rather than a wasteful Zobel network.
My guess is that we use Zobel networks to improve our power factor
The wasteful Zobel network works effectively at controlling power factor across a wide range of amplifier frequencies but, its object is to make the loading on the amplifier (the speaker) appear more resistive across the spectrum. Hence it does do some power factor correction but, at the expense of high losses. Basically, old amplifiers used it to prevent loading effects causing sustained instability (oscillation due to negative feedback going wrong on inductive loads).
If improving the power factor for a motor means improving its
horsepower, I can understand how we would want to have some control
over the horsepower of a motor in some applications.
It doesn't improve the horsepower; it means that there is less reactive power delivered through the cabling network and \$I^2R\$ losses are reduced. That's why PF correction is employed. |
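For comparison, sizing the parallel power-factor-correction capacitor mentioned above is a straightforward calculation. This is a sketch; the motor power, voltages, and power factors are invented for illustration:

```python
import math

# Size a parallel PFC capacitor: it must supply the difference in reactive
# power between the motor's current PF and the target PF, so the source sees
# a more resistive load. All numbers below are illustrative examples.

def pfc_capacitance(p_watts, pf_now, pf_target, v_rms, freq_hz):
    q_now = p_watts * math.tan(math.acos(pf_now))        # VAr drawn now
    q_target = p_watts * math.tan(math.acos(pf_target))  # VAr at target PF
    q_cap = q_now - q_target                             # VAr the cap supplies
    return q_cap / (2 * math.pi * freq_hz * v_rms ** 2)

c = pfc_capacitance(750, 0.7, 0.95, 230, 50)             # ~ tens of microfarads
print(round(c * 1e6, 1), "uF")
```

Note that unlike a Zobel resistor, the capacitor stores and returns energy rather than dissipating it, which is exactly why it's the preferred fix at a fixed mains frequency.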
H: Could someone explain to me a simple opamp problem, please?
Could anyone please explain why a simple voltage follower misbehaves in a certain configuration:
Opamp LM358 configured as voltage follower.
Working case:
Vcc 12V.
+IN: pulled up to 5V with 10k resistor.
OUT: steady 5V, perfect!
Problem case:
The same opamp.
Vcc 12V.
+IN: pin pulled up to 5V with 10k resistor AND A DIODE connected in series,
+IN: pin measured voltage - stable 5V.
OUT: oscillates between Vcc and nearly 5 V at about 1 kHz. WHY?
AI: The diode can be reverse biased, which blocks the base current of the input transistor stage of the opamp. The needed current can be small, probably only tens of nanoamperes, but it must have a conducting path. A 10 kOhm resistor is OK, but a reverse-biased diode isn't.
This TI datasheet image shows that the input current flows internally from +Vcc. There must be a conductive path from both inputs to ground, to the negative supply, or to a signal source whose voltage is, say, 2 V less than +Vcc.
BTW, even the voltmeter can be conductive enough to at least stop the oscillations. Check the output when there's a voltmeter between the input and GND.
H: AG9205S PoE PD Module and Raspberry PI
I have in my hand a AG9205s PoE PD module (output 5V and up to 13W) and I want to use it with my Raspberry PI 3B+ (that exposes the center core tap wires to the J14 pin header, as you can check here in the schematic)
Now comes my question: the AG9205s module shows that the external bridge rectifiers are needed as external part of the PoE PD module, as you can see here:
But at page 4 of the Datasheet, it says that the output of bridge rectifiers are connected to the respective internal pins, as shown below:
I'm a little bit confused... I've tried to connect the AG9205s module to the Raspberry Pi and it works without any issue!
What am I missing? Are those bridge rectifiers inside the AG9205s module? Are they inside the LAN connector of the Raspberry?
Do I need to add those rectifiers?
AI: What am I missing? Are those bridge rectifiers inside the AG9205s
module?
No, they are not inside - they are required in case the supply polarity coming down the ethernet line reverses or is reversed. You basically just got lucky when you tested it and happened to connect positive to the correct pin (1 or 3). |
H: How to implement control of the bidirectional buck-boost dc/dc converter in software?
I am at the beginning of the development process of the control software for this DC/DC converter, i.e. a bidirectional buck-boost converter
I have studied the available literature and I have found that there are four possible operational modes of this converter
energy transferred from \$V_1\$ to \$V_2\$ in buck mode (in this mode: \$S_1\$ controlled by pwm, \$S_2\$ off, \$S_3\$ off, \$S_4\$ off)
energy transferred from \$V_1\$ to \$V_2\$ in boost mode (in this mode: \$S_1\$ on, \$S_2\$ off, \$S_3\$ off, \$S_4\$ controlled by pwm)
energy transferred from \$V_2\$ to \$V_1\$ in boost mode (in this mode: \$S_1\$ off, \$S_2\$ controlled by pwm, \$S_3\$ on, \$S_4\$ off)
energy transferred from \$V_2\$ to \$V_1\$ in buck mode (in this mode: \$S_1\$ off, \$S_2\$ off, \$S_3\$ controlled by pwm, \$S_4\$ off)
As far as the control strategy in most cases the cascaded control loops (outer voltage loop and inner current loop) have been mentioned in the literature
Let's say I will use this control strategy. A couple of questions arise in my head
How to determine whether the energy shall be transferred from \$V_1\$ to \$V_2\$ or from \$V_2\$ to \$V_1\$?
Can I use the same control structure mentioned above for both the directions of the energy
transfer and only change the controlled variable according to the direction of the energy
transfer i.e.
energy transferred from \$V_1\$ to \$V_2\$, then controlled variable is \$V = V_2\$ with reference value \$V^* = V^*_2\$
energy transferred from \$V_2\$ to \$V_1\$, then controlled variable is \$V = V_1\$ with reference value \$V^* = V^*_1\$
Let's say I somehow determine the direction in which the energy shall be transferred; then
I need to determine whether the buck or boost mode shall be used. Can the control software make this decision based only on the measurements of \$V_1\$ and \$V_2\$ and their relationship, i.e.
energy transferred from \$V_1\$ to \$V_2\$ and \$V_1 > V_2\$, then buck mode
energy transferred from \$V_1\$ to \$V_2\$ and \$V_1 < V_2\$, then boost mode
energy transferred from \$V_2\$ to \$V_1\$ and \$V_2 > V_1\$, then buck mode
energy transferred from \$V_2\$ to \$V_1\$ and \$V_2 < V_1\$, then boost mode?
According to the summary of the operational modes mentioned above, each mode contains
several transistors which are permanently off. It means that the control software has to mask
the pwm outputs for those transistors. Is it possible to do this based on the following table
energy transferred from \$V_1\$ to \$V_2\$ in buck mode, then mask pwm outputs for \$S_2, S_3, S_4\$
energy transferred from \$V_1\$ to \$V_2\$ in boost mode, then mask pwm outputs for \$S_2, S_3\$
energy transferred from \$V_2\$ to \$V_1\$ in boost mode, then mask pwm outputs for \$S_1, S_4\$
energy transferred from \$V_2\$ to \$V_1\$ in buck mode, then mask pwm outputs for \$S_1, S_2, S_4\$?
Thanks in advance for clarification.
AI: How to determine whether the energy shall be transferred from V1 to V2 or from V2 to V1?
Well that's your choice, it depends how you want to use it.
Suppose V1 is a 48V rail powering equipment and V2 is a 12S lithium battery, so 36-50V.
You could decide on an algo like: If mains power is present and the 48V power supply is running, use the buck-boost in CC/CV output mode to charge the battery. If mains power is lost, then use the buck-boost in the other direction to power the 48V rail from the battery.
Can I use the same control structure mentioned above for both the directions of the energy transfer
Yes, it is symmetrical.
and only change the controlled variable according to the direction of the energy transfer i.e. energy transfered from V1 to V2, then controlled variable is V=V2
A lot of use cases need both current and voltage to be controlled. For the battery example above, when it is charging the battery, it would work as a voltage-limited current source, so the inner current loop would need to be set to constant charge current suitable for the battery, and the outer voltage loop to max charging voltage. In the other direction, using the battery as a power source, it would regulate voltage on the output, but the current loop would still have to be set to respect the maximum allowed battery discharge current, for example.
Let's say I somehow determine the direction in which the energy shall be transferred; then I need to determine whether the buck or boost mode shall be used. Can the control software make this decision based only on the measurements of V1 and V2 and their relationship i.e.
Nope ;)
If it transfers energy from V1 to V2 and V2 is close to V1 but just a bit lower, then you might think it is going to run in buck mode. But there will be losses in the converter, so you may actually have to use a little bit of boost mode even if output voltage is lower than input voltage, just to compensate for losses. And there will also be minimum and maximum on-time and off-time, can't have a 99.9999% duty cycle for example.
According to the summary of the operational modes mentioned above each mode contains several transistors which are permanently off.
Yes, but no, if you use bootstrapped drivers for your power devices, you have to switch them once in a while to recharge the bootstrap cap. The bottom FETs can stay on continuously, but not the top FETs.
All these constraints means "it's not that simple" as a Vin<Vout test.
It will also have to switch between continuous PWM mode and various discontinuous power-saving modes like cycle-skipping at light load.
I recommend reading the LTC3780 datasheet especially page 14. |
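To make the "it's not that simple" point concrete: practical controllers use a transition band around Vin ≈ Vout rather than a bare comparison, so losses and the min/max duty-cycle limits are covered by a buck-boost region where both legs switch. A rough sketch (the 10% band is an invented example value, not from the LTC3780):

```python
# Sketch of a buck/boost mode selector with a transition band. Near a 1:1
# conversion ratio neither pure buck nor pure boost can hold regulation
# (losses, min on/off times), so both legs must switch. Band is illustrative.

def select_mode(v_in, v_out, band=0.10):
    if v_out < v_in * (1 - band):
        return "buck"          # comfortably stepping down
    if v_out > v_in * (1 + band):
        return "boost"         # comfortably stepping up
    return "buck-boost"        # near 1:1 - a naive Vin<Vout test fails here

print(select_mode(48, 36))     # buck
print(select_mode(48, 50))     # buck-boost, even though 50 > 48
```

A real implementation would also add hysteresis on the band edges to avoid chattering between modes, plus the periodic switching needed to refresh bootstrap capacitors.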
H: How to build large multiplexers using SystemVerilog?
Note: This question about multiplexers
is similar to but not identical to this question which is about demultiplexers.
I want to build a fully parameterisable multiplexer in SystemVerilog. So far, I know how to build multiplexers of variable widths like this:
module scale_mux #(parameter WIDTH = 1) (
input logic sel_a,
logic [WIDTH-1:0] in_a,
in_b,
output logic [WIDTH-1:0] out);
always_comb
unique case (sel_a)
1'b1 : out = in_a;
1'b0 : out = in_b;
default: out = 1'bx;
endcase
endmodule
I also know how to extend this for 4:1 and 8:1 multiplexers. But the number of case statements for large multiplexers will be a lot.
I'd like to build a multiplexer in which the number of inputs is variable. I'm not sure how I can extend this case statement for the n input case, do you have any ideas?
AI: module generic_mux #(parameter WIDTH = 1,
parameter NUMBER = 2,
localparam SELECT_W = $clog2(NUMBER))
(input logic [SELECT_W-1:0] sel,
input logic [WIDTH-1:0] mux_in [NUMBER-1:0],
output logic [WIDTH-1:0] out);
assign out = mux_in[sel];
endmodule
I indexed the unpacked input array with the select lines to route the chosen input to the output. The select line width is the ceiling of the base-2 logarithm of the number of inputs ($clog2). The select line width should be declared as localparam so that it won't be overridden.
H: How to build large demultiplexers using SystemVerilog?
Note: This question about demultiplexers is similar to but not identical to this question which is about multiplexers.
I want to build a fully parameterisable demultiplexer in SystemVerilog. So far, I know how to build demultiplexers of variable widths like this:
module scale_demux #(parameter WIDTH = 1) (
input logic sel_a,
logic [WIDTH-1:0] in,
output logic [WIDTH-1:0] out_a, out_b);
always_comb begin
out_a = '0; // drive both outputs in every branch to avoid inferred latches
out_b = '0;
unique case (sel_a)
1'b1 : out_a = in;
1'b0 : out_b = in;
default: {out_a, out_b} = 'x;
endcase
end
endmodule
I also know how to extend this for 1:4 and 1:8 demultiplexers. But the number of case statements for large demultiplexers will be a lot.
I'd like to build a demultiplexer in which the number of outputs is variable. I'm not sure how I can extend the case statement for the n output case, do you have any ideas?
AI: module generic_demux #(parameter WIDTH = 1,
parameter NUMBER = 2,
localparam SELECT_W = $clog2(NUMBER))
(input logic [SELECT_W-1:0] sel,
input logic [WIDTH-1:0] mux_in,
output logic [WIDTH-1:0] out [NUMBER-1:0]);
assign out[sel] = mux_in;
endmodule
I used an indexed assignment to select the output to which the input has to be routed. The select line width is the ceiling of the base-2 logarithm of the number of outputs ($clog2). The select line width should be declared as localparam so that it won't be overridden.
H: 5V Powerbank not turning on automatically when > 100mA are drawn
In this project, I power cycle a RaspberryPi Zero over a 5V USB powerbank, please have a look at my previous question:
Blocking current from Backup battery (3x1.5V=4.5V) when powerbank (5V) is turned on my timer switch
I use the TPl5110 as a timer switch and 2xAA as backup battery.
My circuit looks as follows:
The power bank is a no-name 10,000 mAh, 5 V unit. EDIT: I've checked with the Varta consumer service and, according to the datasheets of most power banks, they require a minimum current of 50 mA -> this question is more universal than it initially looked.
The P mosfet is a NDP6020 with a low Vth of max 2V
The Diode is standard silicon 1N4001
According to the manufacturer, the powerbank needs a minimum of 50mA to be turned on/stay on.
I've set the potentiometer of the timer switch to 5min and I've verified it is working.
The problem is: after the first cycle, the RPi Zero won't power up anymore, since (for the 2nd and subsequent cycles) 0 mA of current (measured with a multimeter) is drawn from the power bank when the switch kicks in again. Clarification note, as requested: in the first power cycle, 130 mA is drawn (if I turn on the power bank manually by pressing the on switch).
Can you see a flaw in this simple circuit that prevents current to be drawn from the powerbank on the power cycling events?
Please see the overall setup here:
AI: The power banks I've got don't deliver power until you press the button.
Press the button, and as long as the current stays above the minimum the power bank stays on.
If the drawn current drops below the minimum then the power bank shuts off and you have to press the button again.
In other words, measure the voltage at the power bank output. I expect you'll find it to be zero when the Pi fails to power up. If the power bank isn't putting out any voltage, then you won't be able to draw any current from it.
This is why one project I am working on bypasses the 5V output and taps the cell in the power bank directly. To turn the power bank on again, I'd have to open the case and add something to "push the button" to get power on again. That means an extra battery (like you have,) modifications to the power bank, and wires coming out of the power bank.
I found it simpler to just go ahead and bypass the 5 volt output and use the lithium cell directly with my own external regulator.
Make sure you connect to the correct side of the low voltage protection circuit for the cell (in the power bank) if you go this route. |
H: 8Mbps power line communication with PLC \ Homeplug
I'm trying to design a circuit to transmit data over the power lines. The data I'm trying to send is a real-time FFT of about 100k samples per second; for 32-bit samples this comes to about 3.2 Mbit per second.
I have found various ICs that implement a PLC communication front end, but the maximum speed they can deliver is about 500 kbaud. Now, I know there are various Ethernet power-line communication modules available commercially, so my question is: how is this implemented, if not with a dedicated IC?
I was unable to find any specific information about this on the web. Have not searched for books yet though. Do they use specialized ICs not available to the general public?
AI: The problem is the other way around: most of the general public doesn't want to buy HomePlug-compatible modem ICs, so most of the ICs are restricted to manufacturers like Broadcom/Qualcomm, who mostly deal with higher-volume customers (like the BCM30321).
Look for HomePlug ICs and IEEE 1901, which is the standard. HomePlug (the non-Green version) goes to 200 Mbps. (Also, the 200 Mbps assumes a good pathway, which is not always the case with home/commercial wiring, and YMMV, as this rate is negotiated and depends on the noise on the lines and the distance between devices.)
There are also some non-monolithic IC options here
To actually implement a product around one of these ICs would probably take 2-3 people; just because of the time and effort, it would be a lot to take on.
H: RC circuit with a constant current source
I am trying to simulate a series RC circuit excited with a constant DC current source in LTspice. The expectation is that it should linearly charge the capacitor with a constant current flowing through it; however, what I am seeing in the simulation window is always a zero or nearly zero (femtoampere) current through the capacitor. Can you please help me understand where I am going wrong? Attaching images of my simulation.
P.S. I am using basic Capacitor in ltspice, no specific model from it's library.
AI: LTspice calculates the DC operating point before starting the transient simulation. If you check the capacitor voltage at the beginning of the simulation you'll probably find that it is 1GV. The capacitor voltage after a very long time would, in theory, increase without limit with a 1mA current source feeding into it and no parallel resistance, so the DC operating point makes no sense in this case.
If you name the node at the top of the capacitor Vcap and add an initial condition:
.ic V(Vcap) = 0 as a spice directive you'll get a more sensible answer.
You can also set the initial condition for currents in inductors, which is useful in the analogous situation where a voltage source is connected across an inductor (though LTspice tries to save you from this by inserting a hidden default resistance of 1m\$\Omega\$ in series with the pure inductance). But if you start off with a 1V source across an inductor you'll still end up with a constant 1000A flowing rather than the linear increase of current with time you might have been expecting. |
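With the initial condition set, the expected ramp is easy to sanity-check against V(t) = V0 + I·t/C. The values below mirror the 1 mA source in the question; the capacitance is an assumed example:

```python
# Constant current into an ideal capacitor gives a linear voltage ramp:
# V(t) = V0 + I * t / C. The 1 uF value is an illustrative assumption.

def cap_voltage(i_amps, t_seconds, c_farads, v0=0.0):
    return v0 + i_amps * t_seconds / c_farads

print(cap_voltage(1e-3, 1e-3, 1e-6))   # 1 mA for 1 ms into 1 uF -> 1.0 V
```

This is also why the DC operating point diverges: as t grows without bound, so does V, so there is no steady state for the solver to find.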
H: How Do These Kind of Power Switches/Systems Work?
Just about all power switches in DIY electronic projects, such as Arduino, simply cut off the power source immediately when you turn them off. However, there are devices like 3D printers where, when you push the power button to turn them off, they perform a few last tasks before the power actually cuts off. These tasks include running the motors to specific positions and writing a goodbye message on the display. How do these types of power-down buttons/systems work? I don't know the exact name for these types of power buttons either. If someone could explain what they are and how they work, that'd be greatly appreciated!
AI: As others said, such a button does not directly cut the main power of the system (like a 3D printer); it is not connected to the main power at all. It is connected to a microcontroller that senses the push of the button. Like this:
simulate this circuit – Schematic created using CircuitLab
SW1 is the switch you press. The press is seen by an MCU (microcontroller), and the MCU turns the M1 mosfet off (by outputting 0V to the Q1 transistor), so the "System to be powered" turns off.
A "disadvantage" of this system is that your MCU needs always power to sense when you will press the SW1 to turn off the system, so a little bit of energy is wasted on that MCU. |
H: What is the reason why USB plugs connect power before data
I am designing USB unplugging/plugging hardware and I wanted to know why USB plugs are designed such that the power leads make a connection before the data. Is it purely to ensure that data is not able to connect at the same time and/or before the power is connected?
Because I'm not actually physically unplugging the connector, is the pin offset connection delay purely because of mechanical issues, or is it ok if they connect at the same time because there are no mechanical interactions happening with my hardware (it's all done through ICs/FETs). My current version (without a delay) works, but I'm really wondering if it would be better (more in spec) were I to implement a delay when reconnecting power and data.
AI: It's actually more extensive than that. The USB shield connection makes contact before any of the pins. After this the power pins connect, followed by the data pins.
The reason for this is that USB supports hot-plugging. This presents a number of issues to the electrical circuitry. The first is that there may be a large electrostatic potential on the device being connected and if this discharges, even through the power pins, it could damage both devices. This is why the shield connects first.
After this the power pins connect and this is done to ensure the device powers up first before voltage is presented at the data pins. Otherwise it is possible for current to leak back through the data pin input structure, which could damage the hardware and can cause unreliable startups.
If you are doing everything electrically then none of this is a problem and you should be fine to just connect everything at the same time. |
H: Should I add a resistor between _every_ WS2812b pixel?
I have some strings of WS2812b 'Neopixel' LEDs. Some are individual pixels separated by short lengths of cable; others are clusters of six or twelve pixels separated by slightly longer cables. The cable will be fairly chunky (up to 1.5mm²).
Adafruit's best practices for their Neopixel LED strings say:
Place a 300 to 500 Ohm resistor between the Arduino data output pin and the input to the first NeoPixel.
And this question gives some insight into why this is valuable. My question is: is it a Good Idea, Bad Idea, or an It Depends Idea, to include a resistor between the pixels, or at the start of each pixel cluster? Does it depend on how long/thick the connecting cable is?
AI: These things were specifically designed to be daisy-chainable, so I doubt it makes anything worse, and it won't make anything better.
The series resistor serves different purposes depending on whom you ask, and a lot of the answers you get are demonstrably false. That simply happens because it's a very popular component with hobbyists, so there are very many people writing about it. Even experienced engineers are sometimes wrong, and you're drawing many samples from a random distribution of answer qualities, so you'll get some bad ones.
So, the WS2812 does not need a resistor on its input. Full stop. The datasheet even has an exemplary schematic without one. It's still a good idea to have one between your 5V data source and your first WS2812, simply because:
These things are often used in a long chain, which can very quickly change very sharply in current draw.
Sharp increases in current draw often result in a significant drop in supply voltage as fed e.g. from a separate power supply, or a capacitor close to the chain. Similarly, sudden dips in current draw can lead to surges.
Now you have a voltage difference between the first WS2812 and the supply voltage of whatever you use to feed bits into it. This can, and has in the past, led to undesirable situations where e.g. the data voltage is higher than the supply voltage of the first WS2812, which means there'll be significant current flowing from the data pin through protection diodes in the WS2812, posing a problem to the source (microcontroller) and/or to the sink (WS2812) of that current.
The undervoltage condition can also happen if you've got a big capacitor that needs to charge up through some cabling at poweron, while the MCU is already happily running at full supply voltage. Dead on power-on pixels can result from that.
Since under regular conditions, the input of the WS2812 is high-impedance, adding another couple hundred ohm in series doesn't make a difference, but will reduce any such unwanted current drastically.
However, when you have many of these things in a string, they all share the same supply anyway, at least when they're "neighbors". You're not winning anything, you're making your circuitry more complex and thus error-prone: I wouldn't do anything that I don't need to do. This is engineering, not magic :) |
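To put rough numbers on the unwanted-current argument: assume the input protection diode drops about 0.6 V and that the only thing limiting the current is the resistance in the data path. All values below are illustrative assumptions, not figures from the WS2812 datasheet.

```python
def clamp_current(v_data, v_supply, r_series, v_diode=0.6):
    """Current pushed through the input protection diode when the data line
    sits above the pixel's supply rail; zero if the diode never conducts."""
    return max(0.0, (v_data - v_supply - v_diode) / r_series)

# Data driven at 5.0 V while the first pixel's rail has sagged to 4.0 V.
print(f"{clamp_current(5.0, 4.0, 30) * 1000:.1f} mA")    # GPIO output impedance alone (~30 ohm assumed)
print(f"{clamp_current(5.0, 4.0, 430) * 1000:.2f} mA")   # with a 400 ohm series resistor added
```

The few hundred extra ohms knock the fault current down by more than an order of magnitude, while doing nothing under normal conditions, since the input is high-impedance.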
H: What is the best low power wireless transmitter?
I would like to send a sensor signal (e.g. Temperature) wirelessly to a computer for a short distance (e.g.3~5 meters). The challenge is I have a very low power source (3V 50mV).
I am thinking of using a
Bluetooth chip: (Good quality signal, but I feel that the chips that I found online are power-draining)
Antenna: with the amount of power I am delivering, noise might be an issue, and it may be impossible to operate at high frequencies.
What is the best way to transmit my signal from the sensor wirelessly with as low power as possible?
AI: My preference for sensor applications like temperature and humidity that don't need fast updates is a MICRF112/MICRF211 pair. The MICRF112 draws < 1 uA in standby and pairs nicely with just about any low power microcontroller you like. In the right setup you can get months of use from a couple AAA batteries. |
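The "months of use" claim is easy to sanity-check with a duty-cycle estimate. The numbers below (cell capacity, standby and transmit currents, on-time per hour) are assumptions for illustration, and battery self-discharge is ignored.

```python
def battery_life_days(capacity_mah, sleep_ua, active_ma, active_s_per_hour):
    """Average the sleep/transmit duty cycle into one mean current,
    then divide the cell capacity by it to get runtime in days."""
    duty = active_s_per_hour / 3600.0
    avg_ma = sleep_ua / 1000.0 * (1.0 - duty) + active_ma * duty
    return capacity_mah / avg_ma / 24.0

# ~1000 mAh of AAA capacity, 1 uA standby, 15 mA for 2 s of TX per hour:
print(f"{battery_life_days(1000, 1.0, 15.0, 2):.0f} days")
```

With a sub-microamp standby current, the average draw is dominated by the brief transmit bursts, which is why the pairing above can run so long on small cells.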
H: Understanding high-speed Improvements to discrete BJT multivibrator
What exactly are the Zeners doing in this circuit?
I've been trying to improve my understanding of BJTs by trying to 'improve' the classic astable multivibrator. In an effort to 'sharpen' the transitions, I included a Zener diode between the capacitor and the base of the transistor. My thinking was to 'protect' the base from the capacitor charge curve until it was well past the ~0.7 V threshold. What ended up happening instead was an increase in oscillation frequency, and an increase in oscillation stability with very small capacitance values.
With the 'classic' multivibrator circuit, I can only get to about ~100kHz. With the Zeners (as well as R7,R8 and R9), I see stable oscillations well into the low MHz range.
Can someone help me understand how to analyze how the Zeners are working?
[The breadboard circuit performs very similarly to the simulation in LTspice; in this case, about 3 MHz with rise/fall times < 10 ns.]
AI: I believe it's providing a path for the stored base charge to discharge. Basically turning the capacitors into "speed up capacitors" in addition to timing caps. It could be good to plot the current through the zeners to confirm.
A cap starts off discharged and charges up towards VCC while its transistor is an open switch.
The base of that transistor eventually rises enough to turn it on (thanks to R9), closing its switch.
The diode side of the cap suddenly gets pulled low, making the other side a negative voltage.
The zener is now in reverse breakdown and pulls the attached base negative, sucking out all the stored base charge for a rapid turn-off.
Repeat.
The next question could be, "does it have to be a zener?" I keep oscillating back and forth as to whether a regular diode would do the same thing. Since you don't use the reverse breakdown part of the zener diode, maybe it doesn't need to be a zener diode. Plotting current through it will help illustrate that.
EDIT ha! I originally wrote that it had to be a zener, and as I was editing my answer to say it doesn't have to be, you went and showed that it does get faster with lower voltage zeners. Ok, so there's still something to think about! |
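For context on the ~100 kHz figure from the question: the nominal frequency of the classic symmetric astable is set by the base resistors and timing caps, with each half-period equal to ln(2)·R·C. The zeners change how fast the transistors switch, not this first-order estimate. Component values below are illustrative, not taken from the schematic.

```python
import math

def astable_freq(r_base, c):
    """Oscillation frequency of a symmetric classic astable multivibrator:
    each half-period is ln(2) * r_base * c."""
    return 1.0 / (2.0 * math.log(2) * r_base * c)

# e.g. 10 kohm base resistors with 720 pF timing caps:
print(f"{astable_freq(10e3, 720e-12) / 1e3:.0f} kHz")   # → 100 kHz
```

Shrinking C raises the nominal frequency, but without the zener trick the stored-charge turn-off delay dominates well before the formula says it should.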
H: Should I get CE mark on my Arduino project?
I just finished my big Arduino project after 2 years. I would like to start selling it, but I am not sure if I need CE certification or not. I used the below components:
Arduino Nano-oled 0,96"
keypad
simple buttons
TP4056
2-5V DC-DC converter
NRF24L01 2.4 GHz module
I am not sure if I need a CE certificate. Can somebody help me, please?
AI: Simple answer is: YES.
If you want to sell a product in the European common market, then you need to have it CE certified. Since your product has a radio module in it, the regulation you should abide by is the RED (Radio Equipment Directive).
You can find more information about the RED here.
There's links there for the applicable regulations and standards. The RED guide, which can be found here can also be useful.
To keep it very short, you'll need to perform three kinds of certification tests:
EMC (EN 301 489-17)
Safety (EN 60950-1 + EN 50371)
Radio (EN 300 328)
By the way, due to Brexit, since January this year, if you want to sell in the UK, now you also need the UKCA marking as well.
You have to find a way to perform these tests and provide proof of compliance. You can seek a certification company to help you out with the tests and the necessary documentation.
There are some exceptions depending on whether you want to sell your product as some kind of development platform. But that's a grey zone; there's no simple answer if you want to follow that path, and it still requires the necessary due diligence and documentation.
H: Using voltage divider vs using a DC-DC converter
I have a DC-DC converter with a Vin of 4.75 V-14 V and a Vout of 18 V.
Unfortunately I have a Vsupply of 24V.
Should I use another DC-DC converter with a Vin capable of 24 V and a Vout between 4.75 V and 14 V, or should I just use a voltage divider to step 24 V down to an acceptable voltage?
Are there advantages or disadvantages for either?
Is there a problem with using too many DC-DC converters in series?
AI: It depends on how much current the Vout (18 V) rail will require.
Typically a divider circuit uses resistors that are at least 10x smaller than the load impedance. For example, for a 1 MΩ input that needs to be set to a specific voltage, the divider resistors should be no larger than 100 kΩ.
Nearly free
Easy
Has no ability to regulate the output.
Horrible efficiency.
Use only for driving <1mA load typically
Use only when Vin is constant
Use only when the load is constant.
A linear regulator acts like a variable resistor that maintains a constant output voltage.
Its efficiency depends on the ratio Vout/Vin and the load impedance.
It can range from 0% to 100% efficient (theoretically) depending on these criteria.
0% to 100% efficient (theoretically)
Cheap
Simple
Clean output voltage
For current ranges <10 A, typically. You don't want to use these when sourcing a lot of current because they can be inefficient.
Below is a simulation of the efficiency of a divider circuit vs. a linear regulator. The load is swept from 1 Ω to 1 MΩ; the load value is time(s)×10 on the X axis. The linear regulator efficiency ranges from ~75% to 20%, while the divider circuit stays at 6.6% efficiency (using divider resistors < Rload/10).
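The same comparison can be done with a few lines of arithmetic instead of a transient simulation. The component values here are illustrative (not the ones from the plot above), and the linear regulator is idealized with zero quiescent current.

```python
def divider_efficiency(v_in, r1, r2, r_load):
    """P_load / P_in for a resistive divider (R1 on top, R2 on the bottom)
    feeding a resistive load; also returns the loaded output voltage."""
    r2p = r2 * r_load / (r2 + r_load)   # R2 in parallel with the load
    v_out = v_in * r2p / (r1 + r2p)
    p_in = v_in ** 2 / (r1 + r2p)       # all input current flows through R1
    p_load = v_out ** 2 / r_load
    return p_load / p_in, v_out

def linear_reg_efficiency(v_in, v_out):
    """Ideal series-pass regulator: the pass element burns (v_in - v_out) * I."""
    return v_out / v_in

eff, v = divider_efficiency(12.0, 6.8e3, 6.8e3, 100e3)  # 12 V in, 100 kohm load
print(f"divider: {eff:.1%} at {v:.2f} V")
print(f"linear:  {linear_reg_efficiency(12.0, 6.0):.1%}")
```

The divider wastes most of the input power in its own bleed current even with a light load, which is exactly why it only makes sense for near-constant, sub-milliamp loads.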
DCDC uses a totally different principle. I'm not going to explain how they work but here are the highlights.
80-90% efficient (70% minimum)
Can step voltage both up & down
Almost any current & voltage range if you have enough $$$
Output voltage can be noisy.
produces EM noise.
Can be expensive.
Takes up more board space.
You can put DC-DC converters in series, as many as you want. But if you're only doing it to get to the right output voltage, there are better options: adjustable linear regulators or adjustable DC-DC converters.
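One caveat when cascading converters: the stage efficiencies multiply, so every extra stage costs you. A quick illustration:

```python
def cascade_efficiency(*stage_effs):
    """Overall efficiency of DC-DC converter stages in series."""
    total = 1.0
    for e in stage_effs:
        total *= e
    return total

# Two typical ~85%-efficient converters back to back:
print(f"{cascade_efficiency(0.85, 0.85):.1%}")   # → 72.2%
```

So two otherwise-decent stages in series already dip below the 70%-minimum figure quoted above for a single converter.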
H: Does anyone have experience with the recom rpx-1.0 or similar that could explain a few things to me?
I'm looking to have 4 DC-DC converters, each with a different output but the same input voltage: Vout of 9 V, 12 V, 15 V and 18 V.
Would this chip be able to do that?
Here is a bit from the datasheet; I believe that to change Vout all I have to do is change R4. Is that correct?
Also I'm unsure of what PG, AGND and PGND are in the below picture. Could someone clarify?
I know DC-DC converters separate gnd from two circuits so can AGND not connect to PGND?
Below is also a link to the data sheet if that would help to answer my question.
https://www.mouser.com/datasheet/2/468/RPX_1_0-1903909.pdf
AI: PG is power good - an output that tells you the regulator is happy.
AGND (analog ground) and PGND (power ground) follow the datasheet's PCB layout suggestion. This DC-DC converter is not isolated, so the ground is not broken between the input and output. You normally pay more for isolation.
Yes, select R4 for the required output voltage. You'll need to ensure that the capacitors are suitably rated for your various voltages, and heed the datasheet's advice on layout.