H: Is it necessary to take into account the actual state of a combinational circuit's signals to calculate its maximum delay?
I'm doing an exercise in which I need to calculate the maximum delay of a 1-bit full adder. In a full adder, the slowest path is the carry_out. Here is how I have designed it:
Let's suppose all gates have a delay of 50ps. Then the slowest path would be the one that goes from c_in to c_out through those two AND gates. Thus, 4 * 50 = 200ps should be the maximum delay.
However, I have implemented this circuit in Verilog, and I find that it is not that slow. Here is the implementation and the test bench:
Full adder
`timescale 10ps/1ps
module full_adder(
input wire a,
input wire b,
input wire cin,
output wire o,
output wire cout
);
//
// cout
//
wire ncin, net0, net1, net2, net3;
not #(5) n0(ncin, cin);
and #(5) a0(net0, ncin, b);
and #(5) a1(net1, a, net0);
or #(5) o0(net2, a, b);
and #(5) a2(net3, net2, cin);
or #(5) o1(cout, net3, net1);
//
// o
//
wire net4;
xor #(5) x0(net4, a, b);
xor #(5) x1(o, net4, cin);
endmodule
Test bench
`timescale 10ps/1ps
`define WAIT_DELAY 16
module full_adder_tb;
reg a = 0;
reg b = 0;
reg cin = 0;
wire o;
wire cout;
full_adder fa(a, b, cin, o, cout);
initial begin
$dumpfile("full_adder_tb.vcd");
$dumpvars(1, full_adder_tb);
cin <= 0; a <= 0; b <= 0; #`WAIT_DELAY;
$display("%d;%d", o, cout);
cin <= 0; a <= 0; b <= 1; #`WAIT_DELAY;
$display("%d;%d", o, cout);
cin <= 0; a <= 1; b <= 0; #`WAIT_DELAY;
$display("%d;%d", o, cout);
cin <= 0; a <= 1; b <= 1; #`WAIT_DELAY;
$display("%d;%d", o, cout);
cin <= 1; a <= 0; b <= 0; #`WAIT_DELAY;
$display("%d;%d", o, cout);
cin <= 1; a <= 0; b <= 1; #`WAIT_DELAY;
$display("%d;%d", o, cout);
cin <= 1; a <= 1; b <= 0; #`WAIT_DELAY;
$display("%d;%d", o, cout);
cin <= 1; a <= 1; b <= 1; #`WAIT_DELAY;
$display("%d;%d", o, cout);
$finish;
end
endmodule
After the test runs, I double-check with a Python script that the output values match the expected ones. I was expecting WAIT_DELAY to be required to be 20 (200ps) for the test to pass; however, 16 (160ps) is enough.
My test first fails when the inputs {cin, a, b} transition from {0, 0, 1} to {0, 1, 0}.
My understanding is that this happens because the actual worst case is not the sum of the gates mentioned above, because depending on what was the previous state, and what is the current one, the gates might need less time to get stabilized because some signals will be 'don't care'.
For instance, if {cin, a} = {1, 1}, cout will be always 1 even if {b} starts oscillating every 1ps, because the last or gate will be like 1 + X = 1 (where X is an invalid state).
In other words, it might happen that the actual critical path is not the one made of 4 gates, because there is no combination of inputs that make all of those gate's outputs to change.
Questions
Am I right in the above reasoning?
If so, is this valid out of the simulator, in real life, or can the X input of the above example's or gate 'perturb' its output?
I imagine it will be better to use the worst case in terms of the sum of all the critical path's gate's delays because it is easier, safer and less error prone. Is this correct?
AI: Yes, it is necessary to take into account the actual state of a combinational circuit's signals to calculate its maximum delay.
The problem with your testbench is that it is too simple. While it accounts for all 8 combinations of the 3 inputs, it does not account for all possible transitions between input sets. Since it's hard to think up all possible combinations, we let the simulator do that for us. The customary way is to randomize the inputs and run it for a large number of combinations. The compact way to do that is to use a repeat loop.
Since we've now made the input more complicated, it will make it more difficult to check the results with your Python script. But, this is simple enough to check in Verilog in the testbench.
`define WAIT_DELAY 22
module full_adder_tb;
reg a = 0;
reg b = 0;
reg cin = 0;
wire o;
wire cout;
event chk;
full_adder fa(a, b, cin, o, cout);
initial begin
$dumpfile("full_adder_tb.vcd");
$dumpvars(1, full_adder_tb);
repeat (100) begin
{cin, a, b} = $random;
#`WAIT_DELAY;
end
$finish;
end
initial begin : check
reg c_exp, o_exp;
#(`WAIT_DELAY - 1); // Check outputs 10ps before next input change
forever begin
->chk;
{c_exp, o_exp} = a + b + cin;
if ({cout, o} !== {c_exp, o_exp}) begin
$display("%t error {cout, o}=%b {c_exp, o_exp}=%b", $time, {cout, o}, {c_exp, o_exp});
end
#(`WAIT_DELAY);
end
end
endmodule
This self-checking testbench passes when WAIT_DELAY is 22 or more, but it will fail if you set it to 20 or less.
The check occurs 10ps before the next input change. For example, when the delay is 220ps, the check occurs 210ps after the inputs changed. I added the chk event so you can easily see when the check occurs in waves.
When the delay is 170ps, for example, you should see errors something like (depending on randomization):
5260 error {cout, o}=00 {c_exp, o_exp}=10
6450 error {cout, o}=00 {c_exp, o_exp}=10
6790 error {cout, o}=00 {c_exp, o_exp}=10
7980 error {cout, o}=00 {c_exp, o_exp}=10
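To convince yourself of the 200 ps worst case outside of Verilog, you can brute-force it. The Python sketch below (an illustration, not part of the original testbench) models each gate of the full adder as a 50 ps unit delay and tries all 64 input transitions, reporting the latest time either output changes.

```python
from itertools import product

def step(s):
    # One 50 ps unit-delay step: every gate output updates from the
    # values its inputs had at the previous step (all delays are 50 ps).
    n = dict(s)
    n['ncin'] = 1 - s['cin']
    n['net0'] = s['ncin'] & s['b']
    n['net1'] = s['a'] & s['net0']
    n['net2'] = s['a'] | s['b']
    n['net3'] = s['net2'] & s['cin']
    n['cout'] = s['net3'] | s['net1']
    n['net4'] = s['a'] ^ s['b']
    n['o'] = s['net4'] ^ s['cin']
    return n

def settle_ps(old_in, new_in):
    # Inputs are (a, b, cin) tuples; returns the time in ps of the last
    # change on either output after switching from old_in to new_in.
    s = dict.fromkeys(('ncin', 'net0', 'net1', 'net2',
                       'net3', 'cout', 'net4', 'o'), 0)
    s['a'], s['b'], s['cin'] = old_in
    for _ in range(10):          # let the old input state settle
        s = step(s)
    s['a'], s['b'], s['cin'] = new_in
    last = 0
    for t in range(1, 11):
        n = step(s)
        if (n['o'], n['cout']) != (s['o'], s['cout']):
            last = t * 50
        s = n
    return last

worst = max(settle_ps(p, q)
            for p in product((0, 1), repeat=3)
            for q in product((0, 1), repeat=3))
print(worst)  # 200
```

The maximum is 200 ps, reached for example when cin falls while a = b = 1: cout glitches low at 100 ps and only settles back high at 200 ps. That matches the testbench passing with WAIT_DELAY of 22 (check at 210 ps) and failing at 20 (check at 190 ps).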
H: Is there a voltage across an ideal inductor?
I have been learning about LC and LCR circuits.
My question is about inductors themselves, more specifically ideal inductors with zero resistance.
If I disconnect an ideal inductor from a DC supply, there should be very high voltage spike across the inductor as per \$V=L\frac{dI}{dt}\$.
How can this be? The two ends of the inductor are connected via zero resistance through the coil of the inductor itself and therefore wouldn't the voltage across the two ends of the inductor always be zero?
AI: The two ends of the inductor are connected via zero resistance through the coil of the inductor itself and therefore wouldn't the voltage across the two ends of the inductor always be zero?
You are assuming that an inductor obeys Ohm's law. It doesn't. Ohm's law is a law for resistors. An [ideal] inductor is not a resistor, therefore it doesn't obey Ohm's law (and neither does a voltage source, a capacitor, a diode, a transformer, or any other device that isn't a resistor).
The "law" that governs the operation of an inductor is
$$V = L\frac{dI}{dt}.$$
You need to analyze the operation of an inductor using this law rather than Ohm's law. And this law tells you that in order for the switch to open instantly, there must be an infinite (delta function) voltage impulse across the inductor.
This is a case where two "ideal" components in a circuit produce a logical contradiction, like two ideal voltage sources in parallel, or a switch connecting two ideal capacitors.
In this case you can't assume that both the inductor and the switch are ideal and still get a physically meaningful result in your analysis. You must consider either the arcing behavior of the switch, or the interwinding capacitance of the inductor (or both) to correctly model the circuit without singularities.
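As a sketch of where that impulse comes from: if the ideal switch forces the current from \$I_0\$ to zero at \$t = 0\$, the current can be modeled as \$I(t) = I_0\,u(-t)\$ (with \$u\$ the unit step), and differentiating this idealized waveform gives
$$V = L\frac{dI}{dt} = L\,I_0\,\frac{d}{dt}u(-t) = -L\,I_0\,\delta(t),$$
a delta-function spike of area \$-L I_0\$: unbounded amplitude over zero duration. Any real switch capacitance or arcing spreads this into a large but finite spike.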
H: Connecting two-port network theory to microelectronic design (Sedra and Smith)
I am trying to connect what I'm learning in Sedra and Smith (I am currently in Chapter 7 of the 7th edition which talks about basic transistor amplifier configurations, eg. CS) to what I have learned in the past about two-port networks. In particular, I want to understand how what's going on in Sedra and Smith can be recapitulated in the language of two-port networks.
Now with the small-signal transistor models (let's stick with the MOSFET), we have only the transconductance gain \$g_m\$ and the output resistance (due to CLM) \$r_0\$. Now the general program shown by Sedra and Smith here is, for every amplifier configuration, to compute an open-circuit voltage gain \$A_{vo}\$, an input resistance \$R_{in}\$, and an output resistance \$R_{o}\$, defined by the equivalent circuit given in the attached picture.
That is, we see that we have the definitions
$$A_{vo} = \left. \dfrac{v_{out}}{v_{in}} \right|_{i_o=0}$$
$$R_{in} = \dfrac{v_{in}}{i_{in}}$$
$$R_{out} = \left. \dfrac{v_{out}}{-i_{o}} \right|_{v_{in}=0}$$
Now from this general structure I'm trying to connect this to a "set of two-port parameters". My inclination is to write a system of equations as below (given that \$v_{out}\$ is twice in a numerator, but \$v_{in}\$ is in a numerator and in a denominator):
$$V_{out} = -i_oR_o+A_{vo}v_{in}$$
$$i_{in} = \dfrac{1}{R_{in}}v_{in} +0i_o$$
That is, we assume no internal feedback in this basic model. The above seems to point to the so-called "g-model" being used (inverse hybrid). Is this an accurate reflection of what's going on?
Bonus points, for those who have Sedra and Smith...sometimes they distinguish between \$R_{o}\$ and \$R_{out}\$ and I have no clue what is meant by the difference. Any help would be greatly appreciated. Also, if anyone has any book recommendations for books like Sedra and Smith but which DO emphasize the important connection to two-port networks, please let me know.
AI: Now with the small-signal transistor models (let's stick with the MOSFET), we have only the transconductance gain \$g_m\$ and the output resistance (due to CLM) \$r_0\$.
If you use this model for the transistor, you are assuming a Norton output circuit rather than Thevenin:
simulate this circuit – Schematic created using CircuitLab
As you know, you can easily convert between the Norton and Thevenin equivalent circuits to see the equivalence between your "2-port" model and your "Sedra and Smith" model (which are actually both 2-port models, just with different forms).
Bonus points, for those who have Sedra and Smith...sometimes they distinguish between \$R_{o}\$ and \$R_{out}\$ and I have no clue what is meant by the difference.
It's been a while, but I vaguely remember that \$R_o\$ was the output resistance of an individual transistor while \$R_{out}\$ was the output resistance of an entire amplifier circuit...but if that isn't consistent with what you see in your book then I'm just mis-remembering the notation.
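For completeness, the link between the two forms is the usual Norton-to-Thevenin source transformation: the current source \$g_m v_{gs}\$ in parallel with \$r_0\$ becomes a Thevenin source of open-circuit voltage \$-g_m r_0\, v_{gs}\$ in series with \$r_0\$, so for a common-source stage, for instance,
$$A_{vo} = -g_m r_0, \qquad R_o = r_0,$$
which is the \$A_{vo}\$/\$R_o\$ pair in the Sedra and Smith equivalent circuit.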
H: Why does this method of measuring inductance work?
I recently watched this video from EEVblog showing a method of measuring inductance. I'm probably missing something obvious, but why does this work? Mathematical proof would be great.
Thanks.
https://www.youtube.com/watch?v=UrS5ezesA9s
AI: It is explained in the video, but perhaps not as clearly as it might be. The video shows how a multimeter can be used to measure inductance when used on its capacitance ranges.
The assumption is that the multimeter measures the impedance of the device under test and then calculates the capacitance based on a known test frequency. Thus, if the frequency is f and the measured impedance is Z, the capacitance C is calculated as C = 1/(2πfZ), based on the capacitive reactance (impedance) being given by Z = 1/(2πfC).
If one then measures an inductance L with the same multimeter on a capacitance range, the meter is actually measuring the inductive reactance, which is 2πfL. So Z = 2πfL, and thus L is equal to Z/(2πf). We see that to calculate L from the multimeter's measurement of C, we need to take the reciprocal. This is what he did in the video.
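To put numbers on this: since the meter displays C = 1/(2πfZ) while the inductor actually presents Z = 2πfL, the displayed value is C = 1/((2πf)²L), so L = 1/((2πf)²·C). A small Python sketch of the round trip (the test frequency here is an assumption; real meters use their own, usually undocumented, frequency):

```python
import math

def meter_reading_C(Z, f):
    # What the meter displays: it assumes the measured impedance is a
    # capacitive reactance and computes C = 1/(2*pi*f*Z).
    return 1 / (2 * math.pi * f * Z)

def inductance_from_reading(C_disp, f):
    # Invert the relationship: C_disp = 1/((2*pi*f)^2 * L)
    return 1 / ((2 * math.pi * f) ** 2 * C_disp)

f = 1000.0                        # assumed meter test frequency, Hz
L_actual = 100e-6                 # a 100 uH inductor under test
Z = 2 * math.pi * f * L_actual    # inductive reactance the meter sees
C_disp = meter_reading_C(Z, f)    # the bogus "capacitance" shown
print(inductance_from_reading(C_disp, f))  # recovers ~100 uH
```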
H: Low Power Consumption with Ultrasonic Sensor (US-100, HC-SR04)
I have designed a Zigbee sensor to measure the water level in the cistern. It should be operated with batteries, so the ultrasonic sensor should only be activated for measurement. With the current scheme I have a consumption of 0.08 mA in sleep mode. If I only look at CC2530, it consumes 1µA in sleep mode.
The measurement is done like this: The sensor goes out of sleep mode, pin 1.0 is set high, measurement is taken and pin 1.0 is set low and then back into sleep mode.
When I measure the voltage on the sensor (US-100) in sleep mode, I have about 0.4V. Unfortunately, I can't get any further with the troubleshooting :(
AI: The US-100 works just fine from 3V, so there's no need for the boost converter. Just use the US-100, not the HC-SR04.
There's no need for the GND-disconnecting mosfet even when using U5 switcher.
P0.2 and P0.3 needs to be driven to logical zero when the sensor is turned off. Yes, P0.2 should be turned into an output and driven low when the sensor is not in use.
If you are OK using only US-100, then the following will do the trick:
simulate this circuit – Schematic created using CircuitLab
Sensor pseudo-code:
set_dir(P0_2, INPUT);
set(P1_0, LO); // enable the sensor
sleep(100*US); // wait for sensor to initialize
measure_distance();
set(P1_0, HI); // disable the sensor
set_dir(P0_2, OUTPUT);
set(P0_2, LO);
set(P0_3, LO);
If you insist on using the HC-SR04 powered from 5V, then you'll want the following circuit:
simulate this circuit
Sensor pseudo-code:
set_dir(P0_2, INPUT);
set(P1_1, LO); // turn on the sensor VCC switch
set(P1_0, HI); // turn on the 5V switcher
sleep(1*ms); // wait for sensor to initialize
measure_distance();
set(P1_0, LO); // turn off the 5V switcher
set(P1_1, HI); // turn off the sensor VCC switch
// U2 will now discharge C5 and Q2 will fully turn off
set_dir(P0_2, OUTPUT);
set(P0_2, LO);
set(P0_3, LO);
H: Finding Transfer Function of RL Circuit
Can anyone help me find the transfer function of this RL circuit?
AI: Just convert the circuit to the Laplace domain (multiply each L by s and then treat them like regular impedances) and solve for the voltage dropped across that lower-right inductance using regular circuit methods.
Edit to answer questions AFTER OP posted his answer (2nd comment below my answer):
Here is what the transformed network looks like,
The parallel combination of the right two branches is,
$$\frac{(4+4s)(4)}{(4+4s)+4}=\frac{4(1+s)}{(1+s)+1}=\frac{4(1+s)}{s+2}$$
So, the voltage across the right-two branches will be,
$$V(s)\frac{\frac{4(1+s)}{s+2}}{\frac{4(1+s)}{s+2}+4s}=V(s)\frac{\frac{(1+s)}{s+2}}{\frac{(1+s)}{s+2}+s}=V(s)\frac{s+1}{(s+1)+s(s+2)}=V(s)\frac{s+1}{s^2+3s+1}$$
And the voltage across the bottom-right inductor will then be,
$$V_L(s)=V(s)\frac{s+1}{s^2+3s+1}\times\frac{4s}{4s+4}=V(s)\frac{s}{s^2+3s+1}$$
And finally, the transfer function you are after, \$\frac{V_L(s)}{V(s)}\$, will be,
$$\frac{V_L(s)}{V(s)}=\frac{s}{s^2+3s+1}$$
Check my math to make sure.
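If you want a sanity check without redoing the algebra, you can evaluate the final expression against the raw impedance divider at a few complex frequencies (a sketch using the element impedances 4s, 4 + 4s and 4 from the transformed network above):

```python
# Compare the derived H(s) = s/(s^2 + 3s + 1) with the divider built
# directly from the transformed network: a series 4s element feeding
# the parallel combination of (4 + 4s) and 4, with the output taken
# across the lower-right 4s inductance.
def H_formula(s):
    return s / (s ** 2 + 3 * s + 1)

def H_circuit(s):
    par = (4 + 4 * s) * 4 / ((4 + 4 * s) + 4)  # right two branches
    v_mid = par / (par + 4 * s)                # divider with series 4s
    return v_mid * (4 * s) / (4 * s + 4)       # across the 4s inductor

for s in (0.5, 1j, 2 + 3j):
    assert abs(H_formula(s) - H_circuit(s)) < 1e-9
print("match")
```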
H: Using resistor with 110 V LED indicator to make current measurement between 50 - 70 mA
I need your guidance with the V=IR formula. It's probably a simple question but I don't have an electrical background.
I am working on making an LED simulator panel for railway signaling interlocking to check the I/O of the signal lamp driving card.
The computer card that controls the signal LED has a current measuring functionality. The computer shuts down the card if the current value is <50 mA or >70 mA.
The small LED indicator for the simulator panel I found is 110 V with 6 mA max current. To bring the current value to between 50..70 mA, I am using a 1500 ohm resistor in parallel.
Any suggestions and will the circuit work?
Is there anything I should add to make it safe?
AI: 110V/1500R = 73mA, so it sounds like this resistor will put you over the high limit even without the LED.
Aiming for 60mA, and subtracting the indicator's 6mA: 110V/0.054A = 2037R.
2.05K is the closest 1% resistor value to this.
The dissipation in the resistor will be (110V)^2/2050R ≈ 5.9W, so it's going to have to be a power resistor and it will get quite warm (just like the original light bulb would have).
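The sizing above can be reproduced in a few lines (a sketch, assuming the indicator and the resistor simply add their currents while both sit directly across the 110 V supply):

```python
# Quick numbers for the 110 V indicator plus parallel resistor.
V = 110.0
I_led = 0.006                # indicator's 6 mA
I_target = 0.060             # aim for the middle of the 50..70 mA window
R = V / (I_target - I_led)   # resistor must carry the remaining 54 mA
P = V ** 2 / 2050            # dissipation in the chosen 2.05 k part
print(round(R), round(P, 1))  # 2037 5.9
```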
H: Are these vias not plated?
I received my first batch of PCBs that were designed with thermal vias of diameter 0.61mm and hole diameter 0.305mm. When I inspect them, it seems that only around 10% of the vias have been coated. Can this be confirmed from the photo below?
Or are all vias likely plated, but they appear not to be, because they've been covered in lacquer or something?
AI: The vias are very likely just covered with solder mask, which makes them look as if they were not plated.
Depending on production quality, it may happen that some vias are not perfectly covered.
If you want to be sure, you could just scratch off some solder mask and you should be able to see if it's plated.
H: Circuit to control a motor using relays
I want to do an automatic door for our chicken coop. The battery part is done (a small solar panel, an old car battery and a solar charger). The issue is with the door itself. I have a door actuator motor running at 12V and a 4 relay board using ESP8266 (https://www.banggood.com/AC-or-DC-Power-Supply-ESP8266-WIFI-Four-way-Relay-Module-ESP-12F-Development-Board-Secondary-Development-p-1794113.html?cur_warehouse=CN).
As far as I understand, in theory,
I can do that to control direction (open/close) of the door. Right?
Will it work in practice? I mean, as a software developer by trade I understand race conditions; what I'm afraid of is that the relays won't switch at exactly the same time and, in some cases, that will short the car battery. Am I wrong?
Thank you!
AI: If you connect the relays as drawn in your circuit, then there is no chance of shorting the car battery. Since the relays won't switch at identical times, you may have very short time periods where the current flow is not as you expect, but all the paths for the current go through the motor, meaning that there are no combinations of relay states that result in shorting the battery.
Back emf is something you may need to worry about though, so you should look into that.
H: How can gain and phase margin be used to assess stability of SMPCs as for most of the cases (except buck), they always have a Right Hand Zero?
I was going through this Operational Amplifiers, Theory and Practice by James Roberge. On Page 146, it is mentioned that for systems that have negative feedback at low or mid frequencies and that have no right-half-plane singularities in their loop transmission, we can utilize a simple criterion that the loop transmission(af) i.e.
If the magnitude of af is 1 at only one frequency, the system is stable
if the angle of af is between + 180 deg and - 180 deg at the unity-gain frequency.
If the angle of af passes through +180 deg or - 180 deg at only one frequency, the system is stable if the magnitude of af is less than 1 at this frequency.
which later evolves into the widely used gain and phase margin parameters. However, for SMPC systems, except the buck (and buck derived topologies), most other topologies (say flyback) have a right hand zero in the transfer function like below
.
But I see that parameters like gain and phase margin are widely used to design compensators in this case. It looks like a contradiction as the condition that the phase of loop transmission being above -180 deg at crossover is valid only when right half-plane singularities are not present, which is not the case in above. Is this a valid way to design compensators? If yes, can someone resolve this contradiction (or any points I am missing) and if not, what are possible alternatives?
AI: On top of the good answer provided by LvW, the stability Bode criterion as we know it applies to a so-called minimum phase transfer function meaning that the expression does not include poles or zeroes in the right-half plane but also no pure delay. As pointed out by LvW, a minimum-phase function lets you reconstruct the phase from the magnitude plot and vice versa. This is described by the Kramers-Kronig relationship known as the Bayard-Bode law in French universities. When the transfer function includes a delay or a RHPZ for instance and you look at the phase response alone, then you don't know if the phase lag is due to a classical left-half plane pole, the delay or the RHPZ.
It does not mean the Bode criterion does not work for non-minimum-phase functions including a RHPZ, but it has to be applied carefully, considering the various phase shifts engendered by delays or RHP zeroes in the region where the system has gain, before crossover. In your case, you have reproduced the control-to-output transfer function of a switching power stage like a boost or buck-boost featuring a RHP zero. You should then adopt a compensation strategy which forces the maximum crossover frequency to 20% of the worst-case minimum RHP zero position. If you try to cross over at a higher value, then you may encounter stability issues. This way, the RHPZ will likely always be pushed beyond crossover, where the loop no longer has gain and phase lag is less important. As such, the Bode criterion can be safely applied.
In case of doubt on the stability assessed at a loop-gain magnitude of 1 (the 0-dB point), you will have to resort to Cauchy's argument principle applied to a Nyquist plot - see this article. Fortunately, the Bode criterion applies to the vast majority of switching converters and I have rarely seen engineers apply Nyquist to assess the stability of their converters for industrial and consumer usage.
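To see why the 20% rule keeps the Bode criterion usable, note that a RHP zero at \$f_z\$ adds gain like an ordinary zero but contributes phase lag of arctan(f/f_z). A quick sketch of the extra lag at crossover (frequencies normalized to f_z; an illustration, not taken from the article):

```python
import math

def rhpz_lag_deg(f_c, f_z):
    # A RHP zero raises the magnitude like an ordinary zero but adds
    # phase LAG of atan(f/f_z) instead of phase lead.
    return math.degrees(math.atan(f_c / f_z))

print(rhpz_lag_deg(0.2, 1.0))  # crossover at 20% of f_z: ~11.3 deg of extra lag
print(rhpz_lag_deg(1.0, 1.0))  # crossover at f_z itself: 45 deg, a serious hit
```

At 20% of the RHPZ frequency the zero only eats about 11 degrees of phase margin, which is why pushing crossover well below the worst-case RHPZ position keeps the classical margin reading meaningful.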
H: Gain and phase margin of multiple feedback bandpass filter
I designed a multiple feedback bandpass filter using the Analog filter wizard. The design is purely education therefore I used ideal op-amp.
Fc = 50 kHz; DC gain = 6 dB; Q = 5;
I followed the instructions included in the AD tutorial “Stability of Op-Amp Circuits” and simulated the circuit in the same fashion.
LTspice: Stability of Op Amp Circuits | Analog Devices
My recollection from control systems is that the negative feedback circuit is stable as long as we do not invert the signal by 180° with a magnitude equal to or greater than 0dB to prevent oscillation.
If correct, my 0 dB point is at 10 MHz and I have 93° of phase margin, which is sufficient.
Additionally, the circuit never reaches the -180° phase shift. However, at 90 kHz the phase shift equals 164° with 22.7 dB of gain. I would imagine that it is a very low phase margin of only 26° with 22.7 dB of gain.
I vaguely remember the “rule of thumb” for phase margin as 45° and gain margin as 20dB.
If the above statements are correct are they applicable for filters?
If yes how can I improve the phase margin? Would the introduction of a delay in order to shift the phase response be a good idea?
AI: The simulation results as shown in your problem description look weird (rising phase characteristics). I suppose, it is your intention to produce a Bode diagram for the loop gain, correct?
Your ac source (input signal at the inv. node) has an inverting polarity (causing the rising phase) - nevertheless, you can perform a stability check.
However, your interpretation of the results is wrong. The stability check requires verifying the phase response relative to 360 deg (0 deg) and NOT to 180 deg. Note that the correct definition of the loop gain contains the signal inversion at the "-" opamp input, and the loop gain as simulated by you does, of course, include this signal inversion.
Remember: The oscillation condition (stability limit) is defined for zero phase shift of the loop gain function.
(In some documentation - in particular in control theory papers - the loop gain is used without the phase inversion at the summing junction, which gives rise to some confusion, as we can see here)
Therefore, the magnitude crosses the 0 dB axis at approx. 10 MHz with a phase shift of approx. 90 deg. Hence, the phase margin is 90 deg.
H: VHDL, how to assign signal of different types to port map with constraint inference?
I want to assign signals of a testbench to a component whose ports have inferred constraints. I would like to introduce the problem with a working testbench before moving to a minimal reproducible example.
Example with constraint inference
Assume A and B
A.vhdl (the testbench)
library IEEE;
use IEEE.STD_LOGIC_1164.all;
use IEEE.NUMERIC_STD.all;
entity A is
end entity;
architecture A_arch of A is
signal input, interm, output : std_logic_vector(10 downto 0);
begin
B1: entity WORK.B port map (
X => input, Y => interm
);
B2: entity WORK.B port map (
X => interm, Y => output
);
process begin
input <= (1 => '1', others => '0');
wait;
end process;
end architecture;
B.vhdl (component under test)
library IEEE;
use IEEE.STD_LOGIC_1164.all;
use IEEE.NUMERIC_STD.all;
entity B is
port (
X : in std_logic_vector;
Y : out std_logic_vector
);
end entity;
architecture B_arch of B is
begin
Y <= X;
end architecture;
As we can see, the size of B.X is inferred and everything compiles just fine.
Example with different types
Again I would like to test B, except I would like to have X : unsigned and Y : signed instead. Their constraints are not (yet) inferred.
A.vhdl
…
B1: entity WORK.B port map (
X => unsigned(input), std_logic_vector(Y) => interm
);
B2: entity WORK.B port map (
X => unsigned(interm), std_logic_vector(Y) => output
);
…
B.vhdl
…
entity B is
port (
X : in unsigned(10 downto 0);
Y : out signed(10 downto 0)
);
end entity;
architecture B_arch of B is
begin
Y <= signed(X);
end architecture;
…
The minimal reproducible failing example
Again I would like to test B, except I would like to have X : unsigned and Y : signed instead, with their constraints inferred.
B.vhdl
…
entity B is
port (
X : in unsigned;
Y : out signed
);
end entity;
architecture B_arch of B is
begin
Y <= signed(X);
end architecture;
B compiles just fine, but I can't seem to figure out how to handle the port mapping in A.
attempts
as is
Of course, it doesn't work because of the type differences.
with casting
from this post
…
architecture A_arch of A is
signal input, interm, output : std_logic_vector(10 downto 0);
begin
B1: entity WORK.B port map (
X => unsigned(input), std_logic_vector(Y) => interm
);
B2: entity WORK.B port map (
X => unsigned(interm), std_logic_vector(Y) => output
);
…
** Warning: A.vhdl(12): (vcom-1191) Type conversion on actual associated with formal > "X" must be a constrained array subtype.
** Error: A.vhdl(12): (vcom-1189) Type conversion on formal "Y" must be a constrained > array subtype.
** Warning: A.vhdl(16): (vcom-1191) Type conversion on actual associated with formal > "X" must be a constrained array subtype.
** Error: A.vhdl(16): (vcom-1189) Type conversion on formal "Y" must be a constrained > array subtype.
AI: Per IEEE Std 1076-1993, index ranges must be obtained from the type conversion's subtype ranges, and said subtype must be constrained (3.2.1.1, clause 415, p. 43):
For an interface object or member of an interface object whose mode is in, inout, or linkage, if the
actual part includes a conversion function or a type conversion, then the result type of that function
or the type mark of the type conversion must be a constrained array subtype, and the index ranges
are obtained from this constrained subtype; otherwise, the index ranges are obtained from the object
or value denoted by the actual designator(s).
(A similar paragraph about out ports follows.)
Here you type-cast with std_logic_vector or unsigned, which are unconstrained (i.e. array (... range <>) of ...).
This error is standard behavior.
You must add a constraint to the type conversion you use, like:
architecture a_arch of a is
signal input, interm, output : std_logic_vector(10 downto 0);
subtype u11 is unsigned(10 downto 0);
subtype l11 is std_logic_vector(10 downto 0);
begin
b1: entity work.b port map (
x => u11(input), l11(y) => interm
);
b2: entity work.b port map (
x => u11(interm), l11(y) => output
);
process
begin
input <= (1 => '1', others => '0');
wait;
end process;
end architecture;
This is rarely implemented correctly. For most vendor tools, expect to see crashes, strange error messages, or bad behavior.
H: Does somebody have the MSP432P401R User Guide?
Is there somebody who can provide me with the user guide for the MSP432P401R? I know this question looks dumb. However, the mentioned part has lost support from Texas Instruments, and they cannot send me the user guide.
I have spent a lot of time trying to find the document on the web.
I will be very pleased if there is somebody here who has saved this document and can share it.
Thank you for your time.
AI: The correct name of this document is "MSP432P4xx SimpleLink™ Microcontrollers Technical Reference Manual". Its identifier is SLAU356.
It is, of course, available on archive.org:
https://web.archive.org/web/20200601174233/http://www.ti.com/lit/ug/slau356i/slau356i.pdf
H: Does adding a "spare" ribbon cable to a PCB increase trace impedance?
I've read numerous posts advising that trace length must be kept short on a high-speed board to reduce the effects of impedance. If my traces are routed through a pin header that has an IDC ribbon cable "to nowhere", is that an effect I need to manage too? If so, how do I manage it?
Longer explanation
I want to integrate some high-speed logic with my Raspberry Pi 4 via its GPIO header. I plan to make a PCB containing the logic in question and it will sit atop the Raspberry Pi, using a female socket header that connects to the Pi's (male, 40-pin) GPIO header. My target application speed is 20--25 MHz. I'm using 74LVC-series logic (IC propagation time 3.7--6.5ns, edge rise/fall time <= 2.5ns per datasheet) and is powered from the 3v3 pin of the Raspberry Pi; power consumption should be ~50 mA (well within spec for the Pi 4's regulator). On the Pi side, I would be reading a 16-bit bus of signals through the GPIO header.
I plan to put all this in a case, and I would also like to have the GPIO pins available outside the case so I could attach other devices as needed later. (The logic on my PCB has chip select pins that would set them to high-Z in this case.) So I am planning to add another 40-pin header on the top of the custom PCB - mapped 1:1 to the Pi's - that I could then connect to a ribbon cable that would bring those pins to the case edge.
Here's a side-view / cutaway shot of what I'm planning (not to scale):
When the PCB logic is active, there wouldn't be anything connected to the external GPIO pins; [16 pins on] the ribbon cable would just be implicitly driven by the PCB's logic as it takes over them.
Does this plan create challenges for the performance of my application? I have a vague idea that the cable constitutes a radiating antenna in this case, but since it's up and away from the board, will that affect the integrity of the signal path and/or add capacitive loading that prevents it from operating at the target speed? (This is a hobby project, not for mass production, so I'm not too worried about radiated EMI -- the device doesn't need to pass an interference certification test.)
And, would it matter if the extra header is "inside" or "outside" the signal path?
Consider the following overhead sketch of the PCB:
If header 1 is to the Pi, and the traces go "through" the pins of header 2 (which connects to the IDC cable) and then to the onboard logic, is that different from the case where the Pi connects to header 2 and the IDC cable is on the "outboard" header 1 connection? (Note that per figure 1, the Pi-side connector will be on the bottom layer of the board, although with THT mounting that is a somewhat moot distinction.)
The system is simple enough that everything could be routed on a 2 layer board. But I am leaning toward a 4-layer board of (signal--ground--power--signal) for speed purposes -- necessary, or overkill here? I am fairly confident the high speed data signals can all be routed on the top layer, the bottom signal plane is only for the underside connector, and infrequently-changing signals like chip-select and other control lines.
Assuming that, yes, the cable is an issue... is there any way I could electrically isolate or decouple the two "sides" of the system using passive components? Preferably without sourcing current unnecessarily from the Pi's GPIO pins? (I cannot use a buffer w/ chip select like 74LVC245 between the ribbon cable and the main connection because each of the Pi's GPIO pins would need to be independently selectable as input or output to retain its original capability.)
I'm new at PCB design and grateful for the advice. Thank you in advance!
AI: For high-speed applications (i.e. IDC length > wavelength/10 or so), this is an open-circuit stub and is harmful to high-speed performance, no matter how good your custom PCB layout may be. Even without considering radiation, signals are reflected off the un-terminated IDC stub and end up back at your main signal line from pi to PCB, but with a delay.
A passive isolator doesn't seem feasible - the exact stub impedance depends on the operating frequency, geometry, manufacturing variations in IDC cable, etc. In order to prevent edges from entering the IDC (with whatever characteristic impedance it has to ground), such a component would prevent signals from entering it, or from returning - this blocks the use of the IDC for communication.
You may want to consider your active buffer approach, but with a different buffer IC that provides bidirectional signal flow and independent enables, and a GPIO expander that allows you to drive those enables from a single I2C or SPI interface. Transmission gate or analog switch ICs could work. |
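To get a feel for when the wavelength/10 rule above starts to bite, you can turn it into numbers. The sketch below is illustrative only: the velocity factor and rise time are assumed values, not properties of your particular cable or signals.

```python
# Rule-of-thumb check: a stub becomes electrically significant when its
# length exceeds roughly one tenth of the wavelength of the highest
# frequency content on the line. All values here are illustrative.

C = 3.0e8          # speed of light in vacuum, m/s
VF = 0.66          # assumed velocity factor of a typical ribbon cable

def max_stub_length(freq_hz, velocity_factor=VF):
    """Return the lambda/10 stub-length threshold in metres."""
    wavelength = C * velocity_factor / freq_hz
    return wavelength / 10

# A 10 ns rise time implies signal content up to roughly 0.35 / t_rise.
t_rise = 10e-9
f_knee = 0.35 / t_rise                    # ~35 MHz
print(round(max_stub_length(f_knee), 3))  # threshold in metres, ~0.57 m
```

With edges in the tens of nanoseconds the threshold lands around half a metre, which is why a short IDC stub may be tolerable at modest speeds but becomes a problem as edges get faster.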
H: Who gains money when domestic photovoltaic electricity is injected on the grid?
As an equity measure to ensure that everyone pays for the grid use, in multiple places we, the citizens/consumers, don't get refunded when we inject our excess photovoltaic production into the grid.
Now this excess production is available on the grid and someone, for ex. a neighbor, will consume it and get billed for it.
The electricity provider of this neighbor has earned money for some electricity it hasn't produced, nor bought.
Is it correct to say that the provider earns free money in that case? Or is there another mechanism that takes place, e.g. transmission losses, intervention of a grid maintenance company, ...?
AI: On a big national grid, with a scattering of home PV systems, the exported electricity ends up lost in the system.
The grid will always have losses, from the transformers and the very long lines. This lost electricity gets factored into the billing rates. Nobody gets specifically billed for it; it's just added to the cost per unit or standing charge.
All the exported PV power covers losses that the suppliers won't have to pay for any more. So either their profits go up a bit, or the price of electricity can be dropped a bit for everybody. |
H: Split current in a circuit?
I am looking to power an Arduino from a 12 V, 5 Ah lead acid battery, hooked up to a solar charge controller. (Max output: 12 V, 20 amps). I am also looking to power a Peltier Device, in order to cool some water. The Peltier device draws 6 amps at 12 V. I am looking for:
A way to limit the current drawn by the Peltier to ~3 amps. After some research, I've found the "best" method is to use a switching regulator (constant voltage) to reduce the Peltier's input voltage to ~6 V, resulting in ~18 watts (a current draw of 2-3 amps). Constant current may be a better solution; however, I already have some constant-voltage buck converters, so I would like to use those if possible.
Ensure that the Arduino will always receive 5 V. I am using a LM7805 voltage regulator for this. The Peltier will draw a variable current, based on its temperature, so I never want the Peltier to use so much power, that the Arduino powers off.
I am still a semi-beginner, so I apologize if my questions don't make much sense.
AI: Your first concern is how much power the Peltier cooler needs. If you give the cooler less power, perhaps it's efficient overall, but it's still going to cool the water more slowly. Is that okay for your project?
Yes, you can use a switching regulator to efficiently convert voltage and/or current. I wonder why you don't buy a cooler that uses the amount of current you want it to use. I guess you already have the cooler and you're going to buy a regulator. I see that an 18W Peltier cooler is slightly more expensive than an 18W switching regulator, but not by much.
An Arduino doesn't use much current so a 7805 linear regulator is fine. Richard Thiessen has already pointed out that many Arduinos have built-in regulators so you might not need a separate one. If so, make sure to feed the power in a way that makes it go through the built-in regulator, so you don't break the Arduino.
You shouldn't have to think about "splitting current." You should add up all the current you need, and make sure your power source can supply at least that much. If it can't supply that much, then get a stronger power source and stop worrying about it.
Sometimes you have to deal with a weak power source and then you have to think about splitting current. But it doesn't sound like that is the case in your project.
I get the impression that you aren't really sure what happens when you draw lots of current from a battery. Basically, it causes extra voltage drop. The battery is labelled as 12V, but with a low current draw its voltage will decrease from about 13V when full down to about 10.5V when flat (check the datasheet). If a lot of current is drawn, the voltage will be a bit lower than that at every point (again, check the datasheet).
I guarantee your battery can put out 10 amps if you want it to, although it will go flat very quickly. Theoretically half an hour; in practice, if you discharge a battery quickly, you can't get the whole capacity, so it could be more like 15 minutes. It can probably even put out 20 amps for 5 minutes. Or 100 amps for a very short time. I'm lowballing my guesses. The datasheet for the battery should include a chart with the real numbers for your battery.
Note that if you continue draining the battery past the point of being flat, you will ruin it. You should add a way for the Arduino to detect the voltage, so it can turn off the cooler if the voltage is too low. It is possible that the solar charge controller already does this.
And I guarantee that even with 20 amps being drawn from a nearly empty battery, the voltage will be higher than 5V. For a 12V battery to drop down to 5V is an extreme drop; probably the only way it can get that low is if you ruin it. No need to worry about the Arduino not getting enough voltage.
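To put rough numbers on the runtime guesses above, here is a hedged sketch. The Peukert exponent and the 20-hour rating are assumed typical values for a small sealed lead-acid battery, not data for your specific part; the real curve belongs in its datasheet.

```python
# Rough battery runtime estimate for the 12 V / 5 Ah battery discussed
# above. The Peukert exponent 1.2 is a hypothetical typical value for
# a small sealed lead-acid battery; check your battery's datasheet.

def runtime_hours(capacity_ah, current_a, peukert=1.2, rated_hours=20):
    """Peukert-corrected runtime; peukert=1.0 gives the naive estimate."""
    rated_current = capacity_ah / rated_hours
    return rated_hours * (rated_current / current_a) ** peukert

naive = 5.0 / 10.0                  # 0.5 h: the ideal half hour at 10 A
real = runtime_hours(5.0, 10.0)     # noticeably less in practice
print(naive, round(real, 2))        # 0.5 vs roughly 0.24 hours
```

The Peukert-corrected figure comes out near 15 minutes, matching the answer's "more like 15 minutes" intuition for fast discharge.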
H: Selecting transistor base resistor for LED switching application based on datasheet
I have been trying to understand how to read a transistor datasheet properly, and to apply the ratings into my circuit to be able to calculate the needed resistor on the base leg.
I have a basic setup where my collector pin contains a 12V/3A LED supplied by a 12V power supply.
I am trying to control the power with a Raspberry Pi 4 Model b from the base leg.
What needs to be known is that the Raspberry Pi's GPIO pins (3.3V logic) can handle 16mA at most; therefore, I employed a Darlington NPN transistor - a BDX53C - to safely dim the LED (amplifying the drive current) with PWM control.
I tried to get the basics of calculating the right amount of resistor to be able to safely control the environment without harming either the transistor or the Pi. I saw in this specific discussion and other various similar topics on the internet that people either calculate the potential difference in the base leg as 0.7V if it is a regular NPN or PNP transistor or 1.3 ~ 1.5V if it is a Darlington transistor.
Somehow, I couldn't find those values anywhere neither in my transistor datasheet nor in others. So my questions would be:
Is the \$h_{FE}\$ value only needed to calculate the right resistor value in a setup? If not, are 0.7V and 1.5V potential differences nothing but a myth? Which variables represent those values in a transistor datasheet?
What do the \$V_{CE(SAT)}\$ and \$V_{BE(SAT)}\$ variables represent in a transistor setup? Additionally, how can I find more specific values of them if they are needed, for instance, \$I_{C}=3A\$ but \$I_{B}=8mA\$?
How can I assure myself that a transistor is suitable for my setup? For instance, how can I determine whether a BDX53C would properly yield qualified performance considering 12V/3A LED PWM control?
I observed that people use the transistors in their setup as a switch most of the time. Therefore, they make a certain calculation about the base resistors. Do I need to take something extra into account as I would want to use the power between 60-100% at different times using a PWM configuration in my Pi?
AI: For PWM the transistor is going to be used as a switch; it's either in saturation or cutoff. In the datasheet saturation specifications they show the base current as 1/250th of the collector current, so 12 mA for an \$I_C\$ of 3 A.
Looking at this graph:
that puts Vbe(sat) at just over 2 V, so subtract that from the voltage out of your Pi and divide by 12 mA to get the resistance for the base resistor.
From this graph:
Vce(sat) is about 1.3 V, so subtract that from the supply voltage to get the voltage available to the LED. |
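Putting the answer's procedure into numbers (assuming a 3.3 V Pi output and the roughly 2 V Vbe(sat) read off the graph):

```python
# Base resistor calculation following the answer above: forced beta of
# 250 from the BDX53C saturation spec, Vbe(sat) estimated from the graph.

V_GPIO = 3.3        # Raspberry Pi logic-high voltage
VBE_SAT = 2.0       # approximate Vbe(sat) at Ic = 3 A, from the graph
I_C = 3.0           # LED collector current, A
FORCED_BETA = 250   # datasheet saturation condition Ic/Ib

i_b = I_C / FORCED_BETA                 # required base current, 12 mA
r_base = (V_GPIO - VBE_SAT) / i_b       # ohms
print(round(i_b * 1e3, 1), round(r_base, 1))   # 12.0 mA, ~108 ohm
```

In practice you would round to a nearby standard value; rounding down slightly overdrives the base, which helps saturation as long as the GPIO current limit is respected.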
H: How can I find voltage of capacitor in this question?
The question asks us to find the energy that is stored in capacitors C1 and C2. Firstly, I drew the equivalent circuit and then determined the currents.
i1 = [3k/(3k+2k+4k)] * 6 mA = 2 mA ; i2 = 6 - i1 = 4 mA
For finding W1, we should find V1. So, my lecturer made this operation:
V1= 4k * i1 = 8V
He equated the voltage across the 4k resistor with the voltage across C1. But isn't the voltage across the 4k resistor equal to that across the 5k resistor plus C1? Could you help me please?
AI: There is no current flowing through the 5k resistor, so the voltage across it is zero.
But doesn't the voltage difference between the terminals of 4k resistor equal to 5k resistor and C1?
This is essentially correct, but the 5k resistor has 0 volts across it, so in practice the voltage across the 4k resistor equals the voltage across C1.
$$V_{4k} = V_{5k} + V_C$$
$$V_{4k} = 0 + V_C$$
$$V_{4k} = V_C$$
The reason why there is no current through 5k resistor is, that the C1 acts like an open circuit for DC voltage. When nothing can flow through C1, nothing can flow through resistor that is in series with it. |
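A quick numeric check of this reasoning, using the values from the question:

```python
# Numeric check: no current flows in the 5k branch (the capacitor
# blocks DC), so Vc equals the drop across the 4k resistor.
# Values are taken from the question.

i_total = 6e-3                       # 6 mA source
r_branch1 = 3e3                      # 3k branch
r_branch2 = 2e3 + 4e3                # branch with the 2k and 4k resistors

# Current divider: the branch current is set by the *other* branch's share.
i1 = i_total * r_branch1 / (r_branch1 + r_branch2)
v_c = 4e3 * i1                       # drop across the 4k resistor = Vc
print(i1 * 1e3, v_c)                 # 2.0 mA, 8.0 V
```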
H: Why is there a MOSFET here?
On the TI DRV8703-Q1 H-bridge, there's a MODE pin to select the control interface.
What's the purpose of the MOSFET highlighted in yellow?
If it's for overvoltage protection, why not a Schottky diode?
You have my thanks
AI: Perhaps a better answer is to compare this FET against using a diode (or a diode-connected BJT) for low currents in this application, with a 3V3/100k load. The flat line of the FET's switching loads below 20 uA tells you why it is good as an offset with variable low currents up to the current at which Vt (aka Vgs(th)) is rated. It is a crude but fairly stable reference for this purpose, and cheaper to make than a bandgap reference diode.
Below I use a clock and ideal analog switch to show the difference in voltage for load = 10M and 100k. Ideally, the voltage drop on the FET is constant and equals the Vgs(th), here = 1.5V.
The N-channel FET has some known Vt = Vgs(th) @ 50 uA or similar, with some tolerance voltage range, that produces a constant offset voltage below DVDD = 3.3 +0.2/-0.3. Thus this is sensitive to the actual Vt at and near the rated current for Vgs(th); in fact Vds = Vt, like a Zener at very low currents, but with wider tolerances, which is common for a FET's Vgs(th) @ Id = x uA.
I expect the mean of the "window detector", defined as > 0.75 V and < 1.35 V, to be (0.75+1.35)/2 = 1.05 V.
Thus valid logic levels with hysteresis may be detected or the mid-range for some purpose. ( I admit not to reading the specs.)
Why do they use this method to make a 2.23V Zener from Vgs(th)?
Perhaps, it was convenient to use the same CMOS logic used elsewhere for an analog purpose and thus almost free with suitable tolerances to provide hysteresis for 3.3V analog CMOS comparators. They can also detect 3 logic states at the input. <0.75, >1.35 and 0.75< x < 1.35.
Reverse engineer this design as follows;
The two resistors give a ratio of about 71% which means Vt may be estimated with a mean value of two comparators of 1.05V. The Vds drop = Vt from 3.3 should drop to 1.05V. Using KVL, 1.05 V = 71%( 3.3 - Vt) thus Vt = 2.23 V nom.
It is quite possible for thermal compensation that the comparators use a Vref using the same FET OFFSET voltage method and also provides reverse polarity protection. |
H: Push-Pull voltage-to-current converter
How to make the following voltage-to-current converter bidirectional, so that it maintains the requested current through the load, even when that load stores energy and attempts to oppose the reduction of the current flowing through it?
AI: You don't mean to make it bidirectional.
You mean that it should maintain the current independent of the polarity of the voltage across the load.
If the load is storing energy the voltage will be such that the drain of Q1 is lower in voltage than the supply. That is, the drain is negative relative to the supply.
If the load is sourcing energy the voltage across the load will be of the opposite polarity, such that the voltage at the drain is higher than the 30V supply. That is, the drain is positive relative to the supply.
The circuit shown can already do that provided the voltage and power rating of the MOSFET is adequate. |
H: How does exactly this DAC work?
So here it is:
I don't know how exactly this circuit works. I don't understand how that feedback loop sets up an accurate, known current-source bias, and which transistor it is trying to bias, Q4? For what?
With accurate current matching I presume it means that currents stay the same no matter what the voltage Vout is, but exactly what currents and why does this circuit need this? How does putting Vbias in one of the transistors in the pair differential helps this? I don’t understand.
Lets say I have code 10 as my input, so I presume d1 = 1 (or some other voltage) and d2 = 0. What happens next?
AI: This architecture is based upon a simple current steering dac.
Although, usually it allows for differential input
signal control words (\$d_{1}\$,\$d_{1N}\$,\$d_{2}\$,\$d_{2N}\$,...\$d_{k}\$,\$d_{kN}\$) and a differential load (\$R_{L}\$ on each leg).
"I don't know how exactly this circuit works. I don't understand how that feedback loop sets up an accurate known current source biasing, and which transistor is it trying to bias, Q4? For what?"
An ideal current steering dac will have perfect current sources that add (across a resistive load) to create a voltage output proportional to the switches turned on or off. So you would like close-to-ideal current mirroring under all conditions. The current source on the left mirror leg provides feedback and a high-impedance cascode to help achieve this. It can usually be set by some Rref and Vref that will fix the current of the mirror source and provide robustness against variations. The loads of the mirror are each of the current steering blocks. Forcing one to be fixed with Vbias will help create ideal loads for the mirror and improve matching (rather than have the loads vary with the differential inputs).
The switches \$d_{1}\$, \$d_{2}\$, ... \$d_{k}\$ will simply steer current proportional to the input codes and sum across the output \$R_L = 50\ \Omega\$. So for example, if you had \$d_{1}, d_{0} = 1, 0\$ you would get half \$I_{1}\$ plus a full \$I_{0}\$, or \$V_{out} = (I_{1}\cdot\frac{1}{2} + I_{0}\cdot 1)\cdot 50\ \Omega\$.
You might have an easier time starting with understanding a basic fully differential current steering dac.
edit* per request in comment, here is a good tutorial from Behzad Razavi. Note he doesn't show much in the way of current mirror architectures, but you can look up robust current mirror design for that.
edit2* I built an LTspice simulation to help you further. It uses binary-weighted current cells and NMOS inputs. It should be fairly simple to translate to thermometer coding (X1 for each current cell weight). You can just flip it upside down and use PMOS to match the book.
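As a companion to the simulation, here is a tiny behavioural model of a binary-weighted current-steering DAC. The reference current is a hypothetical value, not taken from the book's schematic; it only illustrates how steered currents sum across the load.

```python
# Toy model of a binary-weighted current-steering DAC: each bit switch
# steers a current of I_REF / 2^i into the 50-ohm load. The cell
# current is illustrative, not from the schematic.

R_L = 50.0       # load resistance, ohms
I_REF = 1e-3     # hypothetical full-weight cell current, 1 mA

def dac_vout(bits):
    """bits[0] has weight 1, bits[1] has weight 1/2, and so on."""
    i_out = sum(b * I_REF / 2**i for i, b in enumerate(bits))
    return i_out * R_L

print(dac_vout([1, 0]))   # heaviest cell only: 1 mA * 50 ohm = 0.05 V
print(dac_vout([1, 1]))   # both cells: 1.5 mA * 50 ohm = 0.075 V
```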
H: MX-FS-03V 433 MHz transmitter will it function with a coil missing?
I'm planning to make a project using the MX-FS-03V 433 MHz transmitter and the corresponding MX-05 receiver with a couple of Arduinos for short range radio communication. Range will be at most 10 meters.
I got a couple of these from Amazon, but one thing has me puzzled. This is a picture of the transmitter:
note that in the lower right corner are three unoccupied through holes. The extreme lower right one is where I will solder the antenna, but the other two have me puzzled. Looking at other pictures of this transmitter, there seem to be several revisions if the silk-screen on the PCB is anything to go by, and some have a second coil soldered between the two holes not used by the antenna, e.g. this one:
So what's going on here? I'm not that much of an expert, but I know enough to believe that removing a coil from an RF circuit is likely to have some pretty major effects. For one project, it is vital that I work at 433 MHz, so I'd like to have some idea of what I'm getting myself into.
AI: They can't work without a vital component in place. That inductor provides power to the high-frequency oscillator.
Luckily somebody else has gone to the effort of analyzing the issue.
Review this website.
FS1000A 433 MHZ TRANSMITTER – L2 MISSING
Here is the schematic from that site. |
H: How do I remove bathroom scale auto shutoff feature?
I’m a software guy so please excuse any silliness in this question.
I need a scale which will not shut off on its own and continuously show changing weight values.
Although I may have no choice but to learn about load cells and arduino/raspberry interfaces, I figured I’d start with cheap digital bathroom scales.
The bathroom scale I bought shuts off after some time of inactivity. Worse, even if I'm standing on it, after a few seconds it locks in the weight.
Back side and the lcd:
As far as I can tell, they use a custom chip/board. I was hoping it would be a standard hx711 with the shutoff feature bolted on, which I could somehow disable.
Is there anything I can do to get the scale to continuously read values, until I turn it off?
AI: Almost certainly you have no easy access to what would be needed to make that change.
There will be an MCU, maybe hidden under a blob of black epoxy close to the LCD display (because of all the required connections). The MCU may be locked (firmware cannot be read, even with the proper equipment, because the security bits are set) and quite possibly OTP (one-time programmable), and likely a random type from an obscure manufacturer, e.g. like this one (and would likely require an emulator-type setup to debug).
Note also that if you keep it 'on' it will drain the batteries in short order, mostly because of the current draw from the load cells. It also goes through an auto-tare routine at power up that deals with the drift in the load cells (and that means that load cell "creep" is not an issue).
Chances are both the MCU and the HX711 are powered continuously and the MCU issues a shutdown instruction to the HX711, which puts it into a low power state. Similarly the MCU would go into a low power mode and blank the display while waiting to be triggered.
Edit:
Based on the additional photo it looks like they've integrated the HX711 function into the MCU U2 (they're always innovating to trim costs in this kind of mass market). If it's a similar circuit, the SOT23 part will likely be marked "2TY". Maybe you can find the part number on the MCU. The programming would be done via the test pads on the PCB, including the Vpp pad. |
H: Question regarding the ADC MCP3201 interfacing with PIC16F72 microcontroller
I provide the 1.2 MHz clock to SCK pin to the ADC from PIC clock pin, but how I can receive the data from MCP3201 ADC, which provides a 12 bit binary data on SPI pin SDI of PIC? Buffer of PIC is 8 bit so how I can receive the extra 4 bits from ADC?
I will share you my C code. Please check and suggest me where I can modify the code.
#include <xc.h>
#include <PIC16F72.h>
#include "config.h"
#define _XTAL_FREQ 20000000
#define SS RA5
/***************INITIALIZING LCD MODULE**************************/
#define Data_PORT PORTB
#define Enable_pin RC6
#define RS RC7
#define CTRL_PORT_DIR TRISC
#define DATA_PORT_DIR TRISB
int LSB=0,MSB=0,FINAL_DATA=0;
/*****************************************************************/
void lcd_init(void);
void LCD_command(unsigned char command);
void LCD_data(unsigned char data);
void WriteStringToLCD(const char *s);
/**********************Initializing SPI*****************************/
void SPI_Initialize_Master();
unsigned SPI_Ready2Read();
char SPI_Read();
/****************************************************************/
int ADC_READ(void); /* forward declaration: defined below, called from main() */
int VALUE;
void main(void)
{
SPI_Initialize_Master();
__delay_ms(150);
LCD_command(0x01);
lcd_init();
const char msg[] = "ADC_VALUE=";
char a= 0x30;
LCD_command(0x80);
WriteStringToLCD(msg);
while(1)
{
VALUE = ADC_READ();
LCD_command(0x8A);
LCD_data(VALUE);
__delay_ms(500);
}
}
void lcd_init(void)
{
CTRL_PORT_DIR = 0x00; /* Direction of control port as Output */
DATA_PORT_DIR = 0x00; /* Direction of data port as Output */
LCD_command(0x38); /* LCD command - 5x7 matrix */
LCD_command(0x80);      /* LCD command - Force cursor to the beginning of first line */
LCD_command(0x3C); // activate 2nd line
LCD_command(0x0C); /* LCD command - Display ON, cursor OFF */
LCD_command(0x01); /* LCD command - Clear display */
}
void LCD_command(unsigned char command)
{
RS=0;
Data_PORT=command;
Enable_pin=1;
__delay_ms(5);
Enable_pin=0;
}
void LCD_data(unsigned char data)
{
RS=1;
Data_PORT=data;
Enable_pin=1;
__delay_ms(5);
Enable_pin=0;
}
void WriteStringToLCD(const char *s)
{
while(*s)
{
LCD_data(*s++); // print first character on LCD
}
}
void SPI_Initialize_Master()
{
TRISC5 = 0; // SDO pin set as output pin for data out
TRISC4 = 1; // SDI pin set as data in pin s
TRISC3 = 0; // SCK pin Set as output
TRISA5 = 0; // SS pin set as output pin for slave select
TRISB = 0x00;
SSPSTAT = 0b00100000;    // data transmitted on falling edge of clock cycle
SSPCON = 0b00100001; //pg 75/234 idle state for clock is low level & fosc/16
SS = 1;
}
int ADC_READ(void)
{
SS = 0;
SSPBUF = 0x01; // Initiate SPI bus cycle
SSPSTATbits.BF = 0; // CLEAR THE BUFFER BIT
while(!SSPSTATbits.BF);
__delay_ms(10);
MSB = SSPBUF;
__delay_ms(10);
SSPBUF = 0x81; // Initiate SPI bus cycle
SSPSTATbits.BF = 0; // CLEAR THE BUFFER BIT
while(!SSPSTATbits.BF);
LSB = SSPBUF;
__delay_ms(10);
SS = 1;
LSB=(LSB>>1);
MSB=(MSB<<7);
FINAL_DATA = MSB+LSB;
return(FINAL_DATA);
}
AI: You need to read the datasheet! It answers your question exactly. Here is a direct quote:
With most microcontroller SPI ports, it is required to clock out eight
bits at a time. If this is the case, it will be necessary to provide
more clocks than are required for the MCP3201. As an example, Figure
6-1 and Figure 6-2 show how the MCP3201 device can be interfaced to a
microcontroller with a standard SPI port.
Datasheet: http://ww1.microchip.com/downloads/en/devicedoc/21290f.pdf
There's more information in there on the subject that I have not quoted. |
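To make the datasheet's advice concrete: you clock 16 bits as two 8-bit SPI transfers and then strip the framing bits in software. The bit positions below follow one common reading of the MCP3201 timing (don't-care bits and a null bit ahead of B11..B7 in the first byte, B6..B0 plus one trailing bit in the second); verify against Figure 6-1 before trusting this exact arrangement.

```python
# Sketch of combining two 8-bit SPI transfers into one 12-bit MCP3201
# result. Assumed framing: the top 3 bits of the first byte are
# don't-care/null, and the last bit of the second byte is extra.

def mcp3201_combine(msb, lsb):
    """Merge the two SPI bytes and strip framing bits -> 0..4095."""
    return (((msb & 0x1F) << 8) | lsb) >> 1

# Round-trip check with a known 12-bit value, 0xABC:
raw16 = 0xABC << 1                      # place the code in the framed field
msb, lsb = (raw16 >> 8) & 0xFF, raw16 & 0xFF
print(hex(mcp3201_combine(msb, lsb)))   # 0xabc
```

The same masking and shifting can be done in the PIC C code on the two bytes read from SSPBUF, instead of the separate `LSB>>1` / `MSB<<7` arithmetic in the question.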
H: Arduino pin state (HIGH and LOW voltage range)
When a pin is configured as INPUT and read with digitalRead, the HIGH state refers to a voltage >=3V, while a LOW state refers to a voltage =< 1.5V:
https://docs.particle.io/cards/firmware/language-syntax/variables/#:~:text=When%20a%20pin%20is%20configured,is%20present%20at%20the%20pin.&text=When%20a%20pin%20is%20configured%20to%20OUTPUT%20with%20pinMode%20%2C%20and,pin%20is%20at%203.3%20volts.
Why is the "floating/undefined state" voltage range 1.5 V wide by design? It appears to be rather large...
AI: It is simply difficult/expensive to manufacture parts with a very specific voltage limit, and the value of doing so is in reality very low since every digital output will have a similar behavior.
In reality, the inputs will read as either high or low and nothing else, the specification is simply what voltage is needed to guarantee that under all circumstances on all inputs on every device, the input will read as the state you intend.
Probably, most devices will actually switch somewhere in the middle of the 1.5 to 3 V span, but you won't know exactly when and it will vary with temperature, individual etc.
If you need to switch at a specific voltage, go for an analog comparator based design. |
H: Mystery small, electric motor: shaft held stationary while motor spins around it
I want to wire this motor to a 3-pin plug. The colors are not standard. Which of these wires is neutral, active and earth.
It would be of interest to know what this kind of motor is called?
I was recently given this small electric motor (approximately 200mm long x 60mm diameter). The shaft is mounted stationary to a framework. The motor housing spins freely around the fixed shaft (c.f. pulley attached to housing). It has purple, brown and blue wires. I have never seen a motor that operates this way and haven't found description of or name of this type of motor. I’m trying to figure out what it is and how to wire it to make it work.
Edit: Australian colour codes would suggest that brown is active, blue is neutral, leaving purple as earth, but when I tried this combination, it tripped the breaker.
AI: As others said, the motor needs a 4uF run/start capacitor, and yes, you’ll have to exercise the centrifugal switch to make sure it works.
The grounding is done via the frame.
So you’ll need a standard power cord (eg Schuko to flying wires), then connect green-yellow to the metal frame, and blue+brown to the motor and capacitor. The capacitor connection depends on how the centrifugal switch is wired. And to figure that you’ll need a multimeter.
The motor is from an appliance like a washer or a dryer. It originally drove a V-belt – see the pulley at the end of the housing. |
H: Current distribution in a circuit that includes current and voltage sources
The question asks us to find \$ V_{c1} \$ and \$ V_{c2} \$ .
The equivalent circuit is below. My lecturer drew the currents as in the picture.
And here is my lecturer's solution for calculating the currents:
The sum of currents is 3 A. So, it seems like the voltage source does not produce any current. Is that true?
AI: So, it seems like the voltage source does not produce any current. Is that true?
Assuming the calculations are correct: it doesn’t have to produce current.
When you disconnect the voltage source, the current source develops exactly 9V on the terminals where the voltage source was connected. As you reconnect the voltage source, there’s no potential difference and thus no further current flows from it.
This “disconnection” operation is formalized as the superposition method. You can analyze a linear circuit with only one independent source connected at a time, repeat the analysis for each source, and then just linearly add up the partial solutions for voltages and currents at each node/component. This doesn’t generally work if the circuit is nonlinear, ie. if it contains nonlinear elements such as transistors or op-amps. In specific cases it may work, but that’s the exception not the rule.
The problem circuit was designed specifically to illustrate such a situation where the voltage source produces no current. Such a perfect match is rare in practical circuits, unless they were designed to make it happen. |
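A minimal numeric illustration of that situation, with hypothetical values (a 3 A source and a 3 ohm equivalent resistance, so the open-terminal voltage is exactly 9 V):

```python
# Hypothetical values illustrating the answer above: a current source
# feeding a resistance develops exactly the voltage of the source that
# gets connected across it, so that source supplies zero current.

I_SRC = 3.0   # current source, A
R = 3.0       # equivalent resistance seen at the terminals, ohms
V_SRC = 9.0   # voltage source connected across the terminals, V

v_open = I_SRC * R                  # terminal voltage before connecting V_SRC
i_from_vsrc = (V_SRC - v_open) / R  # extra current the voltage source pushes
print(v_open, i_from_vsrc)          # 9.0 V and 0.0 A
```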
H: wire rotary switch with leds that turn one at a time
I have a 9V battery, a rotary switch (one pole, 5 positions), and yellow LEDs (2V drop). How can I wire it so that as I rotate the switch the LEDs turn and stay on until I reach the end of the switch?
Meaning as I turn the switch first I get one led on, then two, three, and so on
I'm a beginner trying to wire a very simple toy for my niece.
So far I tried wiring it so that I connect + to led then led to next led, each one to its switch, and then ground from the switch back to the battery. But had no luck.
I'm not too concerned about some loss in light intensity since this is a homemade toy.
Thank you!
AI: You can do it as following (example for 3 position switch, the arrow is indicating the switch, because the schematic tool doesn't seem to have the component).
Then it is in R1 position, the closed circuit is R1+D1 in series. When in R2 position, then you have R2+D2+D1 in series. The last one is R3+D3+D2+D1. You will have to calculate the proper R1, R2 and R3 values to get a relatively even light intensity in each position (R3 < R2 < R1).
simulate this circuit – Schematic created using CircuitLab |
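To size the resistors for the 3-position example, assuming a 10 mA LED current (an assumed target, adjust for your LEDs):

```python
# Resistor values for the 3-position example above. Each position adds
# one more 2 V LED drop in series, so the resistor absorbs less voltage.
# The 10 mA target current is an assumption.

V_BAT = 9.0    # 9 V battery
V_LED = 2.0    # yellow LED forward drop, from the question
I_LED = 0.010  # assumed 10 mA target current

resistors = []
for n in (1, 2, 3):                      # LEDs in the loop per position
    r = (V_BAT - n * V_LED) / I_LED
    resistors.append(r)
    print(f"position {n}: R = {r:.0f} ohm")   # 700, 500, 300 ohm
```

Note that with a 9 V battery and 2 V drops per LED, a single series string runs out of headroom beyond four LEDs, so a 5-position version would need a different arrangement for the last positions.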
H: Does IRLZ44N need heatsink when used as a switch
I'm doing some modifications to my boombox and need help with IRLZ44N.
I don't have any good switches left but I have couple IRLZ44N and I've used them in car but never needed this much power, peak power being 200W no matter the Voltage(11.2-16.8V) so I have to draw 12-18A and I have no idea if I need a heatsink or how to calculate it, no need to spoon-feed me but some help would be nice, Thanks.
AI: Your circuit, at nominal input, will exceed the absolute maximum Vgs rating of the transistor. If you add an 8V or 9V zener diode you can prevent that.
With 5V or more drive the maximum Rds(on) is 0.028\$\Omega\$, at Tj = 25°C. So dissipation would be 18 * 18 * 0.028 = 9W. We can add perhaps 50% to that since it's going to get very, very hot, so more like 14W (See Fig. 4 in the datasheet) that's assuming you can keep the junction temperature to 100°C or less). That's a large heat sink, maybe a fan too, depending on your maximum ambient and how reliable you want it to be.
Without a heat sink it will die in seconds, if it doesn't unsolder itself first.
I would suggest finding a better MOSFET. |
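The dissipation estimate above, spelled out (the 50% hot-temperature factor is the answer's rough reading of Fig. 4, not a precise datasheet number):

```python
# Dissipation estimate from the answer above: worst-case current,
# datasheet max Rds(on) at Vgs >= 5 V, plus an assumed ~50% rise
# in Rds(on) at elevated junction temperature.

I_LOAD = 18.0          # worst-case load current, A
RDS_ON_25C = 0.028     # max Rds(on) at Tj = 25 C, from the datasheet
HOT_FACTOR = 1.5       # assumed Rds(on) increase when hot (Fig. 4)

p_cold = I_LOAD**2 * RDS_ON_25C    # conduction loss at 25 C
p_hot = p_cold * HOT_FACTOR        # more realistic hot estimate
print(round(p_cold, 1), round(p_hot, 1))   # ~9.1 W cold, ~13.6 W hot
```

Either figure is far beyond what a TO-220 can shed without a heatsink, which is the answer's point.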
H: Calculate coil impedance, inductance and resistance
The voltage over a coil is u(t)=100⋅sin(100t+pi/3) V and current through it i(t)=20⋅sin(100t).
First off let's calculate the impedance, I'm quite sure I'm correct on this one.
This is where I get stuck because now I need to calculate resistance of the coil and inductance.
I know that Z=jwL, w=100 (from the sine functions) and then I can get L to be Z/jw. The problem here is that Z has an angle. I cannot figure out how to get over that problem. I'm also confused about calculating the coil's resistance, because the impedance is in ohms. Isn't that the resistance then?
AI: Having Z on rectangular form. The real resistance is the real part of Z=2.5+j*4.33. Therefore R=2.5 Ohm.
Reactance is the imaginary part and has the formula \$X_L = \omega L\$, so \$L = X_L/\omega = 4.33/100 \approx 43.3\ \mathrm{mH}\$.
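The whole calculation can be checked with complex arithmetic:

```python
# Phasor arithmetic for the problem above: |Z| = 100 V / 20 A = 5 ohm,
# with voltage leading current by pi/3 (60 degrees).

import cmath, math

Z = (100 / 20) * cmath.exp(1j * math.pi / 3)   # 5 ohm at 60 degrees
R = Z.real                                      # series resistance
X = Z.imag                                      # inductive reactance
L = X / 100                                     # w = 100 rad/s from sin(100t)
print(round(R, 2), round(X, 2), round(L * 1e3, 1))   # 2.5, 4.33, 43.3 (mH)
```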
H: Embedded firmware and app?
I am in Hardware Design trying to get into Embedded Systems, so please pardon me for lame questions.
So,
Scenario 1 : I upload an Arduino Sketch into Arduino with the help of Arduino Bootloader .
Scenario 2 : I develop a PCB on which first firmware is flashed and then a Linux based app is installed.
Questions :
1. Can Arduino Bootloader be called as the firmware?
2. Can Arduino Sketch be equated to an app?
3. What language is the bootloader programmed in? Is it C?
4. What is the common programming language for firmware?
5. Do Embedded Systems Engineers need to learn app development (Linux /Android /whatever) too?
Or just efficiency in the firmware part(C I guess) is enough?
Thanks in advance!
AI: Taking your questions in order:
1. I usually refer to the entire software on a small MCU as firmware. So in this case, the combination of bootloader and application is the firmware.
2. Sure.
3. The Arduino bootloader is indeed written in C.
4. There is more than one common language, but C is the most common.
5. Depends on what you want to work with. In a small team developing some Linux-based hardware, it is likely that you will also get involved in writing apps for said hardware.
H: Load testing using a resistor bank
I have a bespoke 48V down converter that I'm looking to test which should be able to provide a maximum of 500W of power. I'm looking to test the device under various loads - from 50W to 500W.
I'm trying to do this on a budget - and already have some 4.7ohm 1kW rated chassis mount resistors. Since one resistor will only ever have a maximum of 500W going through it I presume these should be fine (perhaps a fan blowing over their surface to help dissipate heat may help). I will be connecting these resistors in series/parallel to achieve various resistive values to change the load being tested - thinking of using 12AWG cable for this. Max current draw will be approx. 10A so this should be fairly safe?
Any suggestions greatly appreciated.
AI: One 4,7 Ohm resistor connected to 48 V burns up 490 W.
If you connect two of them in series across 48 V, each one dissipates 122.5 W, and both together 245 W.
Ten of these resistors in series dissipate 49 W in total, only 4.9 W each.
For a 490 W load you may use a bank of four resistors: two parallel strings of two resistors in series. Each resistor then dissipates only 122.5 W.
So if you avoid using a single resistor at 48 V, 10.2 A and 490 W, each resistor dissipates only a small fraction of its rated 1 kW. You are well below the limits and the resistors should not get too hot.
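As a sanity check on these numbers, here is a short Python sketch (assuming ideal 4.7 Ω resistors and a stiff 48 V supply, as in the answer) that computes the per-resistor dissipation for each configuration:

```python
# Power dissipated by identical resistors across a fixed supply voltage.
# Assumes ideal 4.7-ohm resistors and a stiff 48 V source.

V = 48.0  # supply voltage in volts
R = 4.7   # individual resistor value in ohms

def per_resistor_power(series: int) -> float:
    """Power in each resistor of a string of `series` resistors in series
    placed across the supply (voltage divides equally along the string)."""
    v_per_resistor = V / series
    return v_per_resistor ** 2 / R  # P = V^2 / R

print(per_resistor_power(1))       # single resistor: ~490 W
print(per_resistor_power(2))       # two in series: ~122.5 W each
print(per_resistor_power(10))      # ten in series: ~4.9 W each
print(per_resistor_power(2) * 4)   # 2x2 bank (two parallel strings of two): ~490 W total
```

Adding parallel strings doesn't change the per-resistor dissipation, only the total load, which is why the 2×2 bank reaches 490 W with each part at a comfortable 122.5 W.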
H: Parallel RC Circuit Time Constant
For a general RC circuit that looks like the following:
(Sorry for the rough sketch of the circuit, I couldn't find a picture online)
What would the time constant or RC equations be?
My thinking:
The time constant would be R2C.
I drew a large loop around the circuit (containing the battery, R2 and C). From KVL I have: V − R2·I2 − Q(t)/C = 0, and solving the differential equation I get Q(t) = VC(1 − e^(−t/(R2·C))).
I'm not sure I'm right so please correct me if I'm wrong.
AI: Correct. The parallel resistor R1 has no effect if the components are ideal. If you are using a more realistic model of a battery as an ideal voltage source with some finite internal resistance then it would come into play.
Edit: however if you were to remove the battery and observe the time constant as the capacitor discharged through R1 and R2 the time constant would be different. |
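A quick numeric check of that charging solution confirms its limiting behavior: Q starts at zero, reaches about 63.2% of its final value after one time constant, and approaches VC. The component values below are illustrative assumptions, not from the question.

```python
import math

# Illustrative values (assumed, not from the question)
V = 9.0       # battery voltage, volts
R2 = 1e3      # series resistance, ohms
C = 100e-6    # capacitance, farads
tau = R2 * C  # time constant = R2*C = 0.1 s

def Q(t: float) -> float:
    """Capacitor charge from the KVL solution Q(t) = V*C*(1 - exp(-t/tau))."""
    return V * C * (1 - math.exp(-t / tau))

print(Q(0))             # 0: capacitor starts uncharged
print(Q(tau) / (V * C)) # ~0.632: one time constant reaches ~63.2% of final charge
print(Q(10 * tau))      # ~V*C: effectively fully charged
```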
H: Add DC component to signal
I want to add a DC component to a pulsed signal to prevent it from taking negative values. For this, I connect the pulse signal to a capacitor and a voltage divider. With the capacitor we eliminate the continuous component that the signal could have and it is centered at zero. So, with the voltage divider we define the DC component that we want to add to our signal. In ours, we would need to add 1.75 V, since the peak-peak voltage is 3.5 V.
However, when simulating the circuit we do not obtain a pulse signal that is worth 0 V or 3.5 V, but rather a signal with a kind of voltage peaks on the falling and rising edges of the signal. I don't understand it...can anyone tell me why this is happening?
UPDATE
The pulse signal would be the output of the comparator in the following circuit:
Expansion of the "comparator-add DC" circuit:
AI: The resistor values are much too low.
The resistors and capacitor create what is known as a RC time constant. When the input level changes the capacitor will gradually charge to this new value but you must select the values so that it charges slowly enough not to distort the AC signal.
This means that the product of the effective resistor value and the value of the capacitor must be much greater than the period of the input signal.
The effective resistor value for the circuit you have is equal to both resistors in parallel. For your use that should be high enough so as not to load the input excessively - as others have suggested a value of 10k for each resistor is reasonable for many applications and would provide an effective resistance of 5k. The values you currently have give a very heavy load of 5 ohms.
You have set the period of each half of the signal to 10 ms, so the capacitor must not charge significantly within that time. We will select a time constant of ten times that, i.e. 100 ms.
Using the effective resistance value of 5k we need a capacitor value of 20uF to meet the 100ms time constant. 5k * 20uF = 100ms.
The values you have in your circuit have a time constant of only 5 microseconds giving the very short pulses you are seeing.
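The arithmetic above can be checked directly. The 10 Ω per-resistor and 1 µF values below are my back-calculated assumptions that reproduce the 5 Ω effective resistance and 5 µs time constant cited for the original circuit:

```python
def parallel(r1: float, r2: float) -> float:
    """Effective resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

# Recommended values from the answer: two 10k resistors and 20 uF
tau_good = parallel(10e3, 10e3) * 20e-6
print(tau_good)  # 0.1 s = 100 ms, much longer than the 10 ms half-period

# Values implied for the original circuit (10 ohms each and 1 uF are my
# assumptions chosen to match the quoted 5-ohm / 5-microsecond figures)
tau_bad = parallel(10, 10) * 1e-6
print(tau_bad)   # 5e-06 s = 5 us, far too short: the capacitor charges
                 # almost instantly, producing the edge spikes observed
```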
Update
With the additional information provided in the question, a better solution is just to use a couple of resistors and a diode to provide the correct levels for the input of the following circuit.
This has the advantage of operation down to DC.
The 4.7k and 10k resistors shift the output of the comparator from the -2.5V to approximately ground when conducting. The 10k resistor pulls it up to 5V when the comparator is not conducting. The schottky diode prevents the output voltage ever going negative by more than a few tenths of a volt. It is not absolutely necessary as the Arduino following can tolerate a negative voltage provided it is very low current.
simulate this circuit – Schematic created using CircuitLab |
H: Calculating Fuse Panel Wattage
I am trying to figure out how many watts my house is drawing. I used a clamp-on meter to measure the current on the hot leg of my fuse panel, which showed 128.8 A. I am then assuming 120 V, which gives me 15.456 kW. However, my electric rate is $0.086/kWh, meaning I would have to pay $1.32, which I know is not correct.
What step am I missing here?
AI: There are a few different pieces to this puzzle, and unfortunately an instantaneous read will not give you the full answer.
120V vs. 240V
Most homes in the US have a 240V feed on two hot wires, plus a neutral that is right in the middle. That gives you 120V for most things (most lighting plus most receptacles around the house for small appliances, computers, TV, chargers, lights, etc.) and 240V for a few big things (clothes dryer, HVAC, water heater, oven, cooktop, EV charging). There are some variants (e.g., a gas dryer or cooktop will use 120V, a gas water heater might not use any electricity at all, window air conditioners are often 120V). But the basic point is that you really have 240V power, and you need to measure both legs because, thanks to the 120V usage, each leg may be using a different amount of power at a particular point in time. If they were the same (all 240V appliances, no 120V usage or exactly balanced between the two legs) then you could measure one hot wire and multiply by 240. Instead you measure both hot wires and multiply each one by 120.
Demand vs. Usage
This can be huge. At any point you get a current reading, that tells you right now how much power you are using. But it doesn't tell you how much you will be using tomorrow or even 5 minutes from now. Demand changes every time you turn on a light, plug in a charger, turn on a toaster or use any other electric appliance. Plus demand can change even when you do nothing at all - HVAC, refrigerator, water heater, etc. cycle on/off using a thermostat even if you aren't in the house.
A smart meter (which your utility may have already installed, but which you likely can't read directly, unfortunately) will actually monitor voltage, current and many other parameters all the time and constantly calculate both the demand and usage. Typical meters will record demand and usage every 15 minutes. Actually for a reasonable approximation getting average demand every 15 minutes or total usage every 15 minutes will give you all you need to know. Demand is measured in kW (essentially Volts x Amps, except for power factor...) and usage is measured in kWh (essentially Volts x Amps x hours, except for power factor...). In other words, if you take kW average (not peak) demand for a 15 minute period and divide by 4 you get the number of kWh used in that 15 minute period. Add up all those 15 minute periods and you have your usage for the day/month/year/etc. Multiply the total usage (in kWh) by $0.086 and you have your total cost for the day/month/year/etc.
The bottom line is that an instantaneous read is very useful for certain things but is useless for determining your electric bill. You need either a meter that can read all the time and remembers the information (a.k.a., a smart meter), or stand there for days on end, writing down the demand every 15 minutes or so (not recommended).
Unfortunately, due to safety/code issues (that are quite valid), residential smart meters are not a cheap thing at this time. Or rather, the utilities are putting them in at a good price, but you can't do things exactly the way they do things (for a bunch of reasons). Depending on your actual needs (determine whether the utility is billing correctly? manage your electricity use?) there may be other solutions. |
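The demand-to-usage arithmetic described above looks like this in code. The 15-minute demand readings are made-up example data, not real measurements:

```python
# Convert 15-minute average demand readings (kW) into usage (kWh) and cost.
# The readings below are invented example data for one two-hour window.
demand_kw = [3.2, 3.0, 5.6, 4.1, 2.8, 2.9, 6.0, 3.3]  # eight 15-min periods

rate = 0.086  # $/kWh

# Each 15-minute period is a quarter of an hour, so kWh = kW * 0.25
usage_kwh = sum(d * 0.25 for d in demand_kw)
cost = usage_kwh * rate

print(f"{usage_kwh:.3f} kWh, ${cost:.4f}")  # total usage and cost for the window
```

Summing these quarter-hour blocks over a whole billing period is exactly what the utility's smart meter does before applying the rate.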
H: Difference AC signal and small signal / DC Signal and Operation Point
What exactly is the difference between an AC signal and a small signal? Or, analogously, between a DC signal and an operating point?
I've understood an AC signal as a signal with a frequency > 0 Hz and DC as a signal with a frequency of 0 Hz, but after research (here as well) I'm still confused. I even found definitions that describe DC only as a unipolar signal and AC as only polarity-changing (regardless of the frequency perspective). It looks like the exact definition depends on the context.
AI: An AC signal is any signal which alternates around zero. A DC signal is a signal that is not AC, whose flow is unidirectional. An operating point represents the solution to a circuit at a given time, represented in the voltages and the currents of the constituent elements. A small signal is a concept in analyzing a circuit's AC response by linearizing the circuit around its operating point and determining the response of the system to an infinitesimal AC signal, as if it were linear; it measures the dynamics of the system through simplification.
A 1 Vpk sine with a 2 Vdc offset would be a DC signal, overall, since the alternating part does not go below zero (i.e. the current flows in one direction at all times). The same 1 Vpk signal with a 0.5 Vdc offset will be an AC signal because there are times when the direction of the current is reversed.
Take, for example, the following circuit:
V(in) is an AC signal because it alternates around zero, even if less negative than positive. V(out) is a DC signal because at no times it goes below zero. The green numbers are the operating point as calculated by the solver (the floating one is the current through the diode).
And this is the small signal response of the system. Notice that the source now has a parameter, {DC}, representing the DC voltage at time=0 for V1. This is used with a .STEP command that alternates its value between 0 V (blue trace) and 1 V (red). Ignore the green labels for now, they can only display one value at a time, and the one that you see is the operating point calculated for DC=0; for DC=1 you would have seen the same values as in the first picture.
What you see are two traces, which represent the response of the system, linearized at the calculated operating point. The blue trace is for DC=0, which means the diode is not polarized, therefore not conducting. As a consequence, its resistance is very large and, combined with the 1 kΩ load, it acts as a resistive divider causing an ≈-85 dB response.
With DC=1, its resistance is minimal and acts as a short, therefore the response (red trace) is that of a 1st order circuit. You can also see that the phase starts rising towards the end, hinting of a zero nearby, but off scale: that's the junction capacitance.
So, for both cases, the response was given by replacing the diode (a nonlinear element) with its equivalent linear representation, passive elements and linear sources (controlled or not). And it represents the dynamics of the system because it uses a small signal around an operating point on the slope of the transfer function, or a derivative, and derivatives are used to measure dynamics.
It's for this reason that opamp analyses showing 100+ dB response look very unrealistic, until you realize that it's not a large signal analysis, it's just a response meant to be analyzed as the dynamics of the system -- or how it would respond if it were linear around a certain point. |
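The idea of linearizing around an operating point can be made concrete with the diode's small-signal resistance, r_d ≈ n·VT/ID, which is the linear element that replaces the diode in the small-signal analysis described above. The parameter values below are illustrative assumptions, not taken from any specific device:

```python
import math

# Exponential diode model: I = Is * (exp(V/(n*Vt)) - 1)
# Linearizing at a bias current I_D gives dI/dV = I_D/(n*Vt) (for I_D >> Is),
# so the small-signal resistance is r_d = n*Vt / I_D.
# All parameter values here are illustrative assumptions.
Is = 1e-14    # saturation current, A
n = 1.0       # ideality factor
Vt = 0.02585  # thermal voltage at ~300 K, V

def diode_current(v: float) -> float:
    return Is * (math.exp(v / (n * Vt)) - 1)

def small_signal_resistance(i_d: float) -> float:
    """Linearized (small-signal) resistance at bias current i_d."""
    return n * Vt / i_d

i_d = 1e-3                           # 1 mA operating point
print(small_signal_resistance(i_d))  # ~25.85 ohms

# Cross-check against a numeric derivative of the exponential model
v_bias = n * Vt * math.log(i_d / Is + 1)
dv = 1e-6
g = (diode_current(v_bias + dv) - diode_current(v_bias - dv)) / (2 * dv)
print(1 / g)                         # matches ~25.85 ohms
```

This is the same substitution the simulator performs: at DC=1 the bias current is large, r_d is tiny and the diode "acts as a short"; at DC=0 the bias current is essentially zero and r_d is enormous.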
H: Minimum FPGA clock frequency
I currently work with two FPGAs, Microchip/Microsemi ProASIC3E and AMD/Xilinx Zynq-7020.
In their datasheets, the recommended minimum operating frequency is 1.5 MHz for the A3PE ProASIC3E chip. The minimum frequency for the Zynq chip is not mentioned.
While I can understand the limit of maximum speed the FPGA can clock at, as there are timing issues and stray capacitance issues at overclocked speeds, I do not understand the requirement for minimum clock speed for FPGAs.
Would such FPGAs work if they are clocked at, say, a few hundred kilohertz?
AI: In their datasheets, the recommended minimum operating frequency is 1.5MHz for the A3PE ProASIC3E chip.
Actually, it's fIN_CCC that's specified as 1.5 MHz min in the datasheet for ProASIC3E Flash Family FPGAs.
That's the Clock Conditioning Circuit (CCC) input frequency. A ProASIC3E CCC contains a PLL and other clock circuitry. PLLs have a frequency range that they can lock to, which here is 1.5 MHz to 350 MHz.
The standard clocked logic circuits within FPGAs/CPLDs are registers (flip-flops/latches) and block RAM. There's no IC minimum input frequency for registers, just a rise/fall time requirement. Block RAMs typically don't have a minimum either.
Some FPGAs contain 'hard IP' blocks: dedicated circuitry rather than programmable logic. Some of these, such as internal ADCs, have required clock frequency ranges. Others, such as SPI or some CPUs, again have no minimum clock frequency. |
H: Deriving TVOC value in ppb/ppm units
Our project requires a Total VOC value in ppb or ppm units. But the sensors on the market give relative resistance change. How can I derive the TVOC ppb/ppm value? Or I don't know if there is such a sensor on the market?
No need for specific gas detection or ppb/ppm value for it.
Example VOC sensor: https://cdn-shop.adafruit.com/product-files/3199/MiCS-5524.pdf
NOTE: The challenge can be here that since I do not know the exact gas in the environment, I cannot convert the resistance to ppm by solving the linear equation.
SENSOR CHARACTERISTICS
AI: MOX gas sensors don't measure TVOC directly; but the digital sensors which do on-board processing can give you an estimate based on whatever proprietary algorithms they've developed. Have a look at these:
BME680 / BME688
ENS160
SGP30
CCS811
That list will become outdated, but in general all digital gas sensors give you some sort of TVOC estimate because air conditioning is one of the main applications.
The estimate may not be super accurate though, especially outside of a typical office/residential environment.
To do the same thing with an analog-output gas sensor, you would need to develop your own model that maps sensor resistance to TVOC.
This is not trivial. First you need to compensate for a lot of things:
temperature
humidity
drift (sensor responses to a given air mixture are not necessarily stable over time, and depend on the age of the sensor as well as previous exposures)
Then you need to translate the sensor signal to a measure of TVOC. If you don't know which gases are present in the environment, this is indeed impossible, unfortunately.
What the digital sensors do is make an educated guess about the gasses the sensor is likely to encounter in its target application. Typically, the manufacturer would collect some ground truth data in a controlled environment (precisely-controlled air mixtures, spectrometers, etc.), train a machine learning model to predict the true TVOC from the sensor resistance, and store that model in the sensor firmware. The prediction may or may not be good enough in your particular case -- you would have to make some tests to find out.
In general, though, sensors that combine multiple gas sensor types with different sensitivity curves (e.g. ENS160) will give you a more accurate estimate, because they can to some extent disambiguate the mix of gases in the environment.
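To illustrate what "your own model" could look like in the simplest case, here is a two-point log-log interpolation from resistance ratio to concentration, exploiting the fact that MOX response curves are roughly straight lines on log-log axes (as in the MiCS-5524 characteristics). All numbers are hypothetical, invented for illustration, and not from any datasheet:

```python
import math

# Hypothetical two-point log-log calibration. MOX sensor response curves are
# roughly straight lines on a log-log plot of Rs/R0 versus concentration.
# These calibration points are invented for illustration only.
cal = [(1.0, 10.0),    # Rs/R0 = 1.0  ->  10 ppm equivalent
       (0.2, 500.0)]   # Rs/R0 = 0.2  -> 500 ppm equivalent

(r1, c1), (r2, c2) = cal
slope = (math.log(c2) - math.log(c1)) / (math.log(r2) - math.log(r1))

def ratio_to_ppm(rs_over_r0: float) -> float:
    """Interpolate concentration from resistance ratio on log-log axes."""
    return math.exp(math.log(c1) + slope * (math.log(rs_over_r0) - math.log(r1)))

print(ratio_to_ppm(1.0))  # 10.0 at the first calibration point
print(ratio_to_ppm(0.2))  # 500.0 at the second
```

This only works for a known, single gas; as the answer notes, with an unknown mixture a single analog sensor simply cannot be inverted to a TVOC figure this way.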
H: Connecting analog and digital ground planes back to power supply ground
Reference
I refer to the article "Grounding in mixed-signal systems demystified, Part 2”, written by TI engineers Sanjay Pithadia and Shridhar More. Link here: https://www.ti.com/lit/an/slyt512/slyt512.pdf
On page 7, paragraph 2, it says:
“The AGND and DGND pins should be connected to each other and to the analog ground plane, and the analog and digital ground planes should be connected individually back to the power supply. The power should enter the board in the digital partition and be fed directly to the digital circuitry, then filtered or regulated to feed the analog circuitry. Then only the digital ground plane should be connected back to the power supply.”
The last sentence seems to contradict the first sentence. I have highlighted the two in bold.
Question 1
So, should digital ground plane be the ONLY one connected back to PSU ground, or should I connect both to PSU ground?
Question 2
Should the analog LDO (i.e. A_VDD power supply) be placed over the digital ground plane area? Over the analog ground plane area? Or neither, and with a trace from one of the above, but with no ground pour below?
NOTE
My question relates to a single 2 sided PCB with 2x ADC chips on the top layer. Incidentally they are the MCP3561R sigma-delta ADCs. I am aware this section of the article is not exactly the same as my application.
AI: Grounding in mixed-signal systems demystified
"Demystified" with small print disclaimers applied :)
Your goal is to contain AC and DC current loops so that the voltage drops they develop across the ground plane impedances aren't adding up to the effective input voltages fed to various inputs (of the gain stages, ADCs, output amps, etc.)
Adding slots or otherwise physically separating the ground planes is just one of many means to achieve such containment.
Any work in this area should first quantify to an extent what sort of currents, frequencies, and parasitic impedances we have to deal with. Then estimate the worst-case "noise" or unwanted coupling from those current loops to sensitive nodes. Then decide how to route those currents so that they don't cause problems. Such routing and containment may involve adding cuts to ground planes, relocating components, overlapping current loops so the fields they produce cancel out (even if partially), etc.
should digital ground plane be the ONLY one connected back to PSU ground
Usually, but it really depends on what other connections there are. A system block diagram depicting all external I/O to your system (power, digital and analog) is necessary to provide any guidance in this respect.
The starting point is always a single uninterrupted ground plane and well identified current loops that are arranged to circulate locally, affecting only small areas of the reference planes (be it ground or power).
You may often find that the performance is well within the requirements (which you didn't state, BTW).
Such a plane necessarily has the lowest bulk impedance between any two points. As you start removing copper, you trade off impedance for potential containment and isolation of interfering signals. As with all trade-offs, if you can avoid it, so much the better.
Should the analog LDO (i.e. A_VDD power supply) be placed over the digital ground plane area? Over the analog ground plane area? Or neither, and with a trace from one of the above, but with no ground pour below?
The LDO can straddle the two planes. The connection between the two can be made at the ADC, or at the LDO, or somewhere in-between. The LDO should be then very close to the ADC.
With a 2-layer board, such designs become "interesting" and you should not expect the first version to work as well as it ultimately could. In absence of sophisticated modeling tools and battle-hardened intuition, you'll have to experiment and then connect the experimental results to theory, i.e. understand what parasitic impedances play a role and why.
You'll want to set yourself up for precision differential voltage measurements, i.e. build (or buy) a differential wideband preamplifier you can use to measure small voltage differences between nominally "connected" points in your circuit. The preamp bandwidth needs to be solid - 100MHz would be nice, 10MHz would be better than nothing. |
H: Should the top torus of a Tesla coil be cut?
Doesn't the top load torus act like a one-turn coil in short circuit?
Doesn't it "steal" (decrease) the energy transfer to the arc?
As a load, it has an inductive reactance (a 1-turn solenoid). Does this reactance affect the operation of the coil in a constructive way or a bad way? Shouldn't it have a cut (a vertical slit) with a circular insulating disc inserted into it, so that our torus is now an open-circuit 1-turn coil? Isn't this better? If so, won't this insulator now act like a capacitor, forming with the 1-turn coil of the torus an LC tank circuit with a resonance in the MHz range? How does this resonance affect the tens-of-kHz operation of a standard Tesla coil?
Thus the question arises: Which is better: leave the top load torus standard as a short-circuited 1 turn coil, or cut it?
I'm confused, please help. Total noob. Please have mercy.
Please don't tell me the torus acts as a capacitor. I already know that. Please don't tell me it's called "top load" because it's a capacitive loading of the secondary coil. I already know these things. This is not what I'm asking.
AI: While the top load does form a 'shorted turn', the effect of this on coil operation is minimal.
The shorted turn has low coupling with most of the secondary, and therefore the effect is fairly small.
The shorted turn has very large area and so low losses, and so dissipates little energy. Far larger losses occur in the thin wire and the thin skin-depth of the secondary winding, streamer loading, and the spark gap.
The main effect of the topload shorted turn is to increase the secondary resonant frequency slightly by reducing its inductance a little.
Some coilers do break the topload shorted turn in various ways. Some do make the cut you suggest, but round the edges so as not to cause a corona problem. Some build a topload from a broken spiral of copper pipes (I'll see if I can find a link to pictures). There's little evidence that either improves the performance of the coils in any marked way.
There is so much else on a coil that needs to be got right, that worrying about the topload shorted turn should (literally) be the last thing on your mind. |
H: How to charge two 12V batteries in series with a 12V charger
I have setup the following circuit in my home. There is a logic circuit, (NodeMCU) which operates a relay, which causes it to activate the electromagnet.
simulate this circuit – Schematic created using CircuitLab
All of the stuff here, except for the logic circuit is pretty much DIY, we made the electromagnet ourselves so can't provide a value on how much magnetic field it produces, however, it's just enough at 24V.
The two 12V batteries that you see there are batteries that were salvaged from old electrical scooters, combining them in series made the 24V we required, but to charge the circuit, we only have a 12V charger in our home.
Is there any way to charge those batteries without disconnecting anything? i.e., can we charge this system, while it's in operation? And when there is a power outage, it will continue to rely on the batteries?
Thanks for your help.
AI: Direct answer: No, it's not possible.
Let me further explain. As you said, the series-connected batteries produce a 24 V output. If your charger is 12 V, the series-connected batteries will create a voltage difference of 12 V (24 V from the batteries minus 12 V from the charger) across the charger itself, which could damage the charger if it has no internal protection/cut-off circuit.
Solution 1: You could create a control circuit so that when the charger is connected the batteries are switched into a parallel configuration with transistors and/or MOSFETs, but that would be a much more complex circuit design.
Solution 2: Change to a 24 V charger instead; then you need to check how much current the charger provides versus how much the system consumes (battery charging + the electromagnet) to see if the charger can keep up while the system is on.
Really important advice: when you are using magnetic coils, you must place diodes in parallel with them to protect your source. The coil discharge currents can be really high, damaging all circuitry connected to it. So place a diode in parallel with the input of relay 1 and another across the electromagnet itself, reverse-polarized. This means that the diode's cathode is connected to positive and its anode to negative in both cases, as in the image below to assist your implementation.
One more piece of advice: as you haven't specified what the electromagnet really is, if it acts like an inductor you need to limit the maximum current, or the batteries could be shorted, resulting in permanent damage or even explosion. So add a resistor in series to limit the current going to the coil, shown in the image as well.
Assuming your batteries aren't intelligent, if their voltage drops below a certain level they can be permanently damaged. When working with rechargeable batteries, we protect them with a cut-off controller, which is basically a control circuit that will automatically disconnect the batteries when they drop below a certain voltage level.
H: Is there sufficient 'ambient hum' to obtain a 60/120 Hz reference wirelessly?
In a typical urban (USA etc) setting, we seem to be awash in 'powerline hum'. Has it been done, or could it be practical, to recover this to get a wireless reference for a battery-powered system, like a clock, for example?
How rural would a site have to be where this would be completely impractical? How far does 'the hum' extend from high-tension lines, etc?
AI: Thanks to the ubiquity of wireless communications networks, we have excellent frequency references available for relatively nothing.
Mains as a frequency reference is rather poor, in that there's no guarantee anymore that the long-term drift will average to zero with reference to standard time.
A $100-class TCXO (temperature-compensated crystal oscillator) would have frequency within 0.1ppm of nominal or better. That's single seconds worth of error accumulated over a year. A $10-class TCXO will have frequency within 0.3ppm of nominal, for well under half a minute of yearly accumulated error. If you keep these oscillators under a constant load, with a well regulated supply voltage, they are about as solid as it gets without going for an atomic clock of some sort.
After soldering the part on the board, assembling the product, powering it up and aging the device for a few months - and rejecting those that drift the worst - you can calibrate out the off-nominal frequency error and get frequency stability within 1ppm for years and years, even for devices that go for $10 in qty 100.
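The accumulated-error figures quoted above follow directly from the ppm ratings:

```python
# Worst-case accumulated time error for a constant fractional frequency error.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds

def yearly_error_s(ppm: float) -> float:
    """Seconds of drift per year at a constant frequency offset of `ppm`."""
    return ppm * 1e-6 * SECONDS_PER_YEAR

print(yearly_error_s(0.1))  # ~3.2 s/yr for a 0.1 ppm TCXO
print(yearly_error_s(0.3))  # ~9.5 s/yr for a 0.3 ppm TCXO
print(yearly_error_s(1.0))  # ~31.6 s/yr at 1 ppm, after calibration
```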
There's no way to beat that with a mains line receiver in most circumstances. Plus, those TCXOs typically have very good jitter and short-term stability as well, so they work very well as frequency references for sensitive radio receivers etc.
You can get a cesium atomic clock on a chip for about $2.5k qty 250. Those hold phase down to a few microseconds per day, or a few milliseconds per year worst-case. If you were making a gadget that cost the equivalent of a Juicero, and wanted a superb real-time clock, you could easily run it off a "low grade" atomic standard. At least you'd get something genuinely useful for the price paid.
Recovering 60Hz line frequency is not really problematic - there's just so much of it, even in rural settings, as long as there's mains in the building or nearby (e.g. on the same lot).
You'd probably want a low impedance coil receiver sensitive to magnetic fields, with some selectivity, and then a preamp, further analog bandpass, more gain, then a high-resolution ADC to find the line frequency peak, isolate it, and drive a software-defined oscillator + PLL to keep in sync with it.
In terms of ubiquitous frequency references that are easy to receive, the cell towers are much better than mains, and I'm talking just about carrier frequency stability, without any demodulation. They typically phase-lock TX to a GPSDO rubidium standard. If you have even an extremely weak signal from a cellular base station, you've got a reference better than most already, better than free-running rubidium clocks. The base station doesn't transmit continuously so very short-term holdover is needed, but low phase noise TCXOs are already good at that.
I'm far from a time nut, so you could get information way better than what I got off the top of my head if you head on over to the Time Nuts mailing list. |
H: How does this old oscillating light board work?
I know EE.SE loves a good treasure hunt, so here's one to noodle on. This device was made in the 1950s or 1960s by a Bell Labs EE. He designed it as a toy to amuse his young son. Said child, now turning 70, has asked a family member (me) how it might work again.
Reportedly, when connected to 96V of dry cells, the device blinks its lightbulbs and changes their pattern and frequency based on the configuration of the rheostats and switches.
I'm having trouble identifying the components, or even how this circuit, which seems to be made of 1uF capacitors and 150k/180k resistors, could ever oscillate.
Thoughts?
AI: This is a classic neon bulb relaxation oscillator. The bulbs themselves have a negative resistance characteristic that allows an RC circuit to form an oscillator. If you connect multiple lamps you can get interaction between them or just make individual oscillators that operate at different frequencies.
More in the GE Glow lamp manual (1965)
The lamps are probably similar to NE-51H/NE-67. |
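The blink rate of one lamp's RC section can be estimated from the standard relaxation-oscillator formula. The strike and maintaining voltages below are typical-neon guesses, not measured values from this board:

```python
import math

# Neon-bulb relaxation oscillator: the capacitor charges through R toward Vs
# until the lamp strikes at Vf, then discharges to the maintaining voltage Vm
# and the cycle repeats.  Period:  T ~= R*C * ln((Vs - Vm) / (Vs - Vf))
Vs = 96.0   # supply from the dry cells, volts
Vf = 65.0   # neon striking voltage -- a typical guess, not measured
Vm = 50.0   # maintaining (extinction) voltage -- also a guess
R = 150e3   # ohms, one of the board's resistors
C = 1e-6    # farads, one of the board's capacitors

T = R * C * math.log((Vs - Vm) / (Vs - Vf))
print(f"period ~{T*1000:.1f} ms, frequency ~{1/T:.1f} Hz")
```

With these assumed values the lamp blinks at roughly 17 Hz; the rheostats change R and hence the rate, which fits the described behavior of the toy.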
H: CAN bus idle voltage too low
I'm testing a CAN bus circuitry based on MAX3051. VCC is 3.3V
Two nodes are only connected on the bus. Termination resistors 120 Ohm are on each end of the bus.
When the bus is in the idle state (no one is transmitting), both CANH and CANL measure less than 0.5 V with respect to ground (measured with an oscilloscope).
The bus works fine; there's solid communication between the nodes and I don't see any messages being lost or anything.
I would expect the voltage on the bus during idle state to be around VCC/2.
Why is it so low?
AI: In Fig. 5 of the datasheet, you can see that the chip can only drive the CAN dominant state (pull CANL to GND and CANH to VCC). In the CAN recessive state, nothing is driven and the termination resistors pull the signals to within about 0 V of each other, but not to ground or any other particular voltage. This is fine, as "The MAX3051 input common-mode range is from -7V to +12V" (in the datasheet). From the CAN bus Wikipedia page: "the recessive common mode voltage must be within ±12 V of common".
There are other standards like RS485, where the common mode voltage does matter, and there are extra resistors at the terminations to pull the lines to VCC/2 in the recessive state. |
H: How should I go about overriding or switching between an internal control voltage source when an external source gets connected?
For example, I'm using a potentiometer to modulate an internally generated +5v audio signal, and I'm sending the output signal down to the next part of my circuit. If I were to add an external input to the circuit, how could I make sure the circuit completely ignores the potentiometer output and instead feeds from the external voltage input?
I thought of maybe isolating the ground on the external input and using a comparator and some transistors to switch inputs if the voltage to ground on the input changes (indicating a live connector is plugged in). That solution seems very unreliable and complicated to me, though. Am I on the right track, or is there an easier solution?
AI: The easiest thing here is to use a jack specifically designed for this purpose. If you use a 3.5mm plug on the external cable, you can buy a 3.5mm jack that will short your internal (potentiometer) signal to the tip input until the plug is inserted. The displacement of the plug breaks the internal connection and uses the external one instead.
Thanks to CUI devices for the image.
If you want an electronic solution, you'll have to tell us more about the characteristics of your external input |
H: Help identifying an SMD chip
I have stumbled upon this mystery chip, it is probably a voltage regulator. It is inside a projector I want to repair. From what I gather, it's probably a 1.8 volt regulator.
If you can confirm please go ahead. The package is SOT-223.
AI: Dii is the logo for Diodes Incorporated.
At first, "4aG" looks like a possible SMD marking code as this shows several 1.8V regulators, but none by DI. Checking DI datasheets, it looks more like "17-18" is the device marking, "18" being 1.8V and "17" being the model. The only DI datasheet I could find which shows this naming format is this one page 10:
So it is definitely a DI linear regulator, 1.8V, model "17", in a SOT-223 package. But none of the models ending in *17 have a datasheet which matches this marking.
Could be that the device was custom-made or is outdated, relabeled, etc.
Chances are good the 63-18 could be an equivalent, but you'll want to make sure the pins are in the same order first of course. |
H: How do electronics (like transformers) convert high voltage, low current to low voltage, high current, and vice versa?
I am fairly new to tinkering with electronics/electronic-adjacent ideas and I have been wondering: how do electronics get the desired voltage/current to operate properly? I understand in AC power, there are step-up and step-down transformers that allow you to raise or lower voltage, but what about the current? The additional voltage can't come out of nowhere due to thermodynamics, so does it convert the current into voltage? And does that work in the opposite way?
So, for instance, say you had a power source of 12 watts (24 volts, 0.5 amps) and you had a device that needs 2 amps. Would a transformer essentially convert the (24 volts, 0.5 amps) to (6 volts, 2 amps)?
Alternatively, if you had a power source of 12 watts (6 volts, 2 amps) and a device that needs (24 volts, 0.5 amps), would a transformer essentially convert the (6 volts, 2 amps) to (24 volts, 0.5 amps)?
I understand conversions would have losses, but these are just hypothetical examples.
If I am mistaking how transformers work, what electrical components and/or circuits can accomplish the functionality stated above?
AI: It's Faraday's law:
Whenever the flux linked or associated with a circuit changes, a voltage is induced in the circuit.
The primary side creates an alternating flux in the iron core. The core couples the flux to the secondary side, where a voltage is induced. The secondary voltage depends on the turns ratio between the primary and the secondary.
A transformer cannot transform power. To get 100kVA out of an ideal transformer, 100kVA must go in.
$$S_1 = S_2$$
$$V_1 I_1 = V_2 I_2$$
$$\frac {V_1} {V_2} = \frac {I_2} {I_1} $$
$$\alpha = \frac {V_1} {V_2} = \frac {N_1} {N_2} = \frac {I_2} {I_1} $$
An \$\alpha\$ < 1 is a step-up, \$\alpha\$ > 1 is a step-down and \$\alpha\$ = 1 is an isolation transformer. This refers to the impact on the voltages, because the impact on currents will be opposite.
A step-up transformer steps up voltage, but steps down current. |
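To make the ratios concrete, here is a quick sketch in Python using the asker's 12 W examples (the 4:1 and 1:4 turns ratios are assumptions chosen to produce the voltages asked about):

```python
# Ideal transformer: S1 = S2, so V1/V2 = N1/N2 = I2/I1.
def secondary(v1, i1, turns_ratio):
    """Return (V2, I2) of an ideal transformer with ratio a = N1/N2."""
    v2 = v1 / turns_ratio
    i2 = i1 * turns_ratio
    return v2, i2

# Step-down, a = 4: 24 V / 0.5 A in -> 6 V / 2 A out (same 12 W).
print(secondary(24, 0.5, 4))     # -> (6.0, 2.0)

# Step-up, a = 0.25: 6 V / 2 A in -> 24 V / 0.5 A out.
print(secondary(6, 2, 0.25))     # -> (24.0, 0.5)
```

Power in equals power out in both directions; only the voltage/current split changes.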
H: How do we find the current in this circuit?
How do I find the current drawn by the 11V in this circuit? I understand that the circuit can be redrawn but I do not understand the simplifying procedures to do so.
AI: There are no general rules for simplifying circuitry like this.
Look for places where there are components in series or in parallel. Simplify them to a single component.
Look for components directly in parallel with voltage sources - that component can only affect the current in the voltage source but not affect any other part of the circuit, in many cases it can be removed as it won't affect the goal of the analysis.
In circuits for simulation, look for ideal current sources in series with any other component - that component will not affect the current and can be removed and shorted out.
For this particular one we can use symmetry to determine nodes of the same potential. Any resistors across those nodes can be replaced by a short-circuit (or an open circuit if appropriate) as that won't affect the overall result.
By observation the voltage at each end of the resistors marked R2 are the same. If we replace each of them by a short circuit it can be seen that the top two R1's are in parallel and so are the bottom two R1's so can each be replaced by a resistor of R1/2.
The middle section consists of two R1's and two R3's all in parallel. They can be replaced by a single resistor of equivalent value equal to R1//R1//R3//R3 (// meaning "in parallel with").
The two R1's at the bottom can be replaced with a resistor of value R1/2.
The circuit has been reduced to three resistors in series R1/2 + R1/2 + (R1//R1//R3//R3).
The current can then be easily calculated.
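As an illustration of the reduction above, a short Python sketch (the R1 and R3 values here are made up for illustration, since the schematic's values aren't given in the text):

```python
def parallel(*rs):
    """Equivalent resistance of resistors in parallel."""
    return 1 / sum(1 / r for r in rs)

# Illustrative values only -- substitute the real R1/R3 from the schematic.
R1, R3, V = 4.0, 2.0, 11.0

# After the symmetry argument: R1/2 + (R1 // R1 // R3 // R3) + R1/2
middle = parallel(R1, R1, R3, R3)
R_total = R1 / 2 + middle + R1 / 2
I = V / R_total
print(R_total, I)
```

With these sample values the middle section works out to 2/3 Ω, so the source sees 14/3 Ω in total.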
It would be easier to describe if every element had a unique reference. |
H: High Voltage DC Motor (AC rectification) Relay, Current Limiting, and Smoothing Capacitor
Looking at controlling a HVDC motor like so -
Type DC781 (2) LSG
Operates on 6 - 220 VDC
84 ohm terminal - terminal resistance
No load speed of 8000 RPM @ 220 VDC - 100 mA
Stall current at 220 VDC: 2.6 A
I would be using 120VAC input however
My questions are: can I turn this on and off with a relay? What would be the best location for this relay - on the motor side or on the AC side? I imagine there may be issues with the inrush current.
Basically can I use this circuit replacing the "Power Switch" with a relay like this - https://www.digikey.com/en/products/detail/panasonic-electric-works/ALZ11B05W/645567
A rectifier (400V rating, 4A) - https://www.vishay.com/docs/88655/kbl005.pdf
Also, what capacitor would I need, or do I need one at all? A quick napkin calculation implies I'd need a huge cap, but I imagine ripple doesn't really matter for a motor like this? Is it accurate that I can skip the cap entirely, since the 170VDC peak from the rectifier averages back out to ~120VDC at 50-60Hz anyway?
And lastly selection of a thermistor to limit inrush current. I am unsure on what parameters to select here, I imagine just current.
AI: Capacitive loads tend to cause relay contacts to weld when closing
unless torque-ripple is a problem you really only need a capacitor large enough to suppress the electrical noise of the motor. (100nF perhaps)
A spinning DC motor itself acts much like a capacitor - do some experiments at low voltage if you don't believe me.
Rectified 120V sine wave DC will act like about 160VDC when connected to the motor, but that's OK because your motor is good up to 220V
Given the numbers presented that relay looks suitable, and you'll no doubt find it convenient that the coil terminals are distant from the switch terminals.
if you don't use a large capacitor you won't need to worry about inrush (unless the 2.6A stall current is a problem)
if you do use a large capacitor you'll probably need to upgrade the rectifier, and may want to consider adding a choke between the rectifier and the capacitor to help with power factor.
The best location for the relay is on the AC side of the rectifier.
if you don't like the relay option, 2.6A is not much current, so a solid-state solution should not be too expensive - some sort of transistor or SCR controlling the DC supply.
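For reference, the basic numbers for full-wave rectified 120 V RMS with no smoothing capacitor can be sketched as follows (with a large capacitor the output would instead sit near the ~170 V peak):

```python
import math

Vrms = 120.0
Vpeak = Vrms * math.sqrt(2)        # peak voltage out of the bridge

# Average of a full-wave rectified sine: 2 * Vpeak / pi (~0.9 * Vrms)
Vavg = 2 * Vpeak / math.pi
print(round(Vpeak, 1), round(Vavg, 1))   # 169.7 108.0
```

The motor's inductance smooths the current, so the unsmoothed waveform behaves somewhere between the average and the RMS value depending on the load.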
H: Does ARM gdb not have TUI layout support?
If I run
arm-none-eabi-gdb -tui myProject.elf
This returns
TUI mode is not supported
If I run it without the -tui flag and try to call lay or layout in gdb, it does not recognize the command. However, if I run normal gdb instead of arm-none-eabi-gdb, layout works, but of course I get an error about the unknown architecture "ARM".
So, is there a way to step through my C code using GDB and have a more visually appealing experience? Aside from using an IDE
AI: The TUI feature is optional when building GDB, so it probably depends on who compiled your arm-none-eabi-gdb.
There is a configuration option --enable-tui that needs to be set in order to build GDB with TUI.
It is mentioned in the GDB documentation.
If you didn't and don't want to build GDB yourself, you could ask your provider about the feature. |
H: Logic gates propagation delay
I'm studying digital circuits and I have a question about the propagation delay of a logic gates. I've read that propagation delay is defined as the time required for the output to reach 50% of its final output level from when the input changes to 50% of its final input level.
But what does it mean to "reach 50%" of a logic value like 1 or 0? I know that logic values are represented by different voltage levels, but I still find the concept of "50%" difficult to grasp. 50% of what?
AI: It's important to always remember that digital electronics is just 'convenient analogue' electronics.
The logic input stage and logic output stage are made from transistors, as shown below. As such, they have switching time that gives the stage a transition (change) time.
When a logic output goes from driving LOW to driving HIGH ('push phase' in the diagram), that output waveform will have a rise time. How fast that is depends on the driver itself and the load on it: PCB track/pin capacitances that need charging, load current, etc. The more it's loaded, the slower the rise time.
Similarly, when going from HIGH to LOW ('pull phase' in the diagram), the output will have a fall time. Those same load capacitances need discharging, and a load current will need sinking.
So the transitions are measured from specific voltage points along those rise times and fall times. Here, your 50% refers to when a logic output crosses the 50% of its gate supply voltage, on the way up when rising or the way down when falling. See the waveform diagram below. |
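To see where a 50%-point delay number can come from, here is a rough sketch that models the loaded output as a simple RC charge curve (the R and C values are illustrative assumptions, not data for any real gate):

```python
import math

def t_to_fraction(r, c, fraction):
    """Time for an RC node charging toward its final level to reach the given fraction."""
    return -r * c * math.log(1 - fraction)

R, C = 1e3, 1e-12            # 1 kOhm effective driver, 1 pF load (made-up values)
t50 = t_to_fraction(R, C, 0.5)
print(t50)                    # ~0.693 * RC, i.e. roughly 693 ps here
```

The 50% crossing of an RC curve happens at ln(2)·RC, which is why heavier loading (larger C) directly stretches the measured propagation delay.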
H: Can someone help me understand this code for the ADXL345 accelerometer?
I'm trying to learn how to read the angles measured from an ADXL345 accelerometer (Adafruit) for a beginner project where the sensor will be attached to a short stick and I want to read the angles I'm holding/moving the stick at. I have a couple questions about some code I'm trying to understand (from: https://howtomechatronics.com/tutorials/arduino/how-to-track-orientation-with-arduino-and-adxl345-accelerometer/):
#include <Wire.h> // Wire library - used for I2C communication
int ADXL345 = 0x53; // The ADXL345 sensor I2C address
float X_out, Y_out, Z_out; // Outputs
void setup() {
Serial.begin(9600); // Initiate serial communication for printing the results on the Serial monitor
Wire.begin(); // Initiate the Wire library
// Set ADXL345 in measuring mode
Wire.beginTransmission(ADXL345); // Start communicating with the device
Wire.write(0x2D); // Access/ talk to POWER_CTL Register - 0x2D
// Enable measurement
Wire.write(8); // (8dec -> 0000 1000 binary) Bit D3 High for measuring enable
Wire.endTransmission();
delay(10);
}
void loop() {
// === Read accelerometer data === //
Wire.beginTransmission(ADXL345);
Wire.write(0x32); // Start with register 0x32 (ACCEL_XOUT_H)
Wire.endTransmission(false);
Wire.requestFrom(ADXL345, 6, true); // Read 6 registers total, each axis value is stored in 2 registers
X_out = ( Wire.read()| Wire.read() << 8); // X-axis value
X_out = X_out/256; //For a range of +-2g, we need to divide the raw values by 256, according to the datasheet
Y_out = ( Wire.read()| Wire.read() << 8); // Y-axis value
Y_out = Y_out/256;
Z_out = ( Wire.read()| Wire.read() << 8); // Z-axis value
Z_out = Z_out/256;
Serial.print("Xa= ");
Serial.print(X_out);
Serial.print(" Ya= ");
Serial.print(Y_out);
Serial.print(" Za= ");
Serial.println(Z_out);
}
My questions are regarding the section that merges the two values for each axis:
X_out = ( Wire.read()| Wire.read() << 8); // X-axis value
X_out = X_out/256; //For a range of +-2g, we need to divide the raw values by 256, according to the datasheet
I understand that there are two values for each axis, but how does this expression merge them correctly? How would you know how to merge them as I can't seem to find an explanation anywhere as to why there are two values and what each represents?
I've also seen other articles (e.g: https://morf.lv/mems-part-1-guide-to-using-accelerometer-adxl345) that talk about how you need to choose a resolution and range for the sensor, and those will correlate to a number in mG/LSB? So what does all this mean and how would I go about choosing them?
AI: The 16-bit values are transferred as 8-bit bytes so one of the bytes contains the high 8 bits and the other contains the low 8 bits.
Just like us humans understand the number 42 as containing a digit of 4 that represents tens and a digit of 2 that represents ones, we get the value 42 by taking 10×4 + 1×2, or just by concatenating the 4 with 2.
This concatenation is exactly the same with binary digits and bytes, the byte that contains the high 8 bits is moved 8 bits left so the low 8 bits can be added to the final value. Shifting left by 8 bits is same as multiplying with 256.
Since floating-point values are used and a raw reading of 256 corresponds to 1.0 g, the value needs to be divided by 256.
You have 16 bits to represent a value. If you want them to represent a large range, say ±16 g, you need more bits for the integer part, so fewer bits are left for the fractional part, and the resolution is lower because the steps are larger. If you only want to represent a small range of ±2 g, you need fewer bits for the integer part and have more bits left for the fractional part.
It is basically the same as having, say, 4 digits to represent a number and being able to choose where the decimal point goes. You can have a large range between 000.0 and 999.9 with steps of 0.1, or a small range between 0.000 and 9.999 with steps of 0.001, which means more resolution.
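The merge and the scaling can be sketched in Python; note that the raw 16-bit value is two's-complement signed, which the Arduino snippet above glosses over (the helper names here are made up):

```python
def merge_axis(low, high):
    """Combine two 8-bit register reads (low byte first) into one signed 16-bit value."""
    raw = (high << 8) | low
    if raw >= 0x8000:            # sign-extend two's complement
        raw -= 0x10000
    return raw

# At the +/-2 g setting, 256 LSB correspond to 1 g (about 3.9 mg/LSB).
def to_g(low, high):
    return merge_axis(low, high) / 256.0

print(to_g(0x00, 0x01))   # raw 256  -> 1.0 g
print(to_g(0x00, 0xFF))   # raw -256 -> -1.0 g
```

Without the sign-extension step, a tilt in the negative direction would read as a huge positive value instead of a small negative one.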
H: MOSFET gate area?
I have a transistor with constant Vdd voltage but my W, L parameters are decreasing - what happens to my gate area?
AI: Please read the answer of Mr. Abdelhalim Abdelnaby Zekry in this post (https://www.researchgate.net/post/is_w_l_ratio_of_mosfet_can_effect_the_efficiency_of_the_circuit) to a similar question:
Dear Wafa, The W/L ratio is the major parameter in the hand of the
engineer to adjust the current in its transistor.
This can be easily understood from the current equation of the MOS
transistor in the saturation mode of operation:
ID= (Kp W/2L) (VGS- Vtn)^2
where Kp is transconductance parameter and Vtn is the threshold
voltage. The other symbols have their usual meaning.
VGS is normally limited by the power supply voltage for a given
feature size.
So it remains to increase W/L to increase the transistor current.
Increasing the transistor current keeping its VDS constant increases
the power consumption in the transistor.
One normally chooses L minimum to reduce the area required to achieve
a certain current, and increases W as required.
Decreasing the area is also required to reduce the MOS capacitance
to achieve high switching speed or high operating frequency.
For more information on the MOSFET please see the book in the link:
https://www.researchgate.net/publication/236003006_Electronic_Devices |
H: How to set VHDL entities internal latches for testing purposes
How do I set internal/private latches deep inside entities for testing purposes ?
simple example
I have an entity deep inside my architecture which I cannot easily manipulate, with an internal signal counter which acts as a latch I would like to test in a testbench. I would like to start the test with the counter at a specific value.
library IEEE;
use IEEE.Std_logic_1164.all;
use IEEE.numeric_std.all;
entity thingy is
port (
clk : in std_logic;
S : out unsigned(7 downto 0)
);
end entity;
architecture thingy_arch of thingy is
signal counter : unsigned(7 downto 0) := (others => '0');
begin
S <= counter;
process (clk) begin
if rising_edge(clk) then
counter <= counter + 1;
end if;
end process;
end architecture;
I have tried to use alias internal is << variable t.counter : unsigned(7 downto 0) >>; to drive it during the testbench process, but it only drives the latch to "XX…X".
AI: Examining internal signals
You have to bring internal signals of interest out on top-level ports for a testbench to be able to access them.
This is a strength of VHDL. Architecture contents cannot be accessed by other architectures, only through the ports made public in an entity.
You can add a wrapper top-level entity/architecture around your current top-level design entity. The latter would become the second-level entity. The top-level entity would contain ports for the pins of the target device only. The second-level entity can contain the same ports plus your test interface ports. The testbench would instantiate the second-level entity. The synthesis tools would use the top-level entity and all below.
The downside of the wrapper is that you're not simulating the actual full design. This may be insignificant in a personal project but it's not allowed for design qualification by many companies/organisations because you're not testing the full design.
You can also have top-level ports for test values that go to unused and unconnected pins on the real board. Make sure you enable internal pull-ups on such unconnected input pins.
Changing internal signal values
Changing an internal signal cannot be done from VHDL unless you add a mechanism to your design to do so from the top-level ports.
You can take a register's value from a constant defined in a package. Then you can have two versions of the package, one for synthesis, one for simulation, with relevant values for testbench or target device. It doesn't allow for testbench control of the values, though. Again, formal design qualification procedures may not allow this.
In ModelSim, you can use a force command to change internal flip-flop values. But its execution would have to be synchronised with the right moment(s) in your testbench's execution, which usually makes it impractical.
Don't use default signal values
You should never use initial values for signals and instead implement a reset, as explained in this answer. If you add a reset input port, that can be controlled by the testbench. |
H: Schottky Diode configuration
I'm studying some schematics about Google Coral Dev Board, and I have found this:
Where VBUS_1 comes from the USB-C OTG connector and VBUS_VIN comes from the USB-C power-only connector. What is the purpose of the 0R resistors? I understand that the diodes block reverse current, but those resistors let current flow in both directions.
AI: The DNP probably stands for do not populate, so there might be different variants around during assembly - or there were during development until it was settled to just the one variant being sold now.
Based on this schematic I can think of up to five configurations making sense (without knowing exactly what USB_VIN and USB_VBUS are):
Populate:
Only R26 -> device is powered from USB_VIN
Only R176 -> device is powered from USB_VBUS
Only D12 -> device is powered from USB_VIN (minus diode drop), VIN is protected from getting reverse currents
Only D5 -> device is powered from USB_VBUS (minus diode drop) VBUS is protected from getting reverse currents
D5 and D12 -> device is powered either from VIN or VBUS (minus diode drop) depending on which is higher ("diode or" of two power supplies) both are protected from reverse currents
Populating both resistors makes no sense, as you might run into shorting VBUS to VIN. Which could destroy one or both supplies.
Populating a 0 ohm resistor and a diode in parallel makes no sense as the diode wouldn't be active.
Note: when the resistor wouldn't be 0 R, the analysis would change - so a resistor parallel to a diode might have some value in some other circuit. |
H: What is this technological device?
Today at work, I came across this device. One of the black lids has the inscription PGEP79. A small amount of colorless liquid was contained inside the transparent casing. Overall, it looks like an electrical device, so that's why I decided to ask the question here. If this community does not tolerate such questions, kindly direct me to an appropriate forum. Thank you.
Example photo:
AI: It's a used cartridge from a vaping device. There is an electrical heater inside that vaporizes the liquid so the user can inhale it.
You should be able to measure the heater resistance if you have a multimeter- it should be around a couple ohms or less (down to less than 0.5\$\Omega\$). |
H: Thermionic cathode heater filament conundrum
An old CRT I have specifies 6.3 V (DC, I believe) on the heater filament, resulting in a 0.6 A filament current (spec).
But I measure the filament resistance as 1.8 Ohm.
At V = 6.3 V that would result in a 3.5 A filament current, about 6 times too high!
I'm tempted to measure the filament current with a 6 V DC battery pack, but I'm afraid of burning out the filament.
Any advice is appreciated!
AI: 1.8 ohms is the cold resistance; it will increase as the filament warms to operating temperature. |
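The arithmetic behind that, as a quick sketch:

```python
# Spec: 6.3 V across the filament gives 0.6 A at operating temperature.
V_spec, I_spec = 6.3, 0.6
R_hot = V_spec / I_spec          # operating (hot) resistance
R_cold = 1.8                     # measured with a multimeter at room temperature

print(R_hot)                     # 10.5 ohms
print(R_hot / R_cold)            # roughly a 5.8x increase from cold to hot
```

A roughly 6:1 cold-to-hot resistance ratio is typical for tungsten heaters, so the 1.8 Ω measurement is consistent with the 6.3 V / 0.6 A spec rather than contradicting it.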
H: A question about gain setting of a DAC versus its supply voltage
I want to power this DAC chip with 3.3V using its Vdd pin. If I'm not wrong, the spec says nothing about output voltage versus supply voltage Vdd, and the following formula is given:
The chip uses a fixed 2.5V reference voltage and one can set the gain.
Is it possible to obtain a 5V output (Vout) using a 3.3V Vdd? Can 2.5V be doubled to 5V even though Vdd is 3.3V? Can Vout be more than Vdd?
(Vdd is not included in the formula as a condition)
AI: No, not even close. You can get (at most) 3.3V out of the device with a 3.3V supply and gain of 2.
If you need 5V you can use a DC-DC converter or charge pump to create a higher voltage rail and use a 2:1 op-amp amplifier to double the 2.5V out.
Generally only very special devices (such as those with internal charge pumps and external capacitors) can produce higher voltages than their supply rails, and this is not one of those.
The datasheet could perhaps be clearer on that; by my interpretation, it says that you can get 3.0V out with a 3.3V supply when sourcing up to 10mA.
Typically 3.2V (but you can't depend on that). With lighter loading it should be able to get closer to the positive rail. |
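A sketch of the saturation point being made; the transfer function here is the generic Vout = gain × Vref × code / 2^N of an N-bit DAC (an assumption; check it against your part's datasheet), clamped at the supply rail:

```python
def dac_vout(code, n_bits=12, vref=2.5, gain=2, vdd=3.3):
    """Ideal DAC output, saturated at the supply rail (no headroom modeled)."""
    ideal = gain * vref * code / (1 << n_bits)
    return min(ideal, vdd)

print(dac_vout(4095))          # ideal ~5 V, but clamps at the 3.3 V rail
print(dac_vout(2048))          # 2.5 V -- within the rail, so unaffected
```

So codes asking for more than the supply all collapse onto the rail: the gain-of-2 setting only buys you range up to Vdd, not beyond it.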
H: I'm desiging a class AB amplifier with class A preamp, what are the problems with my design?
I'm attempting to design a class AB guitar amplifier with a class A preamp. I'm a novice and this is the first actual project I'm attempting.
What are the possible caveats, and is my choice of MOSFET appropriate? Should I include a feedback loop in my preamp to reduce crossover distortion? I'm going to drive an 8 ohm speaker.
Should I change anything in this circuit if I were to power it off a 24v 2A supply?
Edit: I ended up redesigning it with a class B preamp for efficiency. I don't know if it's any better, but it seems to work
AI: If you choose FETs with Vgs(th)= 2 to 4V then a RED or Yellow 5mm LED will work even with an LM741 as simulated below. A CMOS Op Amp will give more power to the FETs. The FETs must be low RdsOn preferably 1% of speaker load, but will still need a large heatsink with insulators + thermal grease.
This was simulated with an LM741, but there are rail-to-rail CMOS types with more output range. The gain is 1M/10k = 100 which could also be 10M/100k or any ratio you need which affects BW = GBW/Av e.g. GBW=2MHz/ 100 = 20 kHz BW.
Using diodes or a 2V LED at low current to get 1.5V to match the Vt threshold of the FETs; more current from the 10k current limiter will raise Vgs and thus the quiescent drain current Id and the heat loss in the FETs. I used Vt=1.5, Beta=20.
The large triangle wave (100 mV in, 10 V out) resulted in 8.2 W average speaker power and 3.4 W average loss per FET, thus 8.2 W out / 15 W total for 55% efficiency, with about a 10kHz BW.
Bass-boost or mid-range cut can be added later unless specified. |
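The gain and bandwidth numbers quoted above can be checked with a couple of lines (the 2 MHz GBW figure is taken from the answer, not from any datasheet):

```python
# Closed-loop gain from the feedback network: Av = Rf / Rin
Rf, Rin = 1e6, 10e3
Av = Rf / Rin

# Closed-loop bandwidth from the op-amp's gain-bandwidth product: BW = GBW / Av
GBW = 2e6                     # assumed per the answer; an LM741 is typically ~1 MHz
BW = GBW / Av

print(Av, BW)                 # 100.0 20000.0
```

The same ratio with 10M/100k gives the same Av = 100, so the bandwidth trade-off is identical; only impedance levels and noise change.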
H: Controlling brightness on multi-digit 7-segment display
I'm designing a velocimeter and I'm having doubts about the display's brightness control.
I have an ATTiny 2313 microcontroller connected to a CD4511 that drives a common-cathode 3-digit 7-segment display. On the MCU, 4 GPIO pins set the digit pattern and 3 GPIO pins select the digit through the transistors.
I'd like to have a potentiometer to control the display brightness. My first thought was to connect it to the display cathode, but then different numbers shown on the same digit would have different brightness, since the current depends on how many segments are lit.
Second thought was to connect it to the CD4511 5V pin, but I don't know if it will mess up the internals (the minimum is 3V).
Does anyone know a better way?
AI: The answer to
how do I control the brightness of LEDs if I have a microcontroller
almost universally involves PWM.
In your case, turn the STROBE input of the CD4511 on and off quickly, typically using your ATTiny's PWM unit.
If you need a pot to control the brightness, use it to generate a DC voltage, and use the built-in PWM comparator of the ATtiny to convert that voltage into a PWM duty cycle.
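The pot-to-duty mapping can be sketched as follows (the 10-bit ADC and 8-bit PWM widths are assumptions; on an AVR the result would be written into the PWM compare register):

```python
def adc_to_duty(adc, adc_bits=10, pwm_bits=8):
    """Map an ADC reading of the brightness pot to a PWM compare value.

    Dropping the low bits rescales the 0..1023 reading to 0..255.
    """
    return adc >> (adc_bits - pwm_bits)

print(adc_to_duty(1023))   # 255 -> ~100% duty, full brightness
print(adc_to_duty(512))    # 128 -> ~50% duty
print(adc_to_duty(0))      # 0   -> display off
```

Because the PWM gates all digits identically, every digit dims by the same amount regardless of how many segments are lit.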
I might add that using a microcontroller with an ADC might be easier. Replacing the very weak and feature-poor, yet pretty expensive, ATtiny2313 with a microcontroller that has 10 free GPIOs (instead of the 7 you need) would immediately allow you to get rid of the CD4511. It really makes no sense to use a BCD-to-7-segment IC in 2022; your microcontroller is perfectly capable of converting numbers to segments to light up. This isn't the 1960s, and BCD is not a native format for any electronics these days.
H: What width should I use in the dimension layer in Cadsoft Eagle?
I've designed about 20 PCBs in Cadsoft Eagle, and for the board outline I've always used the "20 Dimension" layer, using "wires" of width 0 to enclose the PCB. I did this regardless of whether the PCB was rectangular (thus it could be separated from the panel by V-cuts), or non-rectangular (necessitating break routing).
As an example of non-rectangular PCB, see the part below. I've pressed "properties" on the long arc next to the dialog:
As you can see, the width is 0.
I've always assumed that
A) the dimension defines what is the "inside" of your PCB shape, e.g. if you define your Dimension layer as a polygon (0", 0") -> (1", 0") -> (1", 1") -> (0", 1"), you should receive a PCB that is exactly 1-inch-sided square, differing up to manufacturing tolerances (and, importantly, the same size should be produced by either scoring or break routing)
B) this definition is a bit abstract, and the PCB fab is expected to figure out the details. E.g., if they use 2mm break routing, they'd need to extend this shape by 1mm (essentially a Minkowski sum of my shape + 1mm circle) and that would give them the path for their router. They'd also need to figure out where to place the break tabs - it's not "my job" to select this.
If they used 2.4mm break routing, they'd need to extend by 1.2mm. In any case, it seems wrong to assume what exact steps would the fab do to prepare my board for their manufacturing process.
In a debate with a coworker, he said my use of 0-width dimension is wrong, because the gerbers it generates are also a bit wrong, as e.g. CAM350 would complain about the zero width. One easy fix is to increase the width to e.g. 0.25", but this is not a no-op, as it actually shrinks the PCB a little. So in essence, he said I should fix my design by doing the Minkowski sum myself, using some non-zero width, if I wanted to get my PCBs the way I initially envisioned them.
I'm not convinced by his arguments, because
the warning message could be just a peculiarity of CAM350
I've sent at least 10 such designs to batching fabs like OSH Park and Ragworm, and I've never received a single complaint that my PCBs are not well-specified (but maybe they are lenient to noobs)
Questions
Are my assumptions A & B correct?
Is dimension of width 0 allowed / normal / accepted in the industry?
AI: Talk with your PCB manufacturer.
Gerber files are for photoplots of copper layers, silk screen with part numbers and solder resist.
Excellon or Sieb und Meyer drill files are used to drill the board at the defined positions and with the drill diameter needed to get the hole size you want after copper plating.
For the board outline router files are needed. It is common practice to correct the router path by the router radius outwards. It is the job of the manufacturer to select a router with a proper diameter, for instance 2 mm. To produce the proper route file a line width of 0 is allowed, normal and accepted. If gerber files are submitted they have to be transformed into a router file format needed by the routing machine. Routing machine controllers are capable to correct the path with the router radius. There is a tool definition file containing the router diameter, the router rotation speed, the move speed in horizontal direction and the router feed speed in vertical direction. |
H: GND related - maybe off topic
Situation - listening to music via Bluetooth using an integrated amplifier.
I removed the GND wire from my pickup/turntable which was connected to the amplifier. While removing it from the amplifier, the end of the wire that had been disconnected from the turntable touched the body of the amplifier, and the sound started to take some breaks, like losing the Bluetooth signal, until I turned the amplifier off.
Question - Could that GND wire have anything to do with the way the music played?
Thank you!
AI: Absolutely! GND is the reference for your voltages; without GND you're not going to have a voltage reference, and therefore no valid voltage at that stage.
Don't mess with GND and/or Vcc in a circuit if you do not know what you're doing, especially not by touching everything with a loose cable while the unit is powered on.
Now you have to hope that you didn't fry your amplifier; reconnect everything properly and test it.
H: Is there a way to make a DC to low voltage AC circuit
All the circuits I see online are like 12v DC -> 220v AC or something along those lines but I'm only looking for something like 9v DC to 9v AC. Is this possible?
AI: There are many ways to generate AC from DC. It all depends on what you want or need.
Basically every audio amplifier generates AC from DC, from the tiny amplifier in your mobile device of choice to the stereo set in your living room. They even do it with variable amplitude and frequency.
Also, oscillators for generating frequencies (used as a clock for a CPU or anything else) generate AC (often with a DC offset) from DC.
BLDC motor drivers also generate AC from DC (mostly in the safety-low-voltage range). So DC-to-AC conversion is all around you all the time.
In the simplest case it's just switching on and off the DC at a given frequency. From there you can go fancier by making it bipolar (positive and negative voltages) and giving your AC waveform a shape other than rectangular (like for example a sine wave form).
The circuits to generate high-voltage AC or low-voltage AC are not that different. If you want low voltage, you basically just leave out the part that increases the voltage (and maybe some feedback components)...
simulate this circuit – Schematic created using CircuitLab
This is a very simple oscillator (the inverter should ideally be one with a Schmitt-trigger input). This gives you a rectangular AC between Vcc and GND with a frequency that depends on the values of the capacitor and the resistor (the chosen values would lead to a frequency of around 1kHz)
simulate this circuit
This circuit should generate a symmetrical triangular wave output.
And if you want to know how to make a sine wave from a triangle, I can only suggest reading Ken Shirriff's great blog post on reverse engineering an old function generator circuit. |
H: Is full wave rectified AC, without a smoothing capacitor, okay for switching a DC solenoid and a DC relay?
I have a circuit where the 24V AC secondary voltage of a transformer is full wave rectified to DC.
The only parts connected to this circuit are some 24VDC solenoids and a 24VDC relay. The solenoids and relay are controlled by digital pins of a microcontroller through the optocoupler circuit below, which is powered by another secondary of the transformer (9VAC) regulated at 5VDC.
If I put a smoothing capacitor after the diode bridge, the rectified 24VAC gets close to 35VDC across the capacitor. Then I'd have to use a voltage regulator that accepts more than 35VDC as input to get back to 24VDC.
Could I get rid of the capacitor and the voltage regulator, so the AC-DC conversion would get close to 21.5VDC, and not 35VDC (not smoothed) and then use this 21.5VDC to power this circuit above that then drives the relay and solenoids?
Could this circuit above work normally with this non-smoothed voltage at the "6-28VDC Power Supply"?
AI: Is full wave rectified AC, without a smoothing capacitor, okay for switching a DC solenoid and a DC relay?
Yes.
The average DC voltage would be only 90% of the RMS voltage (24 * 0.9 = 21.6 V in this case).
It would work because the 'pull-in' voltage of the relay would be at most 70% of its rated voltage and its 'drop-out' voltage at least 10%.
Likewise with the solenoid. |
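The 0.9 factor quoted above follows from averaging a full-wave rectified sine, Vavg = (2√2/π)·Vrms. A quick numeric cross-check:

```python
import math

vrms = 24.0
vpk = math.sqrt(2) * vrms

# Closed form: average of |Vpk*sin| over a half-cycle is 2*Vpk/pi = 0.9*Vrms
vavg_closed_form = 2 * vpk / math.pi          # about 21.6 V for a 24 V winding

# Numeric cross-check by averaging the rectified waveform over a half-cycle
n = 100_000
vavg_numeric = sum(vpk * math.sin(math.pi * k / n) for k in range(n)) / n
```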
H: Why do these resistors decouple the two databuses?
The ZX Spectrum is a computer with a Z80 accessing ROM and two separate areas of RAM, one of which is also accessed by a ULA which generates video. If the Z80 wants to access the video RAM, it may be slowed down to fit in with the timing imposed by the ULA.
This means
The Z80 has a bus connecting it to the three areas of memory.
The ULA has a bus connecting it to the video RAM.
These two buses are connected.
These two buses are independent.
According to this web-page there are resistors decoupling the buses. I need to understand why this is possible.
The ULA with the lower 16K of RAM, and the processor with the upper 32K RAM and 16K ROM are working independently of each other. The data and address buses of the Z80 and the ULA are connected by small resistors; normally, these do effectively decouple the buses. However, if the Z80 wants to read or write the lower 16K, the ULA halts the processor if it is busy reading, and after it's finished lets the processor access lower memory through the resistors. A very fast, cheap and neat design indeed!
By my understanding, these resistors somehow mean that the ULA can read a byte from the memory (the video RAMs are driving the bus) at the same time as the Z80 is reading or writing the bus (i.e., the Z80 or the DRAMs are driving the bus).
So let's take some examples:
The CPU wants to read from the memory. The ULA determines that this read is not going to interfere with the ULA (perhaps the read is for a different area, the upper 32K or the ROM, to which the ULA has no access) so the ULA does not assert /WAIT, and the read goes ahead.
The CPU wants to read from the memory. The ULA determines that this read will interfere with video generation, so it asserts /WAIT and the read is delayed.
In the first case, the CPU has put an address on the bus, and the ULA must see this to determine whether or not to assert /WAIT. Why doesn't this interfere with the fetching of the video data?
If the ULA did not assert /WAIT, it's either because the bitmap is not being fetched at this time (maybe horizontal retrace or something) so that case is not interesting, or it could be because the CPU wanted to access the ROM or another memory. In this latter case, both video memory and either the Z80 or the ROM or the DRAM are driving the databus. Why don't these subsystems interfere with each other then?
Maybe my whole question boils down to "how does decoupling work?" because when I've seen decoupling in the past, it's involved capacitors not resistors.
AI: First, note that the ZX Spectrum ULA doesn't drive the Z80A MPU's /WAIT pin. Instead, the ULA drives the Z80A CLK pin and holds that HIGH to stop the Z80A.
The series resistors weaken the drive current from the Z80A output pins to the RAM ICs. The ULA drives the RAM address pins directly, with a much stronger current capability than that from the Z80A series resistors.
The effect of these weakening resistors is:
When a Z80A resistor drives a pin HIGH, the ULA can also drive the pin HIGH or sink the resistor current to GND to drive the pin LOW. ULA wins.
When a Z80A resistor drives a pin LOW, the ULA can also drive the pin LOW or source the resistor current to VDD to drive the pin HIGH. ULA wins.
By using these two techniques together, effectively the Z80A has been disconnected from the bus briefly (ULA outputs overpower resistor-weakened Z80A outputs) and is sleeping (CLK stays HIGH).
And when the ULA is not using the bus, it takes its own pins to high-impedance to effectively disconnect itself from the bus.
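A rough divider calculation shows why the ULA "wins". The exact resistor value isn't given in the answer; a few hundred ohms for the series resistor and a strong ~30 ohm ULA output are assumed below purely for illustration:

```python
def node_voltage(v_weak, r_weak, v_strong, r_strong):
    # Two drivers fighting over one node, each modeled as an ideal source
    # plus its output resistance; simple two-resistor divider between them.
    return v_strong + (v_weak - v_strong) * r_strong / (r_weak + r_strong)

# Z80 drives HIGH (~4 V) through an assumed 470 ohm series resistor while
# the ULA pulls the shared node LOW with an assumed ~30 ohm output resistance
v_ula_wins_low = node_voltage(4.0, 470.0, 0.0, 30.0)    # ~0.24 V: solid LOW

# Z80 drives LOW through the resistor while the ULA drives the node HIGH
v_ula_wins_high = node_voltage(0.0, 470.0, 4.0, 30.0)   # ~3.76 V: solid HIGH
```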
Remember that the ULA needs CLK-immediate RAM access for display-synchronised memory reading with no read FIFO. So the Z80A /WAIT pin cannot be used. /WAIT doesn't just stop the CPU whatever it's doing. It only works during a data transfer i.e. an instruction read or a read/write to memory or I/O. Too much of the time isn't during a transfer.
Equally, the Z80A /BUSRQ and /BUSACK pins cannot be used. /BUSRQ requests that the Z80A tri-state and relinquish its bus then assert /BUSACK but this will happen at the end of the current instruction. So /BUSACK may go LOW over 20 CLKs after the /BUSRQ requested it. |
H: Why is my best placement of a directional antenna so weird?
I moved to an offgrid house with a 3G/4G repeater, with a short, enclosed directional antenna. It was wobbling and poorly mounted on an unstable wood plank at the base of a window, so I decided to fine-tune its placement.
I checked maps and azimuth for nearby public GSM antennas. The only realistic, nearby one (at about 2km) is indeed right in front of the window, however there is a small hill in between. Others are much farther and way under the horizon.
To my surprise the best result I got is when my antenna is the closest to the window glass, and more surprisingly, my SNR even improved when I placed it parallel to it, pointed towards the concrete side wall of the window, not towards the GSM pole which should be pretty much at right angle.
The concrete probably has rebar so it might create a loop, but I imagine it would certainly be grounded.
This really makes no sense to me and I am looking for an explanation (beyond black magic.)
My only guess is that the triple-glazing wooden framed window might be metal-coated, and it would "amplify" the signal, that the directional antenna catches laterally? Does this even make sense?
I know just enough about radio engineering not to trust myself too much and to keep a low profile, so I also cowardly checked every angle with the antenna mounted on a tripod on my terrace (same height as the window), in 5° increments.
No way, the best placement is the one shown on the picture.
AFAIK it is an LPDA (log-periodic) antenna, since it catches many wavelengths, not a Yagi.
The coax cable between the repeater and the antenna is too long. It was stacked vertically along the frame. Worse, it was originally coiled. I use folds instead to avoid loops, but I am not sure it has any impact on the direction (only noise/loss, which I admit was barely noticeable).
AI: My guess is that your thick concrete wall opening, backed by window glass, is acting as a cavity-backed aperture antenna. This type of antenna is more conventionally designed to be sharply resonant, with highly conductive metal sides and back. However, in your case the reinforced concrete and metallized glass are lossy materials, and the dimensions are several wavelengths long, so it's not so narrowband, but the large aperture helps to make up for the inefficiencies.
As others have pointed out, your antenna is a short log-periodic and has very modest directivity. It's really just functioning as a broadband dipole, so pointing it sideways rather than towards the GSM tower doesn't cause much loss. I think you stumbled on (or, by subconscious genius, selected) a good location for this to serve as an exciting element for the large cavity antenna.
If you're up for more experimentation, you could try moving the antenna up and down along the side of the window aperture. |
H: Is there a PID algorithm for "single point" targetting?
I've got a simple control problem that I've tried to solve with a simple algorithm so far, but am wondering if there's a more sophisticated algorithm.
I have a pump hooked up to an Arduino that is feeding water into a vessel on a scale. My goal is to dose a specific amount into the vessel based on the scale feedback. So far I've got a few tolerances defined and am slowing down the motor to specific speeds as I approach the target.
This algorithm works just fine for a few pumps (I've got 4 different pumps and I'm controlling each one after another). However, some pumps require a bit more power than others in order to pump liquid at a slow rate. Rather than trying to tune my simple algorithm for any new pump I'm using, I'm wondering if there's an algorithm that will naturally slowly ramp down the power (PWM) on each motor.
I know PID is generally meant for a continuously moving target, though this use-case seems similar enough. I'm basically wondering if there's a PID-type algorithm I can use that will have no overshoot and I can cut off as soon as I've met my target weight. Do I just use a standard PID algorithm and tune to minimize overshoot?
AI: Do I just use a standard PID algorithm and tune to minimize overshoot?
I think no. If you set PWM you set (control) flow rate, not level. So a PID is not the right algorithm.
To reach a position (liquid level), normally two cascaded PIDs are used: one controls the position (liquid level), the other controls the speed (flow rate). Two cascaded PIDs work well to keep the level constant: even if the level drops because someone consumes the liquid, they would work together to gently refill.
If I understand well, you reduce PWM down to a value PWMmin as the vessel gets full. Your problem is that you risk setting a PWM so low that, for some types of pump, it is not sufficient and the pump stops.
A PID algorithm would be correct to control the PWM given a flow rate. As you approach the target value (quantity of liquid) you reduce the flow rate, and the PID regulates the PWM consequently.
If you can read the flow rate, you have all you need. Perhaps you cannot... but you could estimate the flow rate by reading the liquid level at intervals (seconds? minutes?). Maybe your reading is fast and accurate enough, or maybe not; it all depends on timing, precision, and whether it is acceptable for a pump to stop for a while (because the PWM is too low to make it move) until the software notices that the liquid level has stopped rising and raises the PWM accordingly.
So, I think there are two possible strategies:
Decrease PWM as you are doing now but, if a stop of the pump is detected, raise it again
-- or --
When decreasing PWM, monitor the flow rate to work out the correct amount by which to decrease PWM in the next step. I suppose that, before reaching the critical PWMmin, you will notice the flow rate decreasing too much with respect to the decrease in PWM.
If neither of the two methods is viable, then there is no choice but to read the flow rate reliably, and in that case the PID should not be too difficult to set up.
When you can reliably (more or less) control the flow rate, you can concentrate on the first problem (reach a liquid level).
What you are doing now is actually a sort of PID - no, better than a PID, because what you are doing now is an algorithm tailored to the problem. PID is a good and general algorithm, but its biggest defect is that it doesn't know anything about the system it is driving. All you can do is set three coefficients, while in many cases more "intelligence" can do better. To stay on this case: a PID can try to generate a correct PWM for the pump, but it does not natively know that there is a minimum value to respect: that is something that must be managed outside the PID (or the PID must be modified). But then we are back to the first problem: if this minimum PWM is not a fixed value, the PID can well stop the pump for a while, until it recognizes that there is no flow, and then the PID will increase the PWM again.
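As an illustration of the first strategy (ramp the PWM down near the target, but bump it back up if the scale stops moving), here is a hedged sketch; all names, thresholds and units are invented for the example, not taken from the question:

```python
def dose_step(pwm, weight, last_weight, target, pwm_min=40, pwm_max=255,
              slow_band=5.0, stall_delta=0.01):
    """One control step of a hypothetical dosing loop (illustrative values).
    Ramp PWM down proportionally near the target; if the scale reading stops
    moving while we still expect flow, assume the pump stalled and raise PWM."""
    error = target - weight
    if error <= 0:
        return 0                                  # target reached: stop pump
    if error < slow_band:
        pwm = max(pwm_min, int(pwm_max * error / slow_band))  # ramp down
    else:
        pwm = pwm_max                             # far from target: full speed
    if weight - last_weight < stall_delta and pwm > 0:
        pwm = min(pwm_max, pwm + 10)              # stall detected: raise PWM
    return pwm
```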
H: What value is captured into the BYPASS register?
The IEEE 1149.1 standard describes many cases of capturing some value into internal registers in the CAPTURE_IR/CAPTURE_DR stages. For example, the instruction register shall be filled with a "...XXXX01" value (the exact value is defined by the INSTRUCTION_CAPTURE attribute of the BSDL file) in the CAPTURE_IR stage. The REGISTER_ACCESS attribute may also describe such a value (for a custom register) with a string:
"MY_REG[8](MY_INSTRUCTION CAPTURES 01010101)," &
But I can't find what value (0 or 1) is captured into the BYPASS register in the CAPTURE_DR stage.
AI: Bypass register capture value is defined as "0".
See IEEE-Std-1149.1-2001, chapter 10.1.1, rule b:
When the bypass register is selected for inclusion in the serial path between TDI and TDO by the current instruction, the shift-register stage shall be set to a logic zero on the rising edge of TCK after entry into the Capture-DR controller state. |
H: How to detect output change of hand-wave sensor in MCU (Nano clone)
I have a 12V powered hand-wave sensor (U2 on the image) and a Chinese "Nano" clone powered from the same source. On the Nano I want to use some pin (P1 on the image) to read the U2 output.
I do not know how to connect this, because the primary problem I see is that the output from U2 is 0V vs 12V and P1 is 5V.
The image is illustrative only; it is not a complete schematic at all. I need to solve that "?". As you can see, I have tried some NPN transistor as a switch (resistors are not shown). It worked when the base was connected to 12V from the source or to the ground.
BUT it does NOT work when using U2. I found out that the U2 output has a "common +" or whatever it is called. It means that when the output is 1, the voltage of "out-" goes to 0, and when the output is 0, the voltage of "out-" goes to 12V. The voltage of "out+" stays at 12V. I took these measurements with reference to GND.
This seems like a trivial thing and yet I am lost, but I hope somebody can help. I am a real amateur: I've made some "digital" things using 5V powered plain AVRs, but I am lost at these voltage conversions. I have some stuff like an LM139 or NE555 in the drawer if that can help somehow.
AI: Just some ideas:
Use a voltage divider at Out- to get 12V down to 5V.
Use the internal pull-up of the Nano and put an n-channel MOSFET (or the NPN transistor you currently have, but with a base resistor!) directly at Out-.
Use the voltage between Out+/Out- to drive an optocoupler to pull the GPIO low (internal pullup must be enabled).
In any case, the signal would have to be inverted in software. |
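For the first idea, a quick check of an assumed divider (values illustrative, not prescriptive): 15k over 10k brings the 12 V swing at Out- down to a 5 V-safe level:

```python
def divider_out(v_in, r_top, r_bottom):
    # Unloaded resistive divider; the GPIO input draws negligible current
    return v_in * r_bottom / (r_top + r_bottom)

v_high = divider_out(12.0, 15e3, 10e3)   # 12 V sensor level -> 4.8 V (safe HIGH)
v_low = divider_out(0.0, 15e3, 10e3)     # 0 V stays 0 V (LOW)
```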
H: What type of circuit/components are needed to make a phone charger that stops charging when the battery is full?
I'm building my own portable power station, using a 12V lead acid battery as a power source. Currently I've got one 12V socket and two USB ports on it, the idea being to use the 12V socket to power something like a coolbox and to allow phone charging from the USB sockets.
I'm using an LM2596 step-down converter to bring 12V down to 5V for the USB ports. My question is: how do I modify my circuit to ensure that the phones aren't overcharged if left plugged in?
I've tried looking it up on Google, but everything I've found so far only covers the issue theoretically, i.e. explaining the different stages a charger goes through, such as constant-current charging initially, then switching to constant-voltage charging when the battery hits a certain voltage, followed by either shutting off or trickle charging when full, rather than explaining how to do it using components.
I want to understand what it is that makes the charger stop charging when the battery's full and how I can build that into my circuit, either with individual components or a part like the LM2596.
Below is an article I found that almost answers the question but only goes as far as saying that they've used an LM338 to achieve it.
https://maker.pro/custom/tutorial/how-to-make-a-fast-charger-for-your-phone
I don't understand what's happening on/in the LM338 that's making it work.
AI: My question is how do I modify my circuit to ensure that the phones aren't overcharged if left plugged in.
You don't have to.
With modern phones the charger is in the phone. The USB cable simply provides a power input for the phone, and circuitry inside the phone handles the charging of the battery from this power source.
Some USB power banks turn off if the load drops down below a set level, but this is to preserve the batteries in the powerbank from discharging through the voltage converter. |
H: Transistor for Signal Attenuation
I've been considering a circuit that could be used as an equalizer for an audio signal. Specifically, I'm planning on using a set of filters (a cross-over) to split the input into respective channels, and then a set of transistors operating below saturation to act as variable resistors for each channel. Right now this is just a rudimentary design I'm playing with. Specifically: I know it's not the best design... it's just for play right now.
My question is this:
Is it reasonable to use transistors to attenuate a signal like this? It'll be low power, but it seems like this will just be changing the impedance of each channel, which may have effects that I (being inexperienced in this area of electrical design) am not accounting for.
Any (constructive) feedback is welcome.
AI: It can be done, but transistors (FETs) are not particularly linear when used individually. This will give detectable distortion in your audio signal (unless you keep the amplitude very low (mV), at which point noise will dominate).
There are techniques using 2 FETs in parallel, and with a different VGS offset to make a variable resistor that is more linear over a non-zero range, but they do depend on FETs following the square-law characteristic which is not perfect. Looking at https://en.wikipedia.org/wiki/Voltage-controlled_resistor will give you some starting points. |
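The nonlinearity can be seen directly in the square-law triode-region model mentioned above: the incremental drain-source resistance depends on Vds itself, which is what distorts the signal. The device parameters below are invented for illustration:

```python
def rds(vgs, vds, k=2e-3, vt=-2.0):
    # Square-law triode region: Id = k*(2*(Vgs-Vt)*Vds - Vds**2), so the
    # incremental conductance gds = dId/dVds = k*(2*(Vgs-Vt) - 2*Vds)
    # changes with Vds -- the "resistor" value moves with the signal.
    gds = k * (2 * (vgs - vt) - 2 * vds)
    return 1.0 / gds

r_at_zero = rds(vgs=-1.0, vds=0.0)    # 250 ohms at a vanishing signal swing
r_at_0v3 = rds(vgs=-1.0, vds=0.3)     # noticeably higher 0.3 V into the swing
```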
H: How can I generate this waveform?
I am trying to generate this waveform:
basically a ramped square wave with a maximum frequency of 1 kHz.
My first thought was to use the DAC pin of a microcontroller, but are there any ICs that can do this?
AI: Here is a simple one-chip, one-display function generator circuit that could be slightly modified (in software) to produce a signal nearly identical to what you need. In fact, there is an example waveform that the author calls a "chainsaw" that is already very close to what you want. http://www.technoblogy.com/show?20W6
Later update w/PCB:
http://www.technoblogy.com/show?2FCL
If you require a bipolar waveform you could reference your output ground to half the supply voltage, for example +2.5V in this case. So with the generator circuit running from a +5V supply you could easily create a bipolar output waveform of 2.5Vpp.
If you don't think you would ever need to adjust the signal, then you might even leave out the display and adjustment components to save costs.
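If you go the microcontroller/DAC route instead, generating the sample table is straightforward. Since the exact shape is only shown in the question's image, the sketch below is one plausible interpretation of a "ramped square" (ramp up, flat high, ramp down, flat low), with an assumed ramp fraction:

```python
def ramped_square(n_samples, rise_frac=0.1):
    """One period of a ramped-square wave, normalized to [-1, 1].
    rise_frac is the fraction of a half-period spent ramping (assumed)."""
    half = n_samples // 2
    ramp = max(1, int(half * rise_frac))
    out = []
    for i in range(half):                      # high half: ramp up, then flat
        out.append(min(1.0, i / ramp))
    for i in range(n_samples - half):          # low half: ramp down, then flat
        out.append(max(-1.0, -i / ramp))
    return out

# 1000 samples per period: played at 1 Msps this gives a 1 kHz output
wave = ramped_square(1000)
```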
H: How can an AC circuit have positive current and negative voltage?
While trying to fully understand capacitors, I have run into a concept I can't understand.
I understand that the voltage lags behind the current by 90 degrees in an AC circuit, and I understand why it happens, but how can you have positive current and negative voltage during those 90 degrees? How can voltage flow one way and current flow the other? Maybe I'm thinking about it too hard.
AI: How can an AC Circuit Have Positive Current and Negative Voltage?
Current and voltage in a circuit are completely independent, and only depend on what sources/loads are connected together.
You don't need AC for it. You don't need capacitors. It works at DC. And since you can control it independently at DC, you can also do so at AC. You can have the voltage be a sine wave, and the current be a triangular wave. I'm not kidding. Independent means independent.
Below, on the left, is a circuit that does just that at DC: a voltage source connected to a current source. Both can be configured for whatever voltage and current value you want. You can have non-zero current with zero voltage, non-zero voltage with zero current, positive current with negative voltage, and vice versa.
simulate this circuit – Schematic created using CircuitLab
Voltmeter VM1 is optional; you can remove it. It's just a single circuit with two active components: a voltage source and a current source. A single loop. We connect a current meter in series with it, and a voltmeter in parallel with the voltage source.
On the right above you can see what happens when you replace the current source with a 1 Ohm resistor. The current then flows the "normal" way. Or, rather, the way you consider "normal".
This may seem counterintuitive because it's easy to only think of resistors connected to ground as valid loads. Resistors are useful, but they are "just" voltage-controlled current sources, whose control equation is Ohm's law: $$I(t)={1 \over R}\cdot V(t),$$ where \$t\$ is time.
But there's a wide variety of control equations you can use for a current source, including one where the current is independent of voltage but changing, or even just constant. A capacitor also acts as a voltage-controlled current source, it just uses a different control equation - one that's a differential equation, specific to ideal capacitors: $$I(t)=C \cdot {{\rm d}V(t) \over {\rm d}t}.$$
Now you may say: hey, no fair, that's just some ideal elements in a simulator. No way you can build one, right? Ahem. You can. Not only that, you can make an OK one from about 10 bipolar or mosfet transistors, or two op-amps. And no, the op-amps don't need to be unobtainium or ideal. Just a basic LM358 or two LM741s will do.
simulate this circuit
But wait! There's more. You don't need anything as "weird" as "current sources" or "fancy" as op-amps. Two batteries and a resistor are all you need. And yes, with a resistor you can still get negative current: you just need to connect the other end to something other than 0V:
simulate this circuit
It gets even fancier: you can build virtual capacitors without using any capacitors at all, just op-amps and inductors. Such circuits are called gyrators. They invert impedances, so a complex impedance of an inductor is made to "appear" to the connected circuit as a complex impedance of opposite sign: a capacitor.
And it gets fancier than that still: you can implement any V-I relationship you want in code, using an ADC, a DAC, a voltage-controlled current source circuit, and a CPU. At low frequencies - say <100Hz, this works using some super-rudimentary devices like an Arduino Uno, its internal ADC, a PWM digital output pin to do the job of a DAC. For small currents (single mA) and voltages between 1-2V, you only need a capacitor to convert the PWM digital output into a variable bidirectional current, controlled by some equation implemented in code. Within those basic limits, you can make the thing act like a capacitor, an inductor, a tank circuit, a filter, and so on. |
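The capacitor's control equation above can also be demonstrated numerically: differentiate a sinusoidal voltage and the resulting current peaks a quarter-cycle earlier, i.e. it leads the voltage by 90 degrees:

```python
import math

# For an ideal capacitor, I(t) = C * dV/dt. With a sinusoidal voltage the
# current is a cosine: it peaks a quarter-cycle *before* the voltage does.
C, f = 1e-6, 50.0
dt = 1e-6
t = [i * dt for i in range(int(1 / f / dt))]           # one full period
v = [math.sin(2 * math.pi * f * ti) for ti in t]
i_c = [C * (v[k + 1] - v[k]) / dt for k in range(len(v) - 1)]

k_vpeak = max(range(len(v)), key=lambda k: v[k])
k_ipeak = max(range(len(i_c)), key=lambda k: i_c[k])
lead = (t[k_vpeak] - t[k_ipeak]) * f * 360             # ~90 degrees
```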
H: Why can a capacitor connected to a battery only charge up until the voltage is equal to the battery?
I've tried searching for an explanation that lays out nicely why the charging of a capacitor stops when its voltage is equal to the supply voltage, but I cannot find much. I've only seen one question, from Quora, which phrases it in a way similar to what I am asking.
https://www.quora.com/Why-the-charging-of-a-capacitor-stops-when-the-voltage-across-the-plates-of-a-capacitor-is-equal-to-the-voltage-across-the-terminals-of-the-battery
But I am not really familiar with the analogy used which is a water pump and diaphragm. But the kind of answer I am looking for is why the moving of electrons stops based on the voltage of the battery. I mean I could settle with answers like "they are connected in parallel so their voltages are equal" or "the capacitor only charges based on what the voltage the battery can give" but I would appreciate a more nicely laid out explanation.
AI: Not sure why that example uses a water pump and diaphragm, seems overly complex to me. A simpler model would be having the battery as a big water tank and the capacitor as a smaller tank. Both infinitely high. They are connected at the bottom with a small tube that represents the wiring. The water level in each is equivalent to the voltage. If they are not equal, water will flow through the tube until they are. As the tube is small, this takes some time. That is resistance.
This goes both ways, because other than size, the tanks are equivalent. It's precisely the absence of something pump-like that makes it so that water cannot go from a lower level to a higher one. |
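In circuit terms, the tank analogy corresponds to the usual RC charging equation: the charging current is (Vs - Vc)/R, so it falls to zero exactly as the capacitor voltage approaches the supply voltage. Component values below are illustrative:

```python
import math

# Charging a capacitor through a resistor from a fixed source:
# Vc(t) = Vs * (1 - exp(-t / (R*C)))
Vs, R, C = 9.0, 10e3, 100e-6        # illustrative values
tau = R * C                          # time constant: 1 second here

def vc(t):
    return Vs * (1 - math.exp(-t / tau))

def i_charge(t):
    # Current through R is driven by the remaining voltage difference
    return (Vs - vc(t)) / R
```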
H: Level shifting of a 3 state pin
I want to design a simple battery charger using the MCP73831/2. This chip has a 3-state STAT pin, according to the table below:
I have a 3.3V MCU and I want to connect this pin to a GPIO. The MCU is not 5V tolerant and the maximum voltage on any pin is given as 3.6V per the datasheet (ESP32).
My question is, how can I safely connect this STAT pin to my MCU? How can I translate its HIGH state (which is, I guess, 5V) to 3.3V and its LOW state to 0V (logic low)? Is a resistor divider enough for this, or will I need to use MOSFETs or transistors? Here is my circuit:
AI: Using a potential divider is fine.
Use a 10K upper resistor and 20K lower resistor, or higher values, rather than 1K and 2K. That avoids drawing current needlessly.
You can't detect the high-Z output this way, though, but your question implies that you don't want to. |
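Plugging the suggested 10K/20K values into the divider equation confirms both the level and the standing current (assuming the STAT high level is about 5 V):

```python
def divider_out(v_in, r_top, r_bottom):
    # Unloaded resistive divider feeding a high-impedance GPIO input
    return v_in * r_bottom / (r_top + r_bottom)

v_gpio = divider_out(5.0, 10e3, 20e3)   # ~3.33 V, below the 3.6 V abs max
i_waste = 5.0 / (10e3 + 20e3)           # ~0.17 mA standing current when HIGH
```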
H: Will a 4.5V - 0 - 4.5V transformer work with a 9V battery?
I am concerned as to whether my circuit will work or not before I build it (I am concerned about the battery voltage.)
This is what I have but my transformer is 4.5V - 0 - 4.5V. Is there any way I can change this to work with a 4.5V - 0 - 4.5V transformer?
Also it looks wrong to have the positive side of the battery go into the 0v part of the transformer but it is probably correct.
AI: You would need the circuit of an astable multivibrator. There are two RC combinations determining high and low period time.
t1 = R1 * C1 * ln(2) ; t2 = R2 * C2 * ln(2) ; t1 = t2 = 10 ms for 50 Hz oscillation. The calculation of C1 and C2 is left as an exercise.
For a 9 V battery, a transformer with a split secondary winding with 18 V should be used.
simulate this circuit – Schematic created using CircuitLab |
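The "exercise" can be sketched as follows, using the t = R·C·ln(2) half-period formula from the answer with an assumed base resistor value:

```python
import math

t_half = 10e-3                       # 10 ms per half-period -> 50 Hz total
R = 47e3                             # assumed base resistor value
C = t_half / (R * math.log(2))       # ~307 nF; round to a standard E-series value
f = 1.0 / (2 * R * C * math.log(2))  # back-check the oscillation frequency
```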
H: Is a change-over-switch with a VFD dangerous?
We are using a variable frequency drive (VFD) with a compressor. We added a rotary cam changeover switch (600 V/50 Hz, picture is attached) in between the compressor and the motor. The purpose of adding this switch is that if the VFD is not working, we can switch to direct grid mode.
The number 1 is for compressor running on VFD and number 2 is for compressor running directly on grid.
When we run the compressor through this switch on the VFD, the VFD gets damaged after some time (maybe 10 days).
What could damage the VFD with this setup?
AI: There are a number of possible situations:
Switching connecting a motor to a VFD while the VFD is operating could damage the VFD.
Switching a motor from a VFD to the grid while the motor is running could damage the motor.
Switching a motor from the grid to the VFD while the motor is running could damage the VFD.
Switching a motor to the VFD while the motor is coasting could damage the VFD.
There may be a few VFD models that could be damaged by disconnecting the motor while it is running.
Most VFDs have protections built in that will protect the VFD in most situations. Even with extensive protection built in, VFD manufacturers often recommend against 1 and 3.
H: How to calculate the output resistance of the CE amplifier, which has only a coil in its collector?
In order to be able to match the impedance, it is necessary to know the output resistance of the amplifier.
In radio frequency amplifier circuits, there is usually a coil in the collector instead of a resistor.
Actually, I don't understand why they only use coils. However, if a resistor is used, the voltage gain can be increased, because Av = -gm·(Rc//RL).
That's not the main problem for now. I don't know how to calculate the output resistance if only a coil is connected to the collector.
Let the amplifier circuit be as in the picture. It is necessary to calculate the output resistance to match the impedance. If there was a resistor in the collector, it would be easy to calculate the output resistance. But I don't know what to do when there is a coil.
I couldn't find any information about it on the internet. It just said somewhere that the output resistance can be found with Vce/Ic; I'm not sure if this is true. In the circuit, Vce = 6V and Ic = 13.4 mA, so Vce/Ic = 6/0.0134 = 461 ohms. According to this calculation, the output resistance is 461 ohms. But when I connect a test source to the output of the amplifier and test it in LTspice, the output resistance is 65 ohms at a frequency of 10 MHz. Which of these calculations is correct? Maybe both are wrong. Help me. How do I calculate the output resistance?
AI: In order to be able to match the impedance ...
As you have all the data about the transistor used, why don't you "measure" it in the simulation to check?
Just apply the definition Zout = v(Vo1) (load = open circuit) / I(R8) (load = short circuit).
NB: note that the theoretical output impedance is not Vce/Ic, but ΔVce/ΔIc, which can be a "little" different.
Ok. You have it well done ... Just correct it ... it is not "65 Ohm", but "65*j Ohm" (reactance).
I found j*64 Ohm, very near the impedance of the 1 uH inductor.
For the DC output impedance, I found 349 Ohm.
Now we know "output impedance", we can use this, or this, the most interesting. |
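The measured value is simply the reactance of the collector inductor. Assuming the coil in the schematic is 1 µH (as the answer suggests), at 10 MHz:

```python
import math

# Reactance of a 1 uH collector inductor at the 10 MHz test frequency
X_L = 2 * math.pi * 10e6 * 1e-6     # = 2*pi*f*L, in ohms
# ~62.8 ohms -- close to the j*64..65 ohm value measured in LTspice, so at
# this frequency the output impedance is essentially the coil's j*w*L
```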
H: Simple design for bidirectional current thought a coil
I have a coil made of a spiral PCB trace, with a measured inductance of about 20 µH and a resistance of about 10 ohms.
I want to be able to run, say, 200-300 mA through the coil in either direction (the trace is thick enough to handle the current). Switching between directions is slow, say 10-20 Hz.
I have an array of such coils (probably will use a shift register to reduce GPIO count).
I would prefer to have a simple design, with a single 5v or 12v supply.
What would be the best way to implement this? Specifically how can I easily and cheaply control the direction of current flow? MOSFETs? Buffers? In what configuration?
AI: Limited to a single supply voltage, the classic approach is called an H-bridge. Four saturating switch devices at the four ends of a capital-H, two pulling up and two pulling down, with the load in the cross-bar position. Of course, there are options.
If you can stand the driving voltage loss, two power opamps can do this, a saturating variation of an audio power amplifier topology called BTL - Bridge-Tied Load.
If the current were lower, two 555's could act as half-bridges. Intersil has a long line of bridge driver chips for things like motors and switching power supplies. Some companies have half-bridge and full-bridge drivers with big-enough MOSFETs built in.
https://images.search.yahoo.com/search/images;_ylt=AwrE19fdrVFiZxUAEEZXNyoA;_ylu=Y29sbwNiZjEEcG9zAzEEdnRpZAMEc2VjA3BpdnM-?p=H-bridge+driver+circuit&fr2=piv-web |
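The H-bridge states can be summarized as a small truth table. The switch names are illustrative; the key constraint is never closing both switches of one leg at once (shoot-through):

```python
# Leg A: a_high connects the A side of the load to V+, a_low to GND.
# Leg B likewise; the load sits between the two legs.
STATES = {
    "forward": dict(a_high=True, a_low=False, b_high=False, b_low=True),
    "reverse": dict(a_high=False, a_low=True, b_high=True, b_low=False),
    "coast": dict(a_high=False, a_low=False, b_high=False, b_low=False),
}

def shoot_through(s):
    # True if either leg shorts the supply by closing both of its switches
    return (s["a_high"] and s["a_low"]) or (s["b_high"] and s["b_low"])
```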
H: LM10 schematic transistor symbols
I'm trying to figure out what certain elements of the transistors from the schematic in the LM10 datasheet represent and I hope someone can help. In the attached image I have several questions:
In Q1, what is the cross bit that looks like a secondary collector?
In Q24 and Q50, what do the numbers represent on the collectors (0.1, 0.2, 1)?
In Q50 there seems to be 3 collector lines, is that basically the same as the two in Q24 but with an additional collector?
Is R49 just a trimmed resistor representing a more exact tolerance?
What is represented in the difference of the base of Q73 as opposed to Q74?
I've only seen these symbols on datasheet schematics so I apologize if these questions seem newbie-ish.
AI: These are various forms of 'split collectors' on PNPs. In a (very) old technology, as was used for the LM10, PNP transistors are lateral devices: collector current flows laterally (actually radially) from the emitter.
Thus the PNP is constructed with the emitter as a p-type disk in the center; the collector is a p-type ring surrounding it, and both of these regions are in the n-type base material. The width of the base (which determines the performance of the device) is defined by the gap between the emitter disk and the collector ring.
Now, the collector ring does not have to be a complete ring -- it can be broken into a number of sections -- all of different fractions of the total ring. https://picture.iczhiku.com/resource/eetop/wyKRPkizzhzzEBmX.pdf shows some pictures of this.
These collectors act as PNPs in parallel -- they share a base and emitter, and their effective (electrical) size is proportional to the fraction of the ring each collector occupies. Thus in the 0.8/0.2 Q24 above, 80 % of the emitter current will flow in the larger collector and 20 % in the other. The advantage of this is that the overall structure is smaller than 2 separate PNPs.
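To make the split concrete with an assumed operating point: if the 0.8/0.2 device above carries a total collector current of 100 µA, the larger segment collects
$$ 0.8 \times 100{\rm\,\mu A} = 80{\rm\,\mu A} $$
and the smaller one the remaining 20 µA, both segments sharing the same base-emitter junction.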
Q1 is a related structure called a 'ringed PNP'. Basically there is another collector ring outside the first one. When normally biased, the inner collector will collect all emitter current, and so the outer collector will have no current. However, when the inner collector saturates (e.g. VCE becomes very small), it will re-inject carriers (i.e. act as an emitter), and the outer collector will conduct. Thus it detects if the 'main' PNP is saturated and provides a current if it is. You can think of this as two PNPs in series (emitter of the 2nd one == collector of the inner one).
Usually the documented collector sizes sum to 1 for a minimum-size PNP; Q50 above is probably slightly larger than minimum, which is why they sum to slightly more than 1.
Q73 probably represents an NPN with a thicker or different base diffusion. While this may have lower beta, it probably also has much higher emitter-base breakdown V which might have been useful in the circuit. |
H: How to set idle PWM output as LOW in timer settings in STM32 board?
I start a PWM output(using Timer1) in a STM32 Nucleo board using HAL as follows:
HAL_TIM_PWM_Start_IT(&htim1, TIM_CHANNEL_2);
And I stop the PWM at an interrupt using HAL as below:
HAL_TIM_PWM_Stop_IT(&htim1, TIM_CHANNEL_2);
But when the PWM is not pulsating, the pin outputs 1.8 V, as indicated in red in the above snapshot. What parameter would set it such that the pin outputs zero instead of 1.8 V?
Here is Timer 1 settings:
static void MX_TIM1_Init(void)
{
/* USER CODE BEGIN TIM1_Init 0 */
/* USER CODE END TIM1_Init 0 */
TIM_MasterConfigTypeDef sMasterConfig = {0};
TIM_OC_InitTypeDef sConfigOC = {0};
TIM_BreakDeadTimeConfigTypeDef sBreakDeadTimeConfig = {0};
/* USER CODE BEGIN TIM1_Init 1 */
/* USER CODE END TIM1_Init 1 */
htim1.Instance = TIM1;
htim1.Init.Prescaler = 40-1;
htim1.Init.CounterMode = TIM_COUNTERMODE_UP;
htim1.Init.Period = PERIOD;
htim1.Init.ClockDivision = TIM_CLOCKDIVISION_DIV1;
htim1.Init.RepetitionCounter = 0;
htim1.Init.AutoReloadPreload = TIM_AUTORELOAD_PRELOAD_DISABLE;
if (HAL_TIM_PWM_Init(&htim1) != HAL_OK)
{
Error_Handler();
}
sMasterConfig.MasterOutputTrigger = TIM_TRGO_RESET;
sMasterConfig.MasterOutputTrigger2 = TIM_TRGO2_RESET;
sMasterConfig.MasterSlaveMode = TIM_MASTERSLAVEMODE_DISABLE;
if (HAL_TIMEx_MasterConfigSynchronization(&htim1, &sMasterConfig) != HAL_OK)
{
Error_Handler();
}
sConfigOC.OCMode = TIM_OCMODE_PWM1;
sConfigOC.Pulse = PERIOD/2;
sConfigOC.OCPolarity = TIM_OCPOLARITY_HIGH;
sConfigOC.OCNPolarity = TIM_OCNPOLARITY_HIGH;
sConfigOC.OCFastMode = TIM_OCFAST_DISABLE;
sConfigOC.OCIdleState = TIM_OCIDLESTATE_RESET;
sConfigOC.OCNIdleState = TIM_OCNIDLESTATE_RESET;
if (HAL_TIM_PWM_ConfigChannel(&htim1, &sConfigOC, TIM_CHANNEL_2) != HAL_OK)
{
Error_Handler();
}
sBreakDeadTimeConfig.OffStateRunMode = TIM_OSSR_DISABLE;
sBreakDeadTimeConfig.OffStateIDLEMode = TIM_OSSI_DISABLE;
sBreakDeadTimeConfig.LockLevel = TIM_LOCKLEVEL_OFF;
sBreakDeadTimeConfig.DeadTime = 0;
sBreakDeadTimeConfig.BreakState = TIM_BREAK_DISABLE;
sBreakDeadTimeConfig.BreakPolarity = TIM_BREAKPOLARITY_HIGH;
sBreakDeadTimeConfig.BreakFilter = 0;
sBreakDeadTimeConfig.Break2State = TIM_BREAK2_DISABLE;
sBreakDeadTimeConfig.Break2Polarity = TIM_BREAK2POLARITY_HIGH;
sBreakDeadTimeConfig.Break2Filter = 0;
sBreakDeadTimeConfig.AutomaticOutput = TIM_AUTOMATICOUTPUT_DISABLE;
if (HAL_TIMEx_ConfigBreakDeadTime(&htim1, &sBreakDeadTimeConfig) != HAL_OK)
{
Error_Handler();
}
/* USER CODE BEGIN TIM1_Init 2 */
/* USER CODE END TIM1_Init 2 */
HAL_TIM_MspPostInit(&htim1);
}
AI: When the GPIO is configured but the timer is not running yet, the pin may be in an arbitrary state. For example, if you look at my PWM2 implementation (nothing fancy, just a regular PWM2 with custom parameters), you can see that before I actually activate the timers (immediately after reset, while I was holding reset), the pins are in different states. There can be some undefined state in your case as well (although a steady 1.8 V is a little odd; I'm not sure about that):
Picture from my Timer demo sketch on GitHub
Since you want an explicit pin state at all times, you should activate the GPIO pull-down on the pin in the GPIO configuration; it will fix the timer pin state to low whenever the timer is not actively driving it.
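As a sketch of where that lives in HAL code (the pin, port, and alternate-function names below are assumptions for a typical STM32F4 part where TIM1_CH2 maps to PA9 -- check your own board), the only change from the CubeMX-generated GPIO setup is the Pull field:

```c
/* Sketch only: PA9/AF1 assumed for TIM1_CH2; adapt to your board. */
GPIO_InitTypeDef GPIO_InitStruct = {0};

GPIO_InitStruct.Pin       = GPIO_PIN_9;          /* TIM1_CH2 pin (assumed PA9)         */
GPIO_InitStruct.Mode      = GPIO_MODE_AF_PP;     /* alternate function, push-pull      */
GPIO_InitStruct.Pull      = GPIO_PULLDOWN;       /* holds the pin low while not driven */
GPIO_InitStruct.Speed     = GPIO_SPEED_FREQ_LOW;
GPIO_InitStruct.Alternate = GPIO_AF1_TIM1;       /* TIM1 alternate function            */
HAL_GPIO_Init(GPIOA, &GPIO_InitStruct);
```

In CubeMX-generated projects this pin setup normally sits in HAL_TIM_MspPostInit(), so the Pull member can simply be edited there.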
H: Using H-bridge gate drivers to drive MOSFETs in non PWM applications
I am using IR2110 to drive a highside MOSFET which I intend to use to control the charging of a lithium ion battery (disconnecting it from the charger once full). I am not using any PWM control because the battery will be constantly charging. Just making the HIN pin of the IC high when I want to charge the battery and low when I want to disconnect the charger. Can H-bridge ICs work in this type of application where pwm is not involved or should I consider an alternative bootstrapping circuit? The diagram below shows my circuit. In my simulation, though the HIN voltage is constant, the output is oscillating. Please help me fix this.
AI: This won't work in a DC application like yours because the bootstrap capacitor will discharge over time (within milliseconds), so the high-side FETs will not remain on.
You could build a charge pump to maintain the bootstrap voltage, but it would be just as easy to use that charge pump to drive the FETs directly, since you probably don't need the sub-microsecond speed the PWM driver can operate at.
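A quick estimate (component values assumed purely for illustration) shows the timescale involved: with a 100 nF bootstrap capacitor being drained by roughly 100 µA of driver quiescent current and leakage, the bootstrap rail droops at about
$$ \frac{dV}{dt} = \frac{I}{C} = \frac{100{\rm\,\mu A}}{100{\rm\,nF}} = 1{\rm\,V/ms}, $$
so the high-side gate drive collapses within a few milliseconds of the last switching edge unless something tops the capacitor up.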
H: Why is a PCI card treated as two loads on the PCI bus?
In PCI bus introduction materials, especially when talking about the load capacity of the PCI bus, it's often stated that a PCI card inserted into a PCI slot actually acts as two loads on the bus: one is the card itself, the other is the slot in which the card is plugged. Why is that?
For example, on page 16 of the book:
AI: The parasitic capacitance of the PCI standard edge connector system is similar to the input capacitance of the ASIC/FPGA that implements the PCI function, and its associated traces.
The PCI slot connector system trades off some performance for low cost and acceptable price-performance ratio.
If you wanted to use non-standard, higher-performance connector systems, you could probably reduce the slot parasitics to 0.3-0.5 of a PCI load, perhaps better if a redriver was used at each slot.
On a parallel PCI bus, you can connect about 10 ASICs directly on the motherboard, with good wiggle room left, whereas 5-6 devices are a maximum if they are on plug-in cards. This informs system design: if you need more than 5-6 devices on the bus, you need to move some of them from the slots to the motherboard, or you need to use more expensive connector systems, or you need to add another PCI bridge. |
H: ISE Design Suite simulation problem
I am new to the Verilog language and am trying to understand the basics. There is this question where the input is a 6-bit number named IN and the output is a 1-bit number named OUT. When IN < 29, OUT is one; otherwise OUT is zero. I have already written the code, and I am sure it is correct; however, the simulation does not show anything. I have been trying to figure out what's wrong for a while now, and I would appreciate any help I can get with this. My testbench goes until IN = 6'd29;.
`timescale 1ns / 1ps
module lab1(input [5:0]IN, output reg OUT);
always @ (IN)
begin
if (IN < 6'd29)
OUT = 1'b1;
else
OUT = 1'b0;
end
//assign OUT = ~IN[5] & ~IN[4] | ~IN[5] & ~IN[3] | ~IN[5] & ~IN[2] | ~IN[5] & ~IN[1];
//assign OUT = (IN < 6'd25) ? 1'b1 : 1'b0;
endmodule
`timescale 1ns / 1ps
module lab1_tb;
// Inputs
reg IN;
// Outputs
wire OUT;
// Instantiate the Unit Under Test (UUT)
lab1 uut (
.IN(IN),
.OUT(OUT)
);
initial begin
// Initialize Inputs
IN = 6'd0;
#10;
IN = 6'd1;
#10;
IN = 6'd2;
#10;
IN = 6'd3;
#10;
IN = 6'd4;
#10;
IN = 6'd5;
#10;
IN = 6'd6;
#10;
IN = 6'd7;
#10;
IN = 6'd8;
#10;
IN = 6'd9;
#10;
IN = 6'd10;
#10;
IN = 6'd11;
#10;
IN = 6'd12;
#10;
IN = 6'd13;
#10;
IN = 6'd14;
#10;
IN = 6'd15;
#10;
IN = 6'd16;
#10;
IN = 6'd17;
#10;
IN = 6'd18;
#10;
IN = 6'd19;
#10;
IN = 6'd20;
#10;
IN = 6'd21;
#10;
IN = 6'd22;
#10;
IN = 6'd23;
#10;
IN = 6'd24;
#10;
IN = 6'd25;
#10;
IN = 6'd26;
#10;
IN = 6'd27;
#10;
IN = 6'd28;
#10;
IN = 6'd29;
#10;
end
endmodule
AI: Many simulators generate a warning message for your code. For example, on EDA playground, I see:
.IN(IN),
|
xmelab: *W,CUVMPW (./testbench.sv,14|13): port sizes differ in port connection(1/6) for the instance(lab1_tb) .
The solution is to change:
reg IN;
to:
reg [5:0] IN;
Perhaps your simulator does generate a warning, but it wasn't obvious to you where you should look for it. Note that your code is legal Verilog syntax, which is why you do not get an error.
Here is a more compact way of writing your code, using a for loop for the input:
`timescale 1ns / 1ps
module lab1 (input [5:0] IN, output OUT);
assign OUT = (IN < 6'd29);
endmodule
module lab1_tb;
// Inputs
reg [5:0] IN;
// Outputs
wire OUT;
// Instantiate the Unit Under Test (UUT)
lab1 uut (
.IN(IN),
.OUT(OUT)
);
integer i;
initial begin
// Initialize Inputs
for (i=0; i<30; i=i+1) begin
IN = i;
#10;
end
end
endmodule |
H: Accuracy Class and Full Scale
I am using this formula to calculate the class of an instrument:
\$\text{Accuracy Class} = \frac{\max{\text{(Absolute Error in Range of Measurement)}}}{\text{Full Scale Value}}\$
The question is, FSV is the full scale OUTPUT or INPUT value?
For example, I have a pressure sensor with a full-scale input of 100 kPa; the full-scale output is 0.999 V and the max absolute error is 0.016261156 V.
The class of the instrument is 0.016261156 (using FSV=100kPa) or is 1.627743331 (using FSV=0.999)?
(The second one makes more sense to me, because it is a usual value and the units in the quotient cancel out.)
AI: If the FS output is 0.999 V and the maximum error in the output is 0.016 V, then the accuracy class is 1.6%.
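Working the question's numbers through the formula, with both quantities in volts so that the units cancel:
$$ \text{Accuracy Class} = \frac{0.016261156{\rm\,V}}{0.999{\rm\,V}} \times 100\,\% \approx 1.63\,\%, $$
which rounds to the 1.6% class figure.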
H: Arduino Watchdog Power Cycle Circuit
I'm looking at building a watchdog that will power cycle my Arduino (weather station) when it is not responding (my Arduino gets a CMOS lockup and doesn't respond to reset).
I've read about designs with timers but haven't yet seen a simple design that would work for at least x hours. So I was thinking of using a second Arduino (to be replaced with an ATtiny) and leveraging interrupts + sleep mode, so that if no ping is received from my main Arduino, it will power cycle it. I was thinking of a circuit with an NPN transistor.
I suspect a lot can be optimized there... any suggestions?
AI: I do not understand what the diode is for. Not knowing what is connected to your Weather Pro Mini, opening the ground could cause some strange things to happen. Get rid of the diode and transistor and connect the grounds together. Instead, put a logic-level P-channel MOSFET in the VCC feed to the weather micro; its Vgs threshold must be less than 2.5 V. Connect the source to the VCC feed, the drain to the weather micro's VCC, and the gate to D3: when the port is low, the weather micro will be on. Use this to also power the peripherals; this guarantees they will be reset as well.
H: Symbol for potentially dirty ground
I have a net which, although it should be ground, could potentially be dirty and contain all sorts of minor noise. What is the correct way for me to denote this in my schematic?
AI: Surprisingly, the IEC symbols come close, at least for a "Clean" ground; see No. 5018:
For a "Dirty Ground" perhaps something like this would be close enough: |
H: Using Current Branch Method to Solve this Circuit
I have the following circuit.
It is given that:
\$R_1=10\Omega\$
\$R_2=5\Omega\$
\$R_3=3\Omega\$
\$R_4=8\Omega\$
\$R_5=12\Omega\$
\$B_1=10V\$
\$B_2=5V\$
So using KCL on the two nodes gives;
\$I_1-I_2-I_3=0\$
\$I_3+I_5-I_4=0 \$
Applying KVL is where I'm having trouble. The left loop and right loop are pretty straightforward.
\$ 10-10I_1-5I_2=0\$
\$5-12I_5-8I_4=0\$
However, for the middle loop I got;
\$ -5I_2-3I_3-8I_4=0\$
But it should be \$5I_2-3I_3-8I_4=0\$ (or \$-5I_2+3I_3+8I_4=0\$ depending on chosen convention). This gave the correct values when I checked using software. So what's wrong with my first equation? Looking at the middle loop, \$I_2,I_3,I_4\$ are all travelling from + to -, so shouldn't all the voltages across the resistors be negative?
AI: Problem
You are getting the wrong answer because you have not accounted for the fact that \$I_2\$ is seen in the negative direction by the loop current of the central cell. \$I_3\$ and \$I_4\$ have the same sign for a current which flows in the clockwise direction, but \$I_2\$ flows counter-clockwise around the middle cell, so it enters with the opposite sign.
Fixing this sign yields the correct equation.
Alternative analysis
I find it very helpful to analyze KVL as a full loop, as below. I have defined three loop currents: $$I_\alpha, I_\beta, I_\gamma$$
I wrote the loops as a function of those currents, and then I provided the transformation for the currents $$ I_1, I_2, I_3, I_4, I_5$$:
In my opinion, this is a much more reliable way to write the eqns, as it does not require you to remember to analyze each current leg for the appropriate sign. |
H: Protecting small DC motor (power door lock actuator) from burning out due to stall
This question of small DC motor burnout due to heat/stall has been asked in a few different ways/scenarios already, so I think I have a basic understanding of what the issue is -- I'm interested in understanding why the below is a problem in my case but seemingly not in another. Appreciate any insight anyone can share!
I am trying to use a 12VDC power door lock actuator in a specialized application, where it's connected to an on/off momentary push button switch that causes the actuator to push a lever to hold a door open when powered. Releasing the button to cut the power causes the door, lever, and actuator to retract simply because of gravity. My power source is an AC to 12VDC power supply rated at 8A. I want to be able to hold the button down to keep the door open for a "reasonable" amount of time -- perhaps 30s to 5mins, maybe.
My issue is that holding the button for a short time (approx 10-30s) seems to keep burning out the DC motor in the actuator. I understand it's likely because the actuator shaft causes the motor to stall once it reaches its max extension, which in turn causes the current draw and motor's heat to go up, which can burn out the motor.
So why don't these motors burn out when being used in their intended use case -- car door locks? These actuators have limited travel and the motors stall by design when they extend or contract fully. I know that in most cases, electronic door locks are probably just quick momentary ON states so the stall is very short...but as children in my car have proven, you can hold the lock/unlock button in a car and nothing burns out inside the door frame when you do. Why is that?
And I guess most importantly, is there any way I can use these actuators to hold open a door like I want without them burning out by adding something to the circuit or perhaps changing my power supply?
Thank you, thank you!
Equipment details:
Motor is this type:
https://www.parts-express.com/High-Power-Door-Lock-Actuator-2-Wire-330-010
I've burnt through a few different manufacturers of this motor, but they're all more or less the same exact thing.
Power supply is this:
https://www.amazon.com/gp/product/B07G5BQGYD/
AI: I'm interested in understanding why the below is a problem in my case but seemingly not in another.
Look into how those things work in a car. Pull the door trim off, attach an oscilloscope, and activate the door locks via remote, and see how the car uses those things. You must drive them the same way.
So why don't these motors burn out when being used in their intended use case -- car door locks?
Because these motors must stay off. Being ON is an exceptional situation for them.
If you can count to three while the motor is turned on, it's on too long already!
Protecting small DC motor (power door lock actuator) from burning out due to stall
The things are meant to stall at the end of their motion. That's not the problem. The problem is that they cannot stay on for any length of time after that. You cannot just connect them to a button directly and expect them to survive. They were not designed to be used that way, and they won't take it.
In a car, the lock actuator is turned on for a fraction of a second to change the state of the lock. The driver circuit senses back-EMF to detect successful lock/unlock action. The body control module or the lock module can then reverse the operation if any of the motions fail. At least my Volvo does that: if you hold the door lock button "stuck in place" while locking the car, the doors will lock (except the one you're messing with), then they'll unlock. They'll only stay locked if all doors have successfully locked. Other cars also detect the stalls, but it's up to the vendor to use that information for something.
It seems that your design is just too simplistic. Those motors are only meant to be operated when changing the state of the lock: engaging it, or disengaging it. Intermittent operation for less than a second is what they are designed for.
To keep the motor happy, the system should be mechanically self-stopping: when the motor is off, nothing should be moving by itself. You then, at simplest, have two buttons: open and close. Each button activates a circuit that delivers a fixed duration pulse to the motor, of the correct polarity.
If you want to use a single button, then you'll need a circuit that detects the opening and closing of the button and activates two pulse generators as-if there were two buttons. |
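One common way to build such a fixed-duration pulse generator (one option among many) is a 555 timer in monostable mode, whose output pulse width is \$t = 1.1\,R\,C\$. With assumed values of \$R = 100{\rm\,k\Omega}\$ and \$C = 4.7{\rm\,\mu F}\$, for example:
$$ t = 1.1 \times 100{\rm\,k\Omega} \times 4.7{\rm\,\mu F} \approx 0.52{\rm\,s}, $$
which is in the sub-second range these actuators are designed for; the 555's output would then drive the motor through a suitable transistor or relay, not directly.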
H: BJT Is Saturation current temperature dependence singularity
So, I am currently looking at the ways transistors change their parameters with temperature according to SPICE for a thorough investigation of synthesizer VCO exponential converter. Particularly, the Is saturation according to this SPICE description varies with temperature like this:
What troubles me is the 1/(T1-T0) term in the exponent. Say the saturation current is measured at 25 degrees Celsius; then, when we try to determine the Is at that same temperature, we get Exp[1/0], which is an obvious singularity. At temperatures slightly lower than 25 degrees this factor tends to zero, and at temperatures slightly higher it tends to infinity. What am I understanding wrong here, or is the formula just wrong? If so, what's the right one?
AI: The documentation simply got it wrong. It's old, and probably nobody cared enough to fix it, assuming they spotted the mistake. The units don't work out, and the possibility of a 0 in the denominator shouldn't be there either. There's no singularity.
According to the Berkeley SPICE 3f5 source code, the equation should read:
$$
\begin{aligned}
V_T(T) &= T \frac{k}{q(1{\rm\,V})} \\
I_S(T) &= I_S(T_0)
\left[\frac{T}{T_0}\right]^3
\exp \left[
\frac{E_g}{V_T(T)} \left(\frac{T}{T_0} - 1\right)
\right].
\end{aligned}
$$
The code involved, edited for readability, taken from src/lib/dev/bjt/bjttemp.c, is:
vt = here->BJTtemp * CONSTKoverQ;
ratlog = log(here->BJTtemp/model->BJTtnom);
ratio1 = here->BJTtemp/model->BJTtnom -1;
factlog = ratio1 * model->BJTenergyGap/vt + model->BJTtempExpIS*ratlog;
factor = exp(factlog);
The 1V factor isn't written explicitly in the source code, but is required to get the units correct. Numerically it makes no difference. Multiplying by 1 is a no-op, not even worth putting in the source code. Back then, compilers might have had trouble optimizing such a multiplication-by-one out.
Reorganizing things a bit:
$$
\begin{aligned}
I_S(T) &= I_S(T_0)
\left[\frac{T}{T_0}\right]^3
\exp \left[
\frac{E_g q (1{\rm\,V})}{k T} \left(\frac{T}{T_0} - 1\right)
\right] \\
&= I_S(T_0)
\left[\frac{T}{T_0}\right]^3
\exp \left[
\frac{E_g q (1{\rm\,V})}{k} \left(\frac{1}{T_0} - \frac{1}{T}\right)
\right] \\
\end{aligned}
$$
Where, from src/include/const.h and src/lib/dev/bjt/bjtsetup.c we get:
$$
\begin{aligned}
q &= 1.6021918\cdot{10}^{-19}{\rm\,C} \\
k &= 1.3806226\cdot{10}^{-23}{\rm\,J\cdot K^{-1}} \\
E_g &= 1.11{\rm\,eV} \\
\end{aligned}
$$
This explains the "mysterious" \$E_g\cdot q(1{\rm\,V})\$ product in the formula: it converts \$E_g\$ customarily given in electron-Volts to Joules, to match the SI units of the Boltzmann constant value used.
In SI units, the formula becomes, simply:
$$
\begin{aligned}
I_S(T) &= I_S(T_0)
\left[\frac{T}{T_0}\right]^3
\exp \left[
\frac{E_g}{k} \left(\frac{1}{T_0} - \frac{1}{T}\right)
\right], \\
\end{aligned}
$$
just as jonk had commented. |
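As a numeric sanity check of the corrected formula near room temperature (using \$T_0 = 300.15{\rm\,K}\$, \$T = 310.15{\rm\,K}\$ and the constants above):
$$
\frac{I_S(T)}{I_S(T_0)} = \left[\frac{310.15}{300.15}\right]^3 \exp\left[\frac{E_g}{k}\left(\frac{1}{300.15{\rm\,K}} - \frac{1}{310.15{\rm\,K}}\right)\right] \approx 1.10 \times e^{1.38} \approx 4.4,
$$
i.e. \$I_S\$ roughly quadruples for a 10 K rise -- finite and well-behaved everywhere, consistent with the familiar rule of thumb that \$I_S\$ roughly doubles every 5 °C.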
H: Can this resistor be replaced by a wire or fuse?
My power supply broke. Upon further inspection, I found that the high-power resistor (15 ohm, 10 W) had failed.
Unfortunately, I can't purchase such a resistor (with the same specifications, not to mention the same brand) and I don't have the schematics and am not sure what exactly the resistor is for (what devices does it limit the current for?) and I am wondering how dangerous would it be to:
Replace the resistor with a wire. That means essentially no resistance and much more current flowing through the nearby components. Even though I don't see components that would obviously be affected by overcurrent, it's still something I'd only do as a last resort.
Replace the resistor with a fuse. We can calculate the appropriate current (230 V across 15 ohm gives roughly 15-16 A) and solder it in. This is an even better option, since in the worst case it will blow and save the other components. The last thing I want is to blow the chip when, right now, only one resistor is blown.
How appropriate do these options seem? Is there another way to solve the problem?
AI: The brand is not relevant. And you can most definitely buy a resistor that would fit, i.e., one with the same resistance and an approximately equal power rating.
You're assuming that the rest of the supply still works. That may or may not be true...
it's really hard to find 15 ohm 10 W resistor in my area right now
Harder than reverse engineering the supply to understand why the 15 ohm resistor was there? Nah, I don't buy that, unless your time is free... You can in fact do both: order a resistor from China, for cheap, but with slow delivery. In the meantime, reverse-engineer the supply at least around the resistor. Then you'll figure out what the resistor is for, and that in all likelihood it was needed :)
I've already tested the rest by applying a fuse for a few seconds; the PCB provides the expected outputs, so I'm not 100% sure, but 99% sure, that everything else works. What's your opinion on the suggested workarounds?
The designers used a resistor for a reason. You'd have to reverse engineer the circuit, make sure you understand their reason, and then satisfy yourself that the workarounds won't break something. Engineers don't put such elements in the circuit for no reason. Do not use a wire or a fuse. |
H: Inputs for flip-flop sequential circuits
I am not very sure if these inputs are correct before I draw my truth table.
There are too many lines and I am confused.
JA = QB = KA = B
KA = B
DB = D'
TC = 0
AI: Try using Q(t) and Q(t+1) while writing your state equations to avoid confusion.
Also, just use QA and QB (and their complements if necessary) instead of using "A" and "B" as outputs.
Hope this helps you:
JA(t) = KA(t) = QB(t)
QA(t+1) = (QA(t)' AND JA(t)) + (QA(t) AND KA(t)')
DB(t) = QB(t)'
QB(t+1) = DB(t)
TC(t) = 0
QC(t+1) = QC(t) |
H: Calculate the next state for flip flop sequential circuit
Based on my previous question, I managed to draw out the truth table
Inputs for flip-flop sequential circuits
|------|------|------|----|----|----|----|---------|---------|---------|
| QA | QB | QC | JA | KA | DB | TC | QA(t+1) | QB(t+1) | QC(t+1) |
|------|------|------|----|----|----|----|---------|---------|---------|
| 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 |
| 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 0 |
| 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 |
| 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
| 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 |
Starting from state 7 (111), I need to find the next two states. But I am not sure if my QA(t+1), QB(t+1) and QC(t+1) columns are correct. I am trying to learn this by myself on YouTube, but it is quite hard.
Referenced YouTube video: https://www.youtube.com/watch?v=6jteVyUcAQU
Updated based on answer:
|------|------|------|----|----|----|----|---------|---------|---------|
| QA | QB | QC | JA | KA | DB | TC | QA(t+1) | QB(t+1) | QC(t+1) |
|------|------|------|----|----|----|----|---------|---------|---------|
| 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 |
| 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 |
| 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 |
| 1 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 |
| 1 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 1 |
| 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
| 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 |
AI: I'm afraid you have made a mistake in filling the JA and KA columns. You need only to copy the QB(t) column for JA and KA, as both JA and KA are connected only to QB(t).
Your QC(t+1) and QB(t+1) columns are correct, but you need to apply the modification I stated above to your JA and KA in order to get the correct answer for QA(t+1).
If you replace JA and KA in the equation for QA(t+1), you will get:
QA(t+1) = (QA(t)' AND QB(t)) + (QA(t) AND QB(t)') = QA(t) XOR QB(t)
So the fastest way to compute QA(t+1) is to toggle QA(t) whenever QB(t)==1 and leave it unchanged whenever QB(t)==0.
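Applied to the sequence the question asks for: starting from 111, QB(t)=1, so QA toggles to 0, QB(t+1)=QB(t)'=0, and QC is unchanged at 1, giving 001. From 001, QB(t)=0, so QA stays 0, QB toggles to 1, and QC stays 1, giving 011. So the two states following 111 are 001 and 011, which matches the corrected truth table rows.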
H: Temperature coefficient of a resistor
Is it correct to say that the current through a resistor is PTAT (proportional to absolute temperature) if it increases when the temperature increases?
AI: I would say no.
The effects of temperature coefficient can be expressed as
$$ R= R_0 \left[1 + \alpha\left(T-T_{\text{ref}}\right) \right]$$
So, only the change in resistance (not the resistance) will be proportional to the temperature difference, and that holds best for small temp changes about a working point.
To be even more specific,
$$ R=\frac{\rho \ell}{A}$$
for objects with constant cross sectional area, where \$\rho\$ is resistivity, \$\ell\$ is length, and \$A\$ is cross sectional area.
Now, temperature will change the dimensions, and it will also change \$\rho\$. From https://physics.info/electric-resistance/
The general rule is resistivity increases with increasing temperature
in conductors and decreases with increasing temperature in insulators.
Unfortunately there is no simple mathematical function to describe
these relationships.
Thus, we tend to use the linear approximation for small temp changes, but work around a reference temp, as in the first equation presented. |
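As a numeric illustration of that linear approximation (copper's \$\alpha \approx 0.0039{\rm\,K^{-1}}\$ is a textbook value; the other numbers are assumed): a \$100{\rm\,\Omega}\$ copper-wire resistor taken from 20 °C to 70 °C becomes
$$ R \approx 100{\rm\,\Omega}\left[1 + 0.0039{\rm\,K^{-1}} \times 50{\rm\,K}\right] = 119.5{\rm\,\Omega}. $$
Note that the formula is linear in \$\Delta T\$ about the reference point, not proportional to absolute temperature \$T\$; the two coincide only in the special case \$\alpha = 1/T_{\text{ref}}\$.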
H: 4-bit synchronous counter IC: Do I need to pre-synchronize the count enables? (Metastability?)
Using a 74LVC163 synchronous 4-bit binary counter and would like to predicate the counting on an asynchronous signal. If I apply this signal directly to one of the enables (e.g. CEP), am I assured that the counter will either count or not count when clocked? Or might it get into a metastable funk if clocked when the enable is neither high nor low? (Possibly resulting in outputs changing long after a clock, for example.)
If metastability is a concern in this scenario, I can synchronize the enable signal through a couple of flip-flops before it reaches the counter. Not a HUGE deal. Just wondering if that's truly necessary for reliable operation. If it isn't, I'd love to save an IC.
AI: Yes, it is necessary. All control inputs to a '163 have setup and hold requirements with respect to the clock input.
And the issue is not just metastability. There are four individual FFs inside the chip, and if the setup/hold requirements are not met, they might interpret the control inputs differently (because of varying internal path delays), leading to unexpected state transitions. |
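For reference, the risk you are weighing is usually quantified with the standard synchronizer model
$$ \text{MTBF} = \frac{e^{\,t_{met}/\tau}}{T_W \cdot f_{clk} \cdot f_{data}}, $$
where \$t_{met}\$ is the settling time allowed before the next stage samples, \$\tau\$ and \$T_W\$ are constants of the logic family, and \$f_{clk}\$ and \$f_{data}\$ are the clock rate and the rate of asynchronous enable transitions. Because \$t_{met}\$ sits in the exponent, giving the signal a full clock period to settle in a two-flip-flop synchronizer improves the MTBF by many orders of magnitude over feeding CEP directly.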
H: How should I power a 3V toy motor?
I'm helping my brother (8) build a small toy car powered by a tiny 3V motor that we have at home. Most tutorials we've found on the internet suggest using a 9V battery, but never specify the voltage of motor they're using.
As I only have basic, theoretical knowledge of electronics, I'm assuming I should power the motor with an equal voltage (3 V). Could I use a couple of AAA batteries in series (which, frankly, I have no idea how to do except maybe use electrical tape to hold them together) to power the motor? Would that be enough? Or should we use the 9 V battery instead?
I'm just... trying to make sure we don't hurt ourselves or burn the house down.
AI: Yes, supply approximately 3 V. (If you were to use higher voltage, you might need additional current limitation to avoid burning out the motor.)
You can connect AAA in series to add up the voltages, for example using a battery holder like https://www.sparkfun.com/products/14219 with a built-in switch:
You can also move up to AA for longer running time. There are many variants of battery holders available.
You will need basic soldering skills, and it doesn't hurt to buy a cheap multimeter. I'd say you're running minimal risks with such low voltages. A 9 V battery will sting if you put your tongue to the terminals, but that's pretty much it. Oh, and most people burn themselves on a soldering iron once before learning to show proper care. You might be able to wrap the stripped wire ends on the motor terminals to avoid the soldering.
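For a rough idea of running time (the motor current here is an assumption -- measure yours with the multimeter): if the motor draws around 200 mA, a pair of AAA cells in series (about 1000 mAh, since series cells add voltage, not capacity) would last roughly
$$ t \approx \frac{1000{\rm\,mAh}}{200{\rm\,mA}} = 5{\rm\,hours}. $$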
Have fun! :-) |
H: Will using magnetic base test indicators with breadboards cause electrical interference?
The situation leading to the question
In the Quick Tips #8 // Precision Helping Hands YouTube Video, a "helping hands" with various names such as "Magnetic Base Adjustable Metal Test Indicator Holder" is used with oscilloscope probes on a breadboard circuit, similar to this:
I am planning on using a separate one of those holders for each oscilloscope probe with my own breadboard (a total of 4 probes for the 4-channel oscilloscope). But, for convenience of quickly rearranging these holders all around the breadboard, I am considering mounting both the breadboard and the holders with their magnetic bases on top of some plate steel from the local big box hardware store (it needs to be ferromagnetic for the magnetic bases to clamp down onto, so it cannot be aluminum). I reason that the steel plate needs to be just large enough to move the mag bases around for clamping down, and just thick enough to avoid the "lever arms" of the bases bending up the steel (can't use the 22-gauge el cheapo stuff from the store for this; I believe it's gotta be something like 1/8" thick steel).
The question
Will the setup shown above cause electromagnetic interference (or actually improve isolation against such interference) on the circuit under test, or on surrounding equipment such as cables (I doubt the cables will be affected if they have proper shielding), leading to distortion or other artifacts at the oscilloscope?
I'm considering the following possible contributions:
Metal in the arms (probably mostly aluminum but not sure).
Metal in the magnetic bases.
Magnetic fields emanating from the magnetic bases themselves.
Metal in the plate steel.
Extra credit (can we do that in SE??) goes to anyone who can identify how the dimensions (length, width, and thickness) of the plate steel I use as the base for this setup affects the readings.
Oscilloscope is one of the Rigol 4-channel scopes. The scope is not owned by me, so if the specific make/model of scope is relevant, add a comment and I'll update this question with that detail. All scope probes are set to 10x setting.
The circuit under test is an op-amp-based circuit similar to the one given by Pulsing Led for Competition - Elektor LABS, and thus is not at all a high-frequency circuit. So my naive bias here is "No, it ain't gonna matter, so go for it!"
AI: A current can be induced in a wire by a time-varying magnetic field. If the field itself is not time-varying you can also induce a current by moving a wire with respect to the magnetic field.
In your case there appears to be no movement, so no current would be induced. Furthermore, if you place the magnets onto a steel base, most of the magnetic field will be contained within the gap, if any, between the magnet and the plate, so you would need to have wires very close to the magnets to see any effect at all.
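To put a rough number on "very close": the motional EMF in a straight wire is emf = B·L·v. The values below are illustrative assumptions (stray field near the base, wire length inside that field, hand speed while repositioning), not measurements of any specific holder:

```python
# Rough motional-EMF estimate: emf = B * L * v for a straight wire
# moving through a uniform field. All numbers are illustrative
# assumptions, not measurements of any specific holder.
B = 0.1    # tesla, assumed stray field right next to the magnetic base
L = 0.02   # metres of wire actually inside that field
v = 0.1    # m/s, speed while repositioning the holder by hand

emf = B * L * v
print(f"emf while moving: {emf * 1e3:.2f} mV")  # fractions of a millivolt

# With the setup at rest, v = 0, so the induced EMF is exactly zero.
print(f"emf at rest: {B * L * 0.0} V")
```

Even while the holder is actually being moved, the induced voltage is in the sub-millivolt range, and it vanishes entirely once everything is stationary.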
The circuit you want to build does not appear to have any components that are themselves influenced by a constant magnetic field, such as Hall effect sensors. (I couldn't see the schematic without registering and I'm not going to do that.)
So, don't worry. Have fun. |
H: STM32F1/F4: Risks of driving a LED from PC13
In the STM32F411xC/E advanced Arm®-based 32-bit MCUs reference manual on page 71/844, I can read:
Due to the fact that the switch only sinks a limited amount of current (3 mA), the use of PC13 to PC15 GPIOs in output mode is restricted: the speed has to be limited to 2 MHz with a maximum load of 30 pF and these I/Os must not be used as a current source (e.g. to drive an LED).
You can read the same note in the STM32F103's reference manual.
What can go wrong or what is the risk if I connect a LED to PC13?
It seems "famous" prototyping boards like the Blue Pill and others I have found do have a LED connected to PC13. Is that wrong? Should I try not to use that pin in those boards or is it safe as long as the LED current is less than 3 mA?
AI: The datasheet is very strict about this: the pin must not source current.
And the Blue Pill is not wrong: the pin sinks current to turn the LED on.
Sourcing and sinking in this case just describe the direction of current flow relative to the pin. An IO pin that sources current means current flows out of the MCU (it comes from the positive supply), and an IO pin that sinks current means current flows into the MCU (it goes to ground).
Inside the MCU there is an area called the Backup Domain, which can be powered via the VBAT battery supply pin even when the main supply is turned off, so that the real-time clock can keep time and a small block of SRAM can retain its contents during main power loss. The IO pins PC13/PC14/PC15 belong to this Backup Domain and must be powered from it. There is a switch, or supply voltage multiplexer, that selects the VBAT supply for the Backup Domain when no VCC supply is available. This switch can pass, or in datasheet terms sink, the few microamps that normally flow through it, but it cannot pass much more current or it might break, or the voltage will drop too much for the Backup Domain to work properly.
It does not mean there is a constant current sink anywhere. It just means the internal power switch can handle up to 3 mA safely in all situations, and that the Backup Domain and the power switch itself will stay operational as long as these three special weak IO pins are used within the given limits.
So, as the switch can provide only a limited amount of current to the Backup Domain, that is why the datasheet says the three IO pins must not be used as a current source: there can be no load that draws current out of the pin. It is acceptable to drive, for example, a few CMOS gate inputs, as they draw no DC current and the load is mainly capacitive. Since fast transitions into capacitive loads need larger currents than slow transitions, the datasheet says to limit the output pin drive strength to the weakest, 2 MHz, setting and the capacitive load to a maximum of 30 pF.
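If you do copy the Blue Pill arrangement (LED anode to 3.3 V, cathode through a resistor to PC13, pin driven low to sink current and light the LED), a quick calculation keeps the sink current comfortably under the 3 mA limit. The forward voltage and current target below are assumptions for a typical red indicator LED:

```python
# Series-resistor sizing for an LED sunk by PC13 (Blue Pill style:
# anode to 3.3 V, cathode through R to the pin, pin driven low = on).
# Vf and the current target are assumptions for a typical red LED.
VCC = 3.3        # V, supply rail
VF = 2.0         # V, assumed LED forward voltage
I_TARGET = 2e-3  # A, kept safely below the 3 mA switch limit

r_min = (VCC - VF) / I_TARGET
print(f"minimum resistor: {r_min:.0f} ohm")  # 650 ohm

R = 680.0  # nearest standard E12 value above the minimum
i_actual = (VCC - VF) / R
print(f"actual sink current with {R:.0f} ohm: {i_actual * 1e3:.2f} mA")
```

With a standard 680 Ω part the pin sinks roughly 1.9 mA, inside the datasheet limit with some margin.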
H: How do Ethernet controllers show activity with the LED indicator?
Ethernet controllers usually have an "activity" LED which blinks in some sort of correspondence with the packets traversing the port.
I'm wondering how the controller decides when to turn the LED on and off. It's not a random or timed flashing when data is being transmitted - it is somehow linked to the data transmissions. But the individual transmissions would likely happen much faster than our eyes could see (or at least it would look like extremely fast "blips" on the LED, or the LED would just appear to be illuminated solid).
How do these controllers convert the extremely fast pulses to something that resembles "activity" to our brains?
I remember seeing somewhere (some odd piece of hardware) that the indicator speed could be adjusted, but I can't remember where I saw that, or how it applied to the port.
AI: How do these controllers convert the extremely fast pulses to something that resembles "activity" to our brains?
That's pretty much up to the controllers. But yeah, you can safely assume that every packet the MAC layer detects turns on the LED and resets a counter to its full value, that this counter is counted down with a fixed clock, and that when it reaches zero, the LED turns off.
Or some variation thereof that allows short packets to be seen.
Maybe throw in a state machine that allows for "blinking" in case of prolonged activity. Just some cheap digital logic in the fabric. |
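The retriggerable-counter scheme described above can be sketched in a few lines. The reload value and tick period here are arbitrary illustrative choices, not anything from a real controller:

```python
# Minimal model of the pulse-stretcher: every detected packet reloads
# a down-counter; the LED stays on until the counter reaches zero.
RELOAD = 5  # ticks of LED on-time per (burst of) packets

def stretch(packet_ticks, n_ticks, reload=RELOAD):
    """Return the LED state per tick; packet_ticks marks ticks with traffic."""
    led, counter = [], 0
    for t in range(n_ticks):
        if t in packet_ticks:    # packet seen: reload the counter
            counter = reload
        led.append(counter > 0)  # LED on while the counter is non-zero
        if counter:
            counter -= 1
    return led

# A single short packet at tick 0 lights the LED for RELOAD ticks:
print(stretch({0}, 10))  # [True]*5 + [False]*5
```

A second packet arriving before the counter expires simply reloads it, so sustained traffic keeps the LED lit, which is exactly the "variation that allows short packets to be seen".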
H: Cannot connect microphone using audio jack
I have a microphone and a female 3.5 mm audio jack. I want to use the jack to get the signal from the microphone for a project with an Arduino. To test whether the microphone works, I tried connecting it to an oscilloscope; however, no signal from the microphone is received, and all I see is noise.
There are no problems with the individual components. I set the oscilloscope range to be low enough. Can anyone suggest why I'm not receiving a signal? Am I not wiring the microphone correctly?
This is all from a guide on how to connect a microphone to an arduino:
https://www.instructables.com/id/Arduino-Audio-Input/
My schematic is a shortened version of the one on that blog.
AI: You have it wired completely wrong.
Your batteries are simply short circuited - they are probably dead by now.
Your microphone is also connected wrong. That appears to be an electret microphone. It needs a current source, but not wired the way you have tried to hook it up.
You have the oscilloscope probe shorted to its own ground.
Your microphone connection should look like this:
(Image borrowed from this other question.)
The point marked Vcc goes to the positive end of your two cells, the ground end goes to the negative end of your two cells.
You can set your oscilloscope to AC coupling and leave out the shown capacitor.
Connect the scope probe to the junction of the resistor and the microphone.
Connect the ground of the scope to the negative end of your cells.
If your batteries still have any juice, you should now be able to see a signal on your scope.
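For completeness, here is a rough sizing of the bias resistor shown in the borrowed figure. The capsule current is an assumption for a generic electret, not from any datasheet:

```python
# Operating point of the electret bias resistor in the figure above.
# Numbers are assumptions for a generic electret capsule.
VCC = 3.0       # V, two 1.5 V cells in series
I_MIC = 0.5e-3  # A, assumed capsule quiescent (FET drain) current

# Aim for roughly half the supply across the capsule so the output
# can swing both ways before clipping:
V_MIC = VCC / 2
r_bias = (VCC - V_MIC) / I_MIC
print(f"bias resistor: {r_bias:.0f} ohm")  # 3000 ohm
```

A 2.7 k or 3.3 k standard part is close enough; the exact value is not critical as long as the capsule gets a milliamp or less and its terminal sits well away from both rails.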
You appear to be working from this diagram from the instructables page:
The "amplifier" block is incomplete. There is much more that goes into it than the drawing shows.
The battery symbols refer to 9V batteries rather than the 1.5V cells you appear to be using.
The batteries are connected in series, and then to the amplifier rather than being short circuited as you have them.
That "instructable" is as poor as I've learned to expect from that site.
It goes into some detail about simple things because it is supposed to be for beginners. Then it tells you to "build a non-inverting amplifier" out of the TL072, but gives no information about how - it just leaves the beginners it is intended for completely hanging.