H: Reasons for choosing a transistor instead of an op-amp as a buffer
I have often seen circuits for guitar pedals that use an emitter-follower configuration as the input and output buffer (see example below). If an op-amp can provide a greater input impedance / lower output impedance, why are they not used instead?
Is it simply the case that a single-transistor circuit is cheaper to mass-produce? Some people (audiophiles) say that transistors sound better, claiming that op-amps have a more "clinical", less warm sound. I am quite skeptical about this, since there is no obvious difference in the frequency response at audible levels.
If I'm not trying to mass produce a pedal and I'm looking to build something of quality, is there any reason not to use an op-amp as a buffer? In your experience, do op-amps have any intrinsically negative effect on the sound?
AI: From a pure engineering point of view an opamp circuit can be made such that it will have less distortion than a simple emitter follower as shown in your schematic. I can build both (in real life and/or in a circuit simulator) and prove that with hard numbers.
That is not to say that the simple emitter follower is worse or better (for audio) than an opamp based solution. What I can measure/simulate are only certain aspects.
Some prefer Tube amplifiers which most of us know add distortions. So the most distortion free solution from an engineering point of view might not be the best sounding one.
Regarding your simple emitter follower: as commented, it is a well known, simple to use, robust, proven, low cost solution. There is no reason to use an opamp. The lower output impedance isn't always needed. And low enough is low enough. There is no need to go lower than "low enough" now is there?
I am quite skeptical about this
and you should be. The emitter-follower solution does the job; there is no need to make things more complex than necessary.
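As a rough illustration of "low enough" (all values below are assumptions for the sketch, not taken from the schematic), the follower's small-signal output impedance is approximately re + Rsource/beta:

```python
# Emitter-follower output impedance, rough small-signal estimate.
# All values are assumptions for illustration, not from the schematic.
Ic = 1e-3           # 1 mA bias current (assumed)
Rsource = 100e3     # 100k source impedance, e.g. a guitar pickup (assumed)
beta = 200          # typical small-signal transistor current gain (assumed)

re = 0.025 / Ic                       # intrinsic emitter resistance: ~25 ohms
zout_follower = re + Rsource / beta   # ~525 ohms

print(round(zout_follower, 1))
```

A few hundred ohms driving a ~1 M pedal input is a negligible divider loss, which is exactly the "low enough is low enough" point.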
H: Are audio cassettes FM or AM?
I know that cassettes store an analog signal, but is the signal Amplitude-Modulated (AM) or Frequency Modulated (FM)?
AI: Neither, there is no modulation involved. The magnetization on the media is directly (and hopefully fairly linearly) related to the waveform amplitude.
There is a high frequency bias signal added to the audio signal, to get the resulting signal on the tape to a linear range of the magnetization curve, but the signal at the head is a sum of the two, not a modulation.
That image is from here, a good description of the process. This is stressing my memory of tape decks I had 40 years ago... The one instance I recall of amplitude modulation (kinda) was Dolby HX, which changed the amplitude of the bias in response to high-amplitude audio signals, to keep the resulting signal from going into saturation. https://en.wikipedia.org/wiki/Dolby_noise-reduction_system#Dolby_HX/HX-Pro
H: Is there current flowing out of an Operational Amplifier output?
I came across the following self-study problem, taken from The Analysis and Design of Linear Circuits (8th edition).
I was not intending to solve the problem, since I am just looking over all problems in the book. Something caught my attention, however.
I was looking at node c, the one labeled \$v_c\$ in the figure. If I were to apply Kirchhoff's Current Law to perform node-voltage analysis, I would not know what currents to write down. I know the current flowing into the positive input of the second op-amp is zero, so either current flows into/out of the output of the first op-amp and through the 150K resistor, or no current flows through the 150K resistor or through the first op-amp's output at all. Then, what is the point of cascading two op-amps?
AI: Yes, current can flow into and/or out of the output of an op-amp.
However, an op-amp provides a voltage output. It is the circuit that surrounds the op-amp that dictates what current will flow into/out of the op-amp's output.
To apply hand analysis to your circuit you would assume:
\$V_b = 0\$ (due to op-amp action)
\$V_e = V_c\$ (due to op-amp action)
Apply KCL at the \$V_b\$ and \$V_o\$ nodes.
You now have 2 equations with 2 unknowns (\$V_o\$ and \$V_c\$).
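A minimal numeric sketch of the kind of result this hand analysis produces, using a hypothetical inverting first stage followed by a non-inverting second stage (all component values below are assumed, not the book's):

```python
# Hypothetical two-stage example of the hand-analysis result (component
# values assumed, not the book's): an inverting first stage (R1 in, R2
# feedback) followed by a non-inverting second stage (gain 1 + Rf/Rg).
R1, R2, Rf, Rg = 10e3, 150e3, 20e3, 10e3
Vin = 0.1   # volts

# KCL at the virtual ground Vb = 0:  Vin/R1 = -Vc/R2
Vc = -Vin * R2 / R1          # first-stage output
# Second stage is a standard non-inverting amplifier on Vc:
Vo = (1 + Rf / Rg) * Vc
print(Vc, Vo)
```

Current does flow out of the first op-amp's output here: it is whatever the feedback network and the second stage demand.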
H: Electrical Power production response to complete transition to electric vehicles
Just wondering whether current worldwide power-grid capacity could handle a complete transition to electric vehicles. If every existing vehicle were electric, could existing power production handle it on a global level?
AI: It doesn't matter.
Burning fuel to charge batteries -- the absolute worst case -- has somewhere between similar and slightly better efficiency than burning fuel in a car, since the generator's engine can always be run at its optimal operating point.
For trains, this has been done for ages -- quite a lot of diesel locomotives are really "serial hybrid" setups where a diesel generator feeds an electric motor, as this is more efficient. The same approach is used in some cars targeted at customers who require individual mobility with long range.
This is the worst case, however. Since electric cars do not require grid power while they are running, but rather when they are idle, it is possible to schedule the charging cycles during off-peak times. Electricity companies are already introducing remotely managed charging ports that are turned on only for a few hours at night, and house battery solutions that can be charged from a rooftop solar panel, all of which reduce the strain on the grid that is introduced by electric cars.
Total demand, however, still increases. Right now, excess energy produced at night is stored by pumping water uphill; when we charge cars instead, that stored energy will be missing during the day, when consumption exceeds production.
There are also non-technical effects on grid load, such as telecommuting, online shopping, and automation in manufacturing (there are several factories that usually run in the dark because no humans are inside). All of these effects combined shape how, and whether, the grid changes to accommodate demand.
I'd expect that in the next ten years, demand for individual mobility will go down quite a bit while logistics demand rises even further, so many electric cars will be commercial, and they will be charged overnight in industrial areas.
H: Why do our electrical utilities use transformers way over their rated KVA?
My parents' house and my house have different electric utilities, but both are rural electric cooperatives. We both have 120/240V (60 Hz) split-single phase service (this is in the United States state of Tennessee). My service is sized 600A. My dad's service is currently sized 200A, but it's getting upgraded to 400A because his property has outgrown 200A.
My understanding is that transformer sizing should be fairly simple. Volts times amps equals volt-amps. Size your transformer for the volt-amps. I also get that utilities tend to under-rate things and run them "hot," but this just seems extreme.
(Related question: I was never clear whether the "volts" in this equation should be 120 or 240. Is the transformer rating based on the combination phase voltage, or the split phase voltage?)
My transformer has "37.5 kVA" stamped on the side of it. 600A x 120V = 72 kVA. My transformer is 52% the size it should be. Not terrible, but still seems very under-sized.
My dad's transformer (before this morning) had 1.5 kVA stamped on the side of it. They "upgraded" it to prepare for his 400A service upgrade. Now it says 2.5 kVA. 200A x 120V = 24 kVA. 400A x 120V = 48 kVA. His old transformer was 6% the size it should be. Now his new transformer is 5% the size it should be. That's even worse, by a huge amount. That's just ... crazy small.
This white paper backs up my calculations. What am I missing here? This can't possibly be correct.
AI: Distribution transformers have large masses of metal and (often) oil in play, so they can run at large overloads for a long time before the heat builds to a problematic level.
Because of this they are often sized for the RMS load integrated over a period of several hours, which allows them to be far smaller than you would expect from a maximum-rated-breaker calculation.
Remember also that (IIRC) the NEC specifies that continuous loads should not be more than 80% of breaker rating, so that 600 A service is really a 480 A design maximum -- and how often are you maxing that out for more than a few minutes at a time?
Incidentally, if the breaker is 600 A in each leg of a split-phase service, then the power is 600 A × 240 V = 144 kVA, but I would not be at all surprised to see a somewhat smaller transformer used. As I say, integration times are a thing in this game: 50% overload during peak hours is a pretty standard place to be, but the load mostly disappears overnight, so everything gets to cool.
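The nameplate arithmetic can be laid out explicitly (the 37.5 kVA figure comes from the question; the 80% factor is the NEC continuous-load rule of thumb mentioned above):

```python
# Nameplate arithmetic from the answer (split-phase, so 240 V across both legs).
breaker_A = 600
service_kva = breaker_A * 240 / 1000       # 144 kVA nameplate capacity
nec_continuous_kva = service_kva * 0.8     # 80% continuous-load rule
xfmr_kva = 37.5                            # stamped on the transformer

overload_ratio = service_kva / xfmr_kva    # how "undersized" it looks on paper
print(service_kva, nec_continuous_kva, round(overload_ratio, 1))
```

The ~3.8x paper "overload" is only alarming if you assume the service runs at its breaker limit continuously, which residential loads never do.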
I have seen an amusing problem when a potter took on a light industrial building and installed two rather large three phase kilns (Only one could be run at a time, but once the first one starts cooling, spark up the second one), seems the local transformer did not appreciate back to back 12 hour raku firings.
I am somewhat surprised that 600 A single (split) phase service is even a thing; over here, if I wanted 144 kVA of service, it would be three-phase.
H: Need a circuit to convert 230V sine wave into 5V square wave
We have already obtained our result using a transformer for our college project, but we can't find a circuit to convert a 230 V sine wave into a 5 V square wave to supply to a microcontroller.
Edit:
Our team got this circuit from the web, but we are not sure whether it can handle the 230 V mains supply and generate a 5 V square wave.
AI: The other technique would be to use an optoisolator (with appropriate isolation-barrier ratings) along with some current-limiting resistor(s). For a nominal LED drive of 10-20 mA you'd need about 24k-12k. Overall dissipation is about 2.2 W-4.5 W, which would get warm, but you can either use a power resistor or share the load across a series/parallel combination of resistors (make sure they can take the 230 VAC range).

There are several things to watch for in terms of safety. Flameproof resistors are recommended. Put a diode across the optoisolator's LED terminals, oriented in reverse (anode to cathode), to prevent the LED from seeing too much reverse-bias voltage on the other half of the AC waveform (this improves reliability). An MOV and a PTC are nice to have for incoming transients and faults. Verify that the layout physically isolates the AC from the low-voltage DC portion (creepage, etc.); you want to make sure that any fault will not cross over to the low-voltage side.
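A quick check of those resistor numbers (a rough sketch that neglects the LED forward drop and takes dissipation as V²/R, matching the answer's ballpark figures):

```python
# Rough sizing check (assumptions: 230 V RMS mains, LED forward drop neglected,
# dissipation taken as V^2/R as in the answer's ballpark figures).
V = 230.0
results = []
for I_led in (0.010, 0.020):      # 10 mA and 20 mA nominal LED drive
    R = V / I_led                 # ~23k and ~11.5k (answer rounds to 24k/12k)
    P = V ** 2 / R                # worst-case resistor dissipation, watts
    results.append((round(R), round(P, 1)))
print(results)
```

Splitting that dissipation across several series resistors also spreads the voltage stress, which is why the series/parallel combination is suggested.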
H: Half-duplex wireless: Is a tuned antenna more important for the transmitter or the receiver?
I intend for this to be a general/conceptual question that could apply to any wireless communication hardware, however I will use a specific example to talk about.
Let's consider the nRF24L01. (Datasheet).
(Image from makerlab-electronics.com)
As shown above, the standard module uses a PCB antenna.
There is also a version that uses a screw-on omnidirectional antenna (AKA "whip"):
(Image from ebay.com)
Let's say that a pair of PCB-antenna transceivers give a usable range of 10 meters.
Let's also assume a pair of whip antenna transceivers exhibit better range. (The common consensus I've read is that there is a noticeable improvement. Let's say it results in a range of 20 meters).
My question is about having a mixed pair of antennas for half-duplex (one direction at a time) communication. What if one were the PCB antenna, and the other were the whip antenna... Which transceiver (transmitter vs receiver) should use which antenna? Would it even matter?
I'm interested from both a theoretical and practical standpoint, as they are often different. One extra variable I can think of is the orientation of the antennas.
I'm also open to alternate title suggestions from someone who knows better terminology.
AI: If these were the only two devices in the world, and if the transmitter were ideal it would not matter.
However, neither is the case.
Using the "better" antenna on the transmitter would help make the transmitter's signal dominate over other sources at the receiver; whereas using the better antenna on the receiver would increase the level of both desired and undesired sources.
Using a "bad" antenna is more likely to negatively impact the operation of a practical transmitter than a practical receiver. This is likely a small issue here -- power levels and durations are low, so you're not really going to overheat the amplifier due to a mismatched antenna. And the oscillator is probably sufficiently decoupled from the antenna output that it is not going to be pulled off frequency by the antenna.
But even that is not the end of the story.
Legal limitations on transmitter power typically include not only the source power but also the effective radiated power from the antenna. So using a better antenna may mean you need to turn down the transmitter power level.
It's not absolutely clear which antenna is "better" -- we might guess the plastic-encased whip is better than the PCB trace antenna, but that might not be the case. You might, for example, have been given an antenna made for a different band, or one that is mismanufactured.
Since you're actually talking about swapping the entire modules and not just the antennas, there could be implementation issues on either module that make it underperform.
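The "it would not matter" point follows from the Friis equation: the transmit and receive antenna gains enter as a product, so swapping which end has the better antenna leaves the free-space link unchanged. A sketch with assumed gains, frequency, and distance:

```python
# Friis free-space link: Pr = Pt * Gt * Gr * (lambda / (4*pi*d))**2.
# Gains multiply, so swapping which end has the better antenna changes nothing.
# Frequency, distance, and gains are assumed for illustration.
from math import pi

def friis_pr(pt_mw, g_tx, g_rx, freq_hz, d_m):
    lam = 3e8 / freq_hz
    return pt_mw * g_tx * g_rx * (lam / (4 * pi * d_m)) ** 2

pcb, whip = 1.0, 2.0   # assumed linear gains of PCB and whip antennas
a = friis_pr(1.0, pcb, whip, 2.4e9, 20.0)   # whip on the receiver
b = friis_pr(1.0, whip, pcb, 2.4e9, 20.0)   # whip on the transmitter
print(a == b)
```

The asymmetries the answer describes (interference pickup, transmitter loading, legal EIRP limits) are exactly the real-world effects this idealized model leaves out.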
H: Measure ~300kHz Induction Heater RF Power
I have been tasked to gather data from an old running 335 kHz, 5 kW induction heater system. It has no outputs available to report current, power, etc. Basically, there is an assumption that the unit is not doing a good job of maintaining a constant power output for a given power set point.
I was thinking about trying to measure the RF field generated by the unit to determine how consistent it is, although most RF power meters are for >1 MHz. So I was considering using an AM-radio front-end design to receive and rectify the RF across a resistor, then log the analog voltage. The image below is a basic idea; obviously I would recalculate L and C to be resonant.
This question is also giving a solution similar... Measuring relative rf signal strength
I don't care about absolute accuracy, just relative. I can mount an antenna on a PTFE plate that will not move, so that controls device distance etc. Any feedback or alternative options?
AI: I would think measuring the magnetic field of the induction heater with another coil might be the way to go. With an inductor, the primary field is magnetic. Since magnetic fields fall off with the distance cubed (1/R^3), it would be difficult to pick up other sources of magnetic noise at 300 kHz. You could find a coil with a response at 300 kHz, or a magnetic sensor with a response at 300 kHz.
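The 1/R^3 falloff is what buys the noise rejection. As a quick illustrative calculation, doubling the pickup-coil distance costs roughly 18 dB of signal, so distant noise sources are strongly suppressed:

```python
# Relative signal change for a 1/r^3 near-field falloff (illustrative).
from math import log10

def rel_db(r1, r2):
    # field ratio in dB when moving the pickup coil from r1 to r2
    return 20 * log10((r1 / r2) ** 3)

drop = rel_db(1.0, 2.0)   # double the distance
print(round(drop, 1))
```

This is also why the fixed PTFE mounting matters: keeping the sense coil's distance constant keeps the (steep) distance term out of the relative measurement.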
H: Are multiple groundplanes a good idea on a 4-layer pcb?
I'm working on a design of a 4-layer PCB. The stackup is like this:
Top: signal
Inner 1: power distribution traces and copper fill (VCC) in some areas
Inner 2: signal
Bottom: ground plane
There is a lot of unused space on the signal layers. I'm tempted to fill this to give the board fab a break.
But, should I leave these unconnected? Or connect them to ground? Or should I just not fill these spaces?
AI: As a matter of course, I always fill all unused areas with ground -- just make sure you scatter vias around so that all the layers are well connected to each other and essentially look like part of the main ground plane. DON'T create ground pours that connect to the main ground plane at only one point.
This gives you ground plane closer to signals and power in many areas of the board.
Also, as a rule of thumb, I'd put the ground layer next to the VCC layer. But with most designs you can get away with a heck of a lot (unless you have high-speed digital, RF, or analog signals), and often it's not worth worrying about it too much.
H: Identifying polarity of capacitor from PCB printing
I took out the old capacitor and didn't remember to check the polarity because I assumed the board would have more standard markings. Is there any way to figure it out with a meter or by the printing?
AI: The pin on the left is connected to the large power plane. The next task is to figure out whether that plane is positive or negative with respect to the other pin. Since the HC125 is connected to it, it is most likely the ground plane, but if the board has multiple voltages you may have to dig a bit more. If the right pin is connected to pin 14 of the HC125, then it is positive with respect to the ground-plane connection.
H: What is signal resolution?
I'm trying to understand signal resolution, so I am looking for a clearer explanation. Primarily, signal resolution is linked to an analog-to-digital converter.
Quoting from new electronics, http://www.newelectronics.co.uk/electronics-technology/whats-the-difference-between-resolution-and-accuracy/73740/, states: “The resolution of an A/D converter is defined as the smallest change in the value of an input signal that changes the value of the digital output by one count.”
In addition, I have read many more unclear definitions. Specifically, what is signal resolution?
AI: "Signal resolution" refers to how detailed the waveform is allowed to be. Analog signals (red line) have an infinite amount of points that can be attained between a "high" and "low" - this is a reflection of the real world, since we can always find an infinite amount of points between two finite points.
The problem comes when you need to store this data digitally. You can't give a computer an infinite amount of points, since it would require an infinite amount of memory. Therefore we choose a certain "resolution", or amount of attainable points between "high" and "low" in order to describe the waveform.
In the sense of an A/D converter, we need to represent this "infinite wave" in memory. We can use a 4-bit number (for example) to represent the points between "high" and "low". This would result in 16 possible (vertical) points -- see the figure below. If we chose a 10-bit number, it would result in 1024 different possible points, and thereby a greater signal resolution.
“The resolution of an A/D converter is defined as the smallest change in the value of an input signal that changes the value of the digital output by one count.”
- They are simply saying here that, if the A/D converter has a low resolution, then the red line needs to change quite a bit before the black line will jump to the next point vertically (in memory).
Note: "high" and "low" simply refer to the max and min possible values of the signal.
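A minimal sketch of this idea: quantize the same sine wave with 4 bits and 10 bits and compare the worst-case error. The uniform mapping below is one simple (assumed) choice of quantizer, not the only one:

```python
# Quantize one period of a sine at two resolutions and compare worst-case error.
# The uniform mapping below is one simple (assumed) choice of quantizer.
from math import sin, pi

def quantize(x, bits):
    levels = 2 ** bits                        # 16 levels for 4 bits, 1024 for 10
    code = round((x + 1) / 2 * (levels - 1))  # nearest digital code
    return code / (levels - 1) * 2 - 1        # back to the [-1, 1] range

samples = [sin(2 * pi * n / 32) for n in range(32)]
err4 = max(abs(s - quantize(s, 4)) for s in samples)
err10 = max(abs(s - quantize(s, 10)) for s in samples)
print(err4 > err10)   # more bits -> smaller steps -> smaller error
```

The maximum error is half a quantization step, which is exactly the "smallest change that moves the output by one count" in the quoted definition.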
H: Transformer 2x24/2A secondary output
I’m currently learning about transformers. I have a quick question regarding the specification of a 2x24/2A transformer.
Here is a picture of the transformer:
Does 2x24V mean that this transformer has dual secondary windings that create 2x 12 V 1 A outputs, or does it have 2x 24 V 2 A, or maybe 2x 24 V 1 A? I'm quite confused about how the secondary output works on this transformer.
Crude information about the transformer:
Transformator 2x15VAC 72VA
Primary: 220VAC
Secondary: 2x15VAC 2.5A = 75VA
AI: simulate this circuit – Schematic created using CircuitLab
Figure 1. Various wiring options for your transformer.
The secondary arrangement allows various configurations of the transformer output, making it more versatile at very little extra manufacturing cost.
Fig. 1a: Independent 24 V outputs, each rated at 2 A.
Fig. 1b: 48 V, 2 A secondary with a centre-tap.
Fig. 1c: 24 - 0 - 24 V, 2 A secondary. This is the same as 1b except that we have chosen a different point to call 0 V.
Fig. 1d: Parallel connection of the coils results in a 24 V, 4 A output.
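The four wiring options reduce to simple arithmetic on the two identical 24 V / 2 A windings:

```python
# The four wiring options as arithmetic on two identical 24 V / 2 A windings.
V, I = 24, 2
independent = (V, I)     # Fig. 1a: two separate 24 V, 2 A outputs
series = (2 * V, I)      # Fig. 1b/1c: 48 V, 2 A (or 24-0-24 with centre-tap)
parallel = (V, 2 * I)    # Fig. 1d: 24 V, 4 A
print(independent, series, parallel)
```

Note that the total VA is the same (96 VA) in every configuration; only the voltage/current split changes.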
H: Precision Rectifier confusion
It's been a while -- I took a long hiatus due to no longer having a scope, but I finally saved up and got myself one.
I was playing around with a precision rectifier (OPA350PA) and I came across something I don't understand.
Link to datasheet: OPA350 DATASHEET
What's going on with the positive cycle? It's somehow losing voltage somewhere. To make things more confusing for me, I did it on paper: the diode is reverse-biased during the positive half-cycle, so there's no current flowing through the resistive network, making the voltage at each node the same as the input. That means the output should equal the input.
Two case scenarios:
Case 1:
(Orange input / Green output)
Case 2:
simulate this circuit – Schematic created using CircuitLab
AI: You haven't provided details about how you are measuring the signals, but I'm going to guess that you are using an oscilloscope with 10 Meg probes.
The drawback of this circuit topology is that the output impedance is different between positive-going half-cycles and negative (on the input). The output impedance is essentially zero for negative half-cycles but is the sum of the two resistors for positive half-cycles.
In your second example, the output impedance of the rectifier is 200k. Work out what the voltage drop is with your 10M scope probe and you will most likely find the value that you calculate matches what you are measuring.
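The suggested calculation, worked through with the numbers above:

```python
# Loading of the 200k output impedance by a 10M oscilloscope probe.
Zout = 200e3
Zprobe = 10e6

gain = Zprobe / (Zprobe + Zout)   # voltage divider seen on positive half-cycles
print(round(gain, 3))             # about 2% droop
```

So on positive half-cycles the probe reads about 98% of the true value, while negative half-cycles (near-zero output impedance) read essentially 100%.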
H: measuring output impedance of op-amp
I've just taken for granted the fact that the input impedance of an op-amp is high and the output impedance is low.
I was reading "The Art of Electronics" lab book, and there is an exercise that asks you to measure the output impedance of an op-amp by connecting the negative input to the output (configuration #1) and then after the 1k resistor (configuration #2), both with and without a 1k load resistor.
It isn't clear to me how it allows them to measure the output impedance of the op-amp.
The only equation I could write down was for the case with the load resistor in place and the negative input connected to the output of the op-amp:
\$\displaystyle V_{out} = V_{in} \frac{R_{load}}{R_{load}+1k \Omega + Z_{out}} \$
I thought this would give you the output impedance of the op-amp.
I wasn't sure what the rest of the steps are for.
From(Learning the Art of Electronics: A Hands-On Lab Course)
AI: The exercise is as follows.
In configuration feedback #1
You measure V_out without R_load, with some high-impedance voltmeter. Say you measure 5V.
Then you attach R_load. You get an output of 2.5V. You deduce the output impedance is 1k (which is expected, because you have a 1k resistor in series with the load).
Then you switch to feedback #2
You measure V_out without R_load, with some high-impedance voltmeter. Say you measure 5V again.
Then you attach R_load. What do you get? Well, you get 5V. You are forced to conclude that the output impedance is 0.
Moreover, for any other load -- 10k, 500R, 2k -- you'll get 5V. Agree?
So, what happened? Feedback is producing the low output impedance. You can think of the op-amp as compensating for the drop across the 1k resistor (by raising its output voltage).
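The deduction in both cases is the same Thevenin divider algebra, which can be written as a small helper (numbers from the walkthrough above):

```python
# Output impedance deduced from open-circuit vs. loaded output voltage
# (numbers from the walkthrough above).
def zout(v_open, v_loaded, r_load):
    # Model the output as a Thevenin source: v_loaded = v_open * R / (R + Z)
    return r_load * (v_open / v_loaded - 1)

z1 = zout(5.0, 2.5, 1e3)   # feedback #1: the series 1k shows up
z2 = zout(5.0, 5.0, 1e3)   # feedback #2: feedback hides the 1k entirely
print(z1, z2)
```

This is the whole point of the exercise: moving the feedback tap past the 1k resistor puts that resistor inside the loop, and the measured output impedance collapses toward zero.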
H: Single MOSFET motor driver with oscillation on output
I'm trying to drive a small DC motor with a PWM with the circuit below, where I'm using a TIL111 optocoupler and a IRF510N MOSFET. The PWM (50% duty cycle) is generated by a MCU STM32F103C8T6, the +3.3V source comes from this MCU and the +5V source is another one.
The motor runs, but the waveform across the motor looks like the one below (sorry for my oscilloscope). The image isn't clear, but what happens is that this output has some waves (resembling coil-discharge waveforms) when it should be off, i.e. 0 V (the ON semi-period is working fine). Since I have a +5 V source and a 50% duty PWM, I should measure 2.5 V across the motor, but with this signal I'm measuring 3.55 V.
If I remove the motor and put some 120 ohm resistor in its place, I can visualize the correct waveform below. I also verified the waveform over the opto and over the MOSFET's gate, and it's working fine (with or without the motor).
I'm using the dummy DC motor below, but I have tested with two other dummy motors with the same result. Actually, I tried to drive these motors with a TIP transistor before, using a similar circuit; that time, I changed pretty much all the components (TIPs, resistors, diodes, motors, protoboards) and still ended up with this same result. If someone could give me a tip about why I'm seeing this behavior, I would be grateful.
AI: It would be more illuminating if you triggered your oscilloscope off of the FET gate -- that should be a cleaner signal.
What's happening is that the circuit will push current into the motor, but it won't pull it out. So when the FET is on, if the motor is spinning any slower than its no-load speed with 5 V applied, the motor will speed up. When the FET is off, the motor will coast along, generating a voltage proportional to its speed. Because the system is lossy, the motor isn't spinning quite fast enough to generate 5 V -- but it's pretty close. You're reading the average of 5 V and whatever the motor generates as it coasts.
H: How are electric under blankets fail safe?
Electric under blankets are mains powered and yet you lie on them. 240V in the UK. They appear to effectively be a single resistive heating element. Clearly they're not earthed as that would be difficult, and I can't see how double insulation could apply.
It seems to me that if the element breaks, it could poke through the material and cause a shock hazard to the sleeper. And what if my old dog wees on it, or bites it? If they were a danger, they wouldn't be sold. So how are they made fail safe?
AI: Since 2001, all electric blankets have had a safety mechanism that kills power if it detects that the element is broken. Many now operate at 12 or 24 VDC as well, so there is less danger of shock if something goes awry; the control unit is also a DC power supply.
If your dog wees on it and the element is not already damaged, nothing will happen; the heating-element wire is insulated. But you can't really wash it, so don't let that dog on the bed!
H: Driving power + USB over cartridge connector
I hack old hardware (mostly microcomputers) by putting new insides in them, but I still consider myself a newbie when it comes to electrical engineering, so please bear with me, because this question might be painfully obvious.
On my next project, I want to put a Raspberry Pi into a dead TI-99/4a computer. I don't want to modify the case, but I would like to use its features. So I am thinking of 3d printing a "cartridge" that would house the pi and a trackball/touchpad and plugging that into the cartridge slot on the TI.
I want to plug in that 3d printed "cartridge" into a matching socket and run a USB and power (5V 2.5A) on the cartridge traces. My plan is to hijack the cartridge and socket by soldering on wires to translate them back into USB and power plug which will run inside the case.
My question is - is that safe? Can I run whatever over a cartridge connector the same way I could run it over a wire?
Thanks in advance.
AI: Oh, it's "safe" enough, but the problem will be maintaining adequate signal integrity for the high-speed USB data lines. The impedance requirements are very stringent.
For the power, you'll want to use multiple cartridge contacts for both the +5V and the ground connections.
Where do you plan to route the video? HDMI is another high-speed interface that has strict requirements.
H: Straightening the pins of a resettable polyfuse
I have resettable polyfuses like the picture below.
I am working on some PCB boards where I solder the wires myself. Because it is very hard and cumbersome to solder very small wires, I use the leads of components to make a path to a nearby component, avoiding small jumper wires or solder bridges.
However, I am wondering about the specific shape of the leads of polyfuses. Why do they have the bend in the leads (unlike resistors, capacitors, and other components)?
Is it that the fuses need to 'stand off' a bit from the PCB, or can I straighten the leads to push the orange part very close to the PCB, to get some extra lead length if needed?
AI: These components are thermally sensitive. Like any thermally triggered device that relies on Joule dissipation and the corresponding rise of body temperature, they need certain ambient conditions and thermally conductive paths to maintain a certain heat-flow balance. Obviously, encapsulating these devices in a different thermally conductive environment will change their guaranteed parameters. The bends are there for easy assembly, to assure that the device has the thermal environment for which it was specified.
Personally, I think it doesn't matter much how you mount them; their turn-off parameters are so imprecise that small differences in thermal resistance to the PCB do not matter much. With shorter leads you will have a somewhat higher turn-off threshold.
H: Rectifier, Flyback vs Transformer, Rectifier, Buck
I've just finished doing my first off-line flyback, which proved a success. A simple block diagram is as follows:
I know this topology is common now; for example, it is used in Apple USB charger wall bricks (that video discusses cheap knock-offs, but the principle is still the same).
The problem is, flybacks are very complex beasts to tame. So I started thinking why isn't the following topology just used instead more commonly:
For the record, I have never designed a power supply using this method (particularly, using a high-voltage buck). But on paper I can't seem to see any problem with it.
Is there any reason one might be preferred over the other?
Both provide the same level of isolation, and depending on the parts used, potentially the same efficiency.
From the block diagram you could say the first approach has "fewer steps", but that doesn't necessarily mean fewer parts, lower cost, or an easier design by any means.
AI: The main idea of switch-mode mains supplies is getting rid of huge, heavy, costly 50 Hz transformers.
The size of a transformer is mainly set by its iron core, and that size is controlled by the amount of magnetic flux you have to run through it. The core needs to "eat" all the flux (volt-seconds) you are stuffing into it during one half-period of the AC on the primary side, before you demagnetize and counter-magnetize it again in the second half-period. The lower the frequency, the longer each half-wave lasts, and the more flux there is.
If the core cannot "eat" all this flux because it's too small, the iron saturates and "vanishes" magnetically for the excess flux. You can see this as enormous peaks of primary current if you, e.g., run a 120 V-primary transformer at 240 V, or a tightly sized 60 Hz transformer at 50 Hz. The transformer then overheats, even if no secondary current is drawn.
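This is why switching supplies win on size: the volt-seconds the core must absorb per half-cycle scale inversely with frequency. A rough illustrative comparison (real core sizing also depends on turns and core material, so treat this as a sketch):

```python
# Volt-seconds per half-cycle that the core must absorb, V / (2*f).
# Illustrative only; real designs also depend on turns and core material.
V = 230.0

def volt_seconds(v, f_hz):
    return v / (2 * f_hz)

vs_mains = volt_seconds(V, 50)      # 50 Hz mains transformer
vs_smps = volt_seconds(V, 100e3)    # 100 kHz flyback switching (assumed)
print(round(vs_mains / vs_smps))
```

A factor of ~2000 in volt-seconds is why a flyback transformer for the same power fits in a wall brick.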
H: STM32F103C8T6 3.3v pin max output
I'm using an STM32F103C8T6 with an Si4432 as a small radio beacon. It's working fine, but the range is much shorter than it's supposed to be. The Si4432 requires up to 85 mA for a 20 dBm transmit, but how much can the STM32F103C8T6 board's 3.3 V pin supply at maximum? I can't find that info in the datasheet, and it's not a GPIO pin anyway.
To be more specific, I power the STM32F103C8T6 from the blue circle and the Si4432 from the green circle.
AI: The 3V3 pin on your dev board is not an actual MCU pin but the output of an onboard 3.3 V linear regulator. Also, both your green and blue pins are internally connected, so the max current you can source out of that pin is the max current your power source (and that regulator) can provide.
I would recommend removing the onboard LDO if you are providing the 3.3V externally.
You can check out the schematic at the stm32duino wiki for the Blue Pill:
https://wiki.stm32duino.com/index.php?title=Blue_Pill
H: Simplification Made to Gain Equation
I was watching one of Razavi's videos
https://youtu.be/pK2elUcXWzs?list=PLiDoPUX9nLkIw9EnIv_3K19wlcyJ6msYd&t=2309
Skip to 38:30
He did this:
He justified this approximation by saying that \$g_{m1}r_{o1}\$ and \$g_{m1}r_{o2} \gg 1\$. But how does that get rid of the extra \$+r_{o1}\$ at the end? Why is it not there in the end?
AI: Since \$g_{m1}r_{o1} \gg 1\$, we can neglect the 1 inside bracket. Then,
$$R=g_{m1}r_{o2}r_{o1}+r_{o1}=(g_{m1}r_{o2}+1) r_{o1}$$
Since \$g_{m1}r_{o2} \gg 1\$,
$$R= g_{m1}r_{o2}r_{o1}$$ |
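A quick numeric check (with illustrative values I'm assuming, not Razavi's) shows how little the dropped \$r_{o1}\$ term matters once \$g_m r_o \gg 1\$:

```python
# Illustrative values (assumptions): gm*ro on the order of a few hundred
gm1 = 5e-3    # transconductance, S
ro1 = 100e3   # output resistance of the input device, ohms
ro2 = 100e3   # output resistance of the cascode device, ohms

R_exact  = gm1 * ro2 * ro1 + ro1   # = (gm1*ro2 + 1) * ro1
R_approx = gm1 * ro2 * ro1         # after dropping the "+1"
rel_err = (R_exact - R_approx) / R_exact
print(rel_err)  # ~0.002, i.e. about 0.2% error
```

The relative error is exactly \$1/(g_{m1}r_{o2}+1)\$, so the larger the intrinsic gain, the safer the approximation.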
H: How do I specify a low RFI LED driver?
I bought a bunch of very cheap 18 W LED lamps. I love them; they look great and the light is excellent. However, the drivers throw out a lot of RFI and are interfering with my radio (quite bad at 50 MHz, but also significant from 25 MHz-150 MHz). The interference is being caused by the driver (I can outline how this was diagnosed if helpful; the radiation all seems to be coming from the transformer, though it decreases a little when I add common mode chokes to the AC input and DC output wires). This has to get fixed.
I have been reading many of the related questions regarding LED drivers and have learned a lot. Am experimenting with capacitors and ferrite to reduce the RFI. Ultimately, I suspect I will end up replacing the driver. How do I know the new driver will be any better than the current one? What should I look for? Are there any certifications that matter? There are plenty to select from at the usual electronics component stores.
In case it helps, I think this is the driver with the RFI condition: https://www.dhgate.com/store/product/led-driver-input-ac85v-265v-output-36-63v/259582128.html The resemblance is uncanny; the specifications are the same, but my driver's PCB has the DC output wires slightly closer together.
AI: EMI is a major problem with LED drivers, especially in automobiles.
And there are many methods to minimize.
It's very likely a combination of things.
The electrical design, the PCB layout, and no enclosure.
Discontinuous currents are the most likely to generate EMI.
I do not see an inductor on the output, so the output current can't be pretty.
The constant current (CC) circuit monitors the output current and regulates it by turning the output off and on very quickly (switching), typically at a frequency between 100 kHz and 1 MHz. How long the output is on and how long it is off (the duty cycle) is how the amount of current is regulated.
A CC driver should then smooth out the on/off pulses, which is typically done with an inductor whose inductance is tuned to the switching frequency. High-frequency switching allows the use of smaller, less expensive inductors, and vice versa.
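To make the smoothing role concrete, here is a rough peak-to-peak ripple estimate for a buck-type CC stage; every value below is an assumption for illustration, not this driver's actual parts:

```python
# Buck-stage inductor ripple estimate (assumed example values)
Vin  = 24.0    # input voltage, V
Vout = 12.0    # LED string voltage, V
f    = 500e3   # switching frequency, Hz
L    = 47e-6   # inductor, H

D  = Vout / Vin                    # duty cycle of a buck converter
dI = (Vin - Vout) * D / (L * f)    # peak-to-peak inductor current ripple, A
print(dI)  # ~0.26 A peak-to-peak
```

Remove the inductor (L → 0) and the "ripple" becomes the full switched current — which is exactly the discontinuous, EMI-rich waveform described above.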
This image shows the on and off pulses (LX) that are regulating the current.
The ILED shows the pulses after they go through the inductor and output capacitors.
This is an example of continuous conduction.
Meaning the current through the LEDs never goes to zero.
This is an example of ILED in discontinuous conduction mode.
Source: Understanding Buck Power Stages in Switchmode Power Supplies
In the schematic below you can see the LX pin on the regulator and the inductor highlighted in blue (the part yours does not have).
It's the LX pin that turns the current off and on, and the inductor that smooths it out.
The parts highlighted in yellow are not part of the constant current stage. They are added to minimize conductive EMI output to the power source.
The upper right hand is the "input EMI" filter to minimize conductive EMI back to the power source (input).
A 1µF cap was put across the output to the string of LEDs.
This cap and a ferrite bead was added to minimize the radiated EMI.
This was added to meet EN 55022:2010 Information technology equipment– Radio disturbance characteristics– Limits and methods of measurement
You need to use a battery powered radio and an electric powered radio to determine if the source is conductive EMI from the power wires and or from radiated EMI (airwaves).
The PCB design is very important for minimizing radiated EMI.
The tracks on the PCB are antennas.
For example from the above schematic D1 and L1 should be near the LX pin, and CVCC should be near the VCC pin, and the connecting copper traces should be short and thick.
I always have two PCB layers for ground and power. Even if the PCB layout could be done on two layers, I use a four-layer board with the two inner layers for power and ground. If I can do a single-sided layout and do not expect huge EMI issues, I will use the one layer for ground.
The enclosure
When I design a product I enclose it in an aluminum box where the bottom and top overlap and leaves no gaps (almost air tight).
Any connectors are EMI shielded. If AC mains is used for power a EMI filter is added to the power line.
Recommendation
If you want it done right, the driver used by the high end LED fixtures is the Mean Well HLG line.
Notice the hermetically sealed aluminum enclosure.
Has powerline EMI filter.
Has power factor correction.
Is priced between 25¢ and 75¢ per watt. 185W-240W=>25¢, 40W=>75¢, 60-100W=>50¢
Up to 94% wall watt efficiency
7 year warranty
Does not interfere with your radio.
Mean Well is the only vendor I use for CC or CV power supplies. |
H: Placement of feedback line on DC DC converter, before or after cap? (and dropout problem with LT8390)
I am using an LT8390 DC to DC converter, I have a highly inductive load and it is causing dips. The dips are shown here (and about a 6V drop for 7ms):
I need to get rid of the dips, so I will increase the output capacitance from a 100 uF electrolytic to 1000 uF.
The layout is such that I have moved the feedback line after the filter caps, thinking that this would keep the line more steady at the point I wanted it controlled at. I'm wondering if this could be part of my problem.
From a control stand point, where do I want to place the feedback line entry point in the circuit?
Pictured below are the schematics, is A or B better?
Entry point A with feedback line before filter capacitors:
Entry point B with feedback line after filter capacitors:
Edit: here is the section of the board that has the feedback trace. I have run it directly to the output of the board, instead of before the output cap.
AI: From their reference schematics
it is clearly seen that the feedback wire (R18) comes from the main filter cap C20, and not from the point of load. It is actually in between two caps, C19 and C20.
Moreover, your entire layout bears no resemblance to the suggested demo board:
All high-current loops in your layout have skinny traces instead of being solid rectangle of copper. I am afraid this is where your output ripple problem comes from. |
H: Replacing burnt traces
I have a tiny PCB that I've been working on and off of for a while now. I received it with leaking batteries, and it had corroded some of the traces off the board. Most recently, I tried redrawing the burnt traces with conductive paint, but to no avail. I actually popped batteries in it last night and had activity for the first time ever, though it was short. I then scraped off the paint and decided it might be time to try a jumper wire.
My question is: there used to be a trace pad that completely came off the board, do I need to worry about that? Is it possible to skip the pad entirely and make a bridge between the two sections of trace?
The missing pad
The two points I would bridge
I'm familiar with soldering, but not with specifically making a jumper wire. Mostly I'm trying to find the right kind of wire to do it with and where to purchase it.
AI: You can use a thin single-core wire (which you can easily form into any shape) and solder it directly to the SMT resistor where the trace begins. Strip the wire slightly, make a loop where you want the pad, and complete the trace by soldering it to the via on the right. Choose a wire thickness that you can insert into the via. Not sure about your soldering skills, but it shouldn't be too difficult.
H: Need help explaining behaviour of a circuit
just can't wrap my head around this one for some reason. I am sorry if it's easy for you to see. Just confused on the duality of things in circuits, Is it a filter? Or is it a peak detector? How do you know which topology to apply?
I am only talking about the positive cycle of the circuit.
Essentially the current runs from R1 -> R2 -> C1 and R3 -> GND
This cycle charges the capacitor, however we could simplify the circuit knowing which path current is taking.
As D1 is R.B And D2 is F.B and The Op-Amp isn't doing anything.
simulate this circuit – Schematic created using CircuitLab
Which becomes this. My question is: isn't this a low-pass filter? If it is, why isn't the output signal attenuated? Why does the input voltage appear at the output?
simulate this circuit
AI: Just my opinion: it's a pretty bad implementation of a full-wave rectifier followed by a poor implementation of a (sort of) peak detector.
It's a bad implementation of a full-wave rectifier because the gain of the input positive peaks is significantly less than the gain of the input negative peaks.
It's a poor implementation of a peak detector because of the forward voltage drop of diode D2 and the very short time constant of the output filter and load resistor (C1, R3).
You can do much better with some minor circuit changes. |
H: Optimizing magnetic field force of a coil
For a personal project, I've done a redesign of Samy Kamkar's MagSpoof project. Basically, it is possible to use an electromagnet, h-bridge, and microcontroller to trick a magnetic card reader into thinking a card was swiped, and feed it arbitrary data. I've designed and soldered up a small PCB that has USB charging, Li-ion pack, and a smaller h-bridge driver (DRV8835 - no fun to solder) than the original.
It works beautifully when running off 5 V ISP power, with a ~100-turn coil, 30 mm diameter, 30 AWG wire. The problem is, when I run it off the lithium-ion battery (~3.7 V, connected directly to the h-bridge driver through a switch), the magnetic field doesn't have enough power to transmit meaningful data to the card reader - the reader simply spits out an error message, indicating it saw something but didn't catch any data. With 5 V power and the coil configuration described above, I can hold it more than a couple of inches away and still get reliable transmission.
I have since added a boost converter to the board to bump up my voltage to 5V. However, I would like to determine the coil design that will produce the strongest magnetic field.
How do I go about determining this? As I understand it, adding turns will increase field strength, but adding turns will also increase the coil's resistance and therefore reduce current. Therefore, there is an optimum number of turns somewhere in the middle.
How would this value be determined?
AI: As mentioned in another answer, the magnetic field strength is proportional to amps and turns. However, the current is dictated by drive voltage and coil inductive reactance. Given that inductance (for a tightly wound coil) increases with turns squared it's usually counter-productive to add turns. A simple example is doubling the turns - the inductance rises 4 times and the current therefore falls 4 times and the net effect is that ampere-turns have reduced by two.
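The doubling example can be sketched numerically; the drive voltage, frequency, and baseline inductance below are illustrative assumptions only:

```python
import math

# For a voltage-driven coil whose impedance is dominated by inductive
# reactance, L ~ N^2, so I ~ 1/N^2 and the ampere-turns N*I ~ 1/N.
# All numbers below are illustrative assumptions, not measured values.
V = 5.0          # drive voltage
f = 10e3         # drive frequency, Hz
L_base = 1e-3    # inductance of the baseline coil, H

def ampere_turns(n_ratio):
    """Relative ampere-turns when the turn count is scaled by n_ratio."""
    L = L_base * n_ratio ** 2          # inductance rises with turns squared
    I = V / (2 * math.pi * f * L)      # current set by the reactance
    return I * n_ratio                 # ampere-turns, relative units
```

Evaluating `ampere_turns(2.0)` gives exactly half of `ampere_turns(1.0)`: doubling the turns halves the field for a fixed drive voltage.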
As I understand it, adding turns will increase field strength, but
adding turns will increase the coil's resistance and therefore reduce
current.
If your coil design is at the point of causing worries about coil resistance then it's not a good design - the dominant impedance should be reactive as implied above.
There is always an optimal value but this depends on several factors that you haven't really told us about: -
Oscillator design
Tuning regime
Current available for the oscillator/coil
Coil design/shape etc.. |
H: Debouncing IC input resistor?
I've recently been working on a breadboard-compatible compact debouncing PCB. I found the MAX6816, a debouncing IC - check the datasheet here.
I made the below simple circuit as per the "Typical Operating Circuit" on the data sheet.
The datasheet does not show a resistor between ground and the button here:
Should I add one to my circuit or would the IC include one?
Thanks for your help in advance.
AI: The IC has an internal pullup resistor. Like most resistors in ICs it is loosely specified (a 3:1 range) and (because the manufacturer wants to maximize the market) rather on the high side.
So the minimum switch current with a 5V supply is 50uA. Some switches would be more reliable with a higher current (often specified on the switch datasheet as minimum current) and if that is so you can add an external pullup resistor, so the effective pullup will be the parallel combination of the internal and external resistors. |
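As a small worked example of that parallel combination (the 100 kΩ worst-case internal value and the 10 kΩ external resistor are illustrative assumptions, not datasheet figures):

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

# Assumed example: 100 kOhm worst-case internal pullup in parallel
# with an external 10 kOhm resistor
r_eff = parallel(100e3, 10e3)    # ~9.1 kOhm effective pullup
i_switch = 5.0 / r_eff           # current through the closed switch at 5 V
print(r_eff, i_switch)
```

With these values the closed-switch current rises from 50 µA to roughly 550 µA, which is comfortably above the minimum wetting current of most small switches.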
H: What is the transfer function of an ideal buffer in time domain?
Convolution is defined as the integral of the product of two functions after one is reversed and shifted. In a system whose transfer function is g and whose input is f, the output is the convolution of these two functions.
And if a signal is buffered, an ideal buffer should output the input exactly the same.
Lets say the input is f(t) and the transfer function of the buffer is g(t), so convolution of f and g should yield f as the output.
Can we then say the ideal buffer is a Dirac impulse at origin as a function of time?
AI: You can say that the impulse response of an ideal buffer is a Dirac pulse. Which is actually quite obvious, because the impulse response has to return the impulse response again in its entirety.
Other variations on the matter:
$$\begin{align}
y(t) &= u(t)\\
&\Updownarrow \mathcal{L}\{\}\\
Y(s) &= U(s)\\
&\Downarrow\\
G(s) &= \frac{Y(s)}{U(s)}=1
\end{align}$$
If you want to express this linear function in the time domain, then you would indeed get
$$\begin{align}
G(s) &= 1\\
&\Updownarrow \mathcal{L}^{-1}\{\}\\
g(t) &= \delta(t)
\end{align}$$
Which has the property:
$$f(t) * g(t) = f(t)$$ |
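The same identity is easy to verify in discrete time, where the Kronecker delta plays the role of the Dirac impulse:

```python
import numpy as np

# Convolving any sequence with the unit impulse (the discrete-time
# analogue of the Dirac delta) returns the sequence unchanged.
x = np.array([1.0, 2.0, 3.0, 0.5])
delta = np.array([1.0])          # unit impulse at n = 0
y = np.convolve(x, delta)
print(np.allclose(y, x))         # True
```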
H: Why are the resistors in an AM modulator necessary?
In preparation for the amateur radio exam, I have been following the preparation course material by DJ4UF. In the section about modulation it presents this diagram of a simplified AM modulator.
The low-frequency signal (NF) and the high-frequency carrier wave (HF) are added, the diode cuts off one half-wave, and the resonant circuit "recreates" the previously cut-off half-wave. The result is an amplitude-modulated signal. So far so good.
What is the purpose of the 47k resistors? The explanatory text mentions that they are necessary to "add the currents in the diode". What would change if we remove them and just directly connect them to the diode?
AI: The concept here is that you are adding the signal currents before feeding the sum to the diode; the resistors are there to convert the voltage sources "NF" and "HF" to current sources.
You'd get exactly the same effect by adding the signal voltages directly — simply connect the "NF" and "HF" boxes in series, without any resistors. The only downside to this is that "NF" and "HF" can't share a common ground, and that's often a desirable feature of a practical system. But some AM transmitters isolate the NF signal with a transformer, which solves that problem.
Note that the circuit as given is not at all practical — you would not want to feed a parallel-tuned circuit, which has a high impedance at resonance, from a current source. Instead, you would use a series-tuned circuit that keeps the diode cathode close to ground potential. |
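The sum-then-clip step can be illustrated numerically; the 440 Hz audio tone and 10 kHz carrier below are arbitrary choices for the sketch, not values from the course material:

```python
import numpy as np

# Numeric sketch of the "add, then let the diode clip" step
fs = 200_000                                # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)
nf = 0.5 * np.sin(2 * np.pi * 440 * t)      # audio signal (NF)
hf = np.sin(2 * np.pi * 10_000 * t)         # carrier (HF)
s = np.maximum(nf + hf, 0.0)                # diode removes the negative half-wave
```

A tank circuit tuned to the carrier would then restore the symmetric AM envelope from these half-wave pulses.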
H: Frequency-to-Voltage convertor diagram questions
I'm relatively new to electronics, but I'm interested in making the Frequency-to-Voltage convertor linked here as part of a larger project linked here.
Here is a diagram of the circuit I want to make:
(Image source: Math Encounters Blog)
Whilst the diagram seems clear enough for me in most places, there were a few things that I would really appreciate if someone could clarify for me:
What is the .tran 10 transistor referred to at the bottom of the diagram?
In the "Larger Project" I linked above, I assume that the positive and negative of the audio signal would go correspondingly to the positive and negative of the "Sensor Interface Output" part of the diagram; is this correct?
I assume that the "Supply Voltage" is, as it hints, a constant voltage needed to supply the circuit. If so, what would this voltage be? 5V? 3.3V?
I would be very grateful if someone could give me some guidance regarding the 3 questions I posed above, as I am new to this sort of electronics.
Thank you in advance.
AI: 1- The diagram you used was drawn in simulation software. You need to specify a ground reference node in most simulators; that means you need to tell the software which node has zero potential or voltage. The triangle you see at the bottom of the diagram shows that the node it is connected to has zero potential. It is not a transistor or any other component. .tran 10, as Felthry said in the comments, is a way of telling the software to perform a transient simulation for 10 seconds. You don't need to worry about that either.
2- Basically, when the sensor's output is greater than 12.7 volts, the diode and transistor conduct and the output voltage gradually rises toward 12 V. When the sensor's output is lower than that, the transistor turns off and the capacitor at the output gradually discharges toward zero. I guess the gradual rate of change of the output plays an important role in this application. This rate is determined by the values of the resistor and the capacitor at the output. I'm not sure exactly what you are asking, but I hope this answer helps you understand what happens in the circuit and figure it out by yourself.
3- I believe you got that; it's 12 V.
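The "gradual rate" in point 2 is governed by the output RC time constant; a minimal sketch with placeholder R and C values (not read from the diagram):

```python
import math

# RC discharge sketch for the output stage (R and C are assumptions)
R = 10e3          # output resistor, ohms
C = 10e-6         # output capacitor, farads
tau = R * C       # time constant: 0.1 s with these values
v0 = 12.0
v_after_tau = v0 * math.exp(-1.0)   # ~37% of v0 remains after one tau
print(tau, v_after_tau)
```

Scaling either R or C scales the time constant proportionally, which is how you would tune how fast the output tracks the input frequency.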
H: How these two diodes used in the ciruit protect from the voltage spikes?
I'm playing around with a simple DIY project, and after about three days of googling and reading stuff I still can't wrap my brain around two Schottkys below:
From the project page:
The two Schottky barrier diodes act as clamps ensuring that the spikes
generated by the coil do not exceed the working limits of the PIC
processor.
I found a few articles on the internet in which diodes are shown connected like this. For example, there's a good one (Protecting Inputs in Digital Electronics) on DigiKey:
In my case the voltage on cathode normally never goes below zero. The spikes produced by the coil are (way) above zero volts too. (Though, I cannot prove nor deny the last statement as I don't have access to an oscilloscope.) So the diode should always be reverse biased.
The only way I can think the diode would be useful here is when the breakdown voltage is reached (which is around 30 volts for 1N5819) effectively shorting the coil to GND. And since the coil discharges quickly, the spikes are non-repetitive and will not cause damage to the diode, provided the current is small (R2 and R4 keep it down to 32 milliamps tops, if my calculations are correct).
In another project which is just a port of the original one, 1N4001 is used, which has a slightly higher breakdown voltage of 50 volts compared to Schottky.
Just before I was about to post my question, I was reading about breakdown voltage and happened upon information about avalanche diodes (the article Diode on Wikipedia):
At very large reverse bias, beyond the peak inverse voltage or PIV, a
process called reverse breakdown occurs that causes a large increase
in current (i.e., a large number of electrons and holes are created
at, and move away from the p–n junction) that usually damages the
device permanently. The avalanche diode is deliberately designed for
use in that manner.
So my questions are:
Can both diodes be used interchangeably in this particular project?
What are they used for in the circuit?
It seems that those are not avalanche diodes. Can these diodes still be used instead of avalanche diodes as long as the spikes duration is short and their current is small?
From the DigiKey article: An important aspect of microcontroller inputs (and the vast majority of any logic ICs) that was left out of the simple model shown in Figure 3 is that they have internal protection diodes that are used to protect the inputs, as shown in Figure 5. These normally forward bias at 0.7 V. According to the datasheets, both PIC12F683 and ATtiny25/ATtiny45/ATtiny85 have protection diodes to both VDD and VSS. Can we make do with just internal diodes then?
PIC datasheet in section 8.2 warns: A simplified circuit for an analog input is shown in Figure 8-3. Since the analog input pins share their connection with a digital input, they have reverse biased ESD protection diodes to VDD and VSS. The analog input, therefore, must be between VSS and VDD. If the input voltage deviates from this range by more than 0.6V in either direction, one of the diodes is forward biased and a latch-up may occur. Will D1, D2 help to prevent a latch-up from occurring? I suspect that internal diodes would end up forward biased way sooner than D1, D2 breakdown voltage is reached and they kick in.
I would appreciate any input.
AI: If the IO is sequenced correctly, that protection scheme should work. Basically, if the "inactive" pin is held at ground, and the "active" pin is pulsed high and then either released to high impedance or just pulsed high-low, the coil is going to try to pull the "active" pin below ground -- and the Schottky diode will do its job.
Since it's a DIY project, you can always change the protection scheme to please yourself. I'd probably use a simplified version of the DigiKey schematic, with the Schottky pair at the micro pin, then a series resistor to the coil. If you really want to be paranoid, use four transistors off-board of the micro and drive them with four pins, sequenced to make a nice H-bridge.
H: DRAM in a FPGA - Image Sensor design
I am designing the simplest form of a camera: an FPGA that interfaces an image sensor and sends the acquired data to the host device via USB 3.0. There is no processing on the FPGA, it is used for sensor control and data acquisition.
By studying various resources online, I observed that typically a DRAM is included in the design to store sequential frames as well as intermediate processing results. At first, my plan was to implement:
Image Sensor ==> FPGA ==> DRAM (Acquisition)
DRAM==>FPGA==>USB Controller (Transfer to host)
Since no image processing is done, I was wondering if it is safe to omit the DRAM from the design to reduce complexity. The pixel data would then go directly from the FPGA to the USB controller every cycle, with a small internal RAM implemented as a tiny pixel buffer in case there is a temporary stall from the USB controller.
Is that feasible, or will I have many frame losses due to transfer bottlenecks?
AI: The RAM has a buffer function. This way you can send data in bursts instead of trying to synchronize your incoming and outgoing data streams. Depending on the protocol you might have significant overhead, so you want as much data as possible packed into a single frame. Using a RAM buffer you can check whether data has been stored and then build your frame, and the camera doesn't directly rely on a ready USB connection.
So technically you can omit the RAM but I'd at least implement a fifo-buffer (ringbuffer) in the FPGA. |
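A quick feasibility check tells you whether direct streaming is even close; every figure below is an assumption to be replaced with your sensor's actual resolution, frame rate, bit depth, and your measured USB throughput:

```python
# Back-of-envelope throughput check (all figures are assumptions)
width, height, fps, bits_per_pixel = 1920, 1080, 60, 12
sensor_bytes_per_s = width * height * fps * bits_per_pixel // 8
usb3_effective = 350_000_000   # conservative effective USB 3.0 bulk rate, B/s
headroom = usb3_effective / sensor_bytes_per_s
print(sensor_bytes_per_s, headroom)
```

With these example numbers the sensor produces about 187 MB/s, leaving under 2x headroom — workable, but a shallow on-chip FIFO then has to ride out every USB stall, which is why the external buffer is so common.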
H: Design emission (harmonics)
I attached an image of the ESP32 datasheet where it says to add a series 470 Ω resistor to the TX transmission line. I wonder which line it is talking about - what are the pins?
And why that specific value?
here Is the link to the ESP32 datasheet https://www.espressif.com/sites/default/files/documentation/ESP32_FAQs__EN.pdf
AI: If you know anything about impedance dividers and the transmission line's inductance, capacitance and thus Z0, this is a low-Q LPF, easily simulated. For, say, 220 Ω UTP: approx. 1-2 µH/m and ~35 pF/m, depending on the cable.
Do you know that Zc(f) = 1/(2πfC)? When the series R = Zc(f) the divider is at half power, and when R = 9·Zc(f) the output is Zc/(Zc+R) = 1/10th of the input voltage, i.e. 20 dB down, or -20 dB.
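Treating the series resistor and the cable capacitance as a simple divider, the high-frequency attenuation can be estimated like this (the 2 m of 35 pF/m cable is an assumed example, not a figure from the datasheet):

```python
import math

def divider_attenuation(R, C, f):
    """Voltage ratio of a series-R, shunt-C divider at frequency f
    (magnitude roughly approximated by treating Zc as a resistance)."""
    Zc = 1 / (2 * math.pi * f * C)   # capacitor reactance magnitude
    return Zc / (Zc + R)

# Example: 470-ohm series resistor into ~70 pF of assumed cable
# capacitance, evaluated at an 80 MHz harmonic
print(divider_attenuation(470, 70e-12, 80e6))
```

At low (baud-rate) frequencies Zc is huge and the signal passes nearly unattenuated, while the harmonics responsible for radiated emissions are knocked down hard — which is the point of the series resistor.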
H: How does the Zoom ADC exactly zoom in (Practically)?
I am trying to understand the actual working of this specific Zoom ADC (diagram below), which is a 2-step ADC with a coarse (SAR) ADC and a fine (Sigma-Delta) ADC. I understand it theoretically but could not figure out practically how it exactly zooms. I have written down step by step what I understood and where I got stuck (highlighted in bold).
I am referring to the below architecture:
As per my understanding, it operates as follows:
The ADC in the forward path called a coarse ADC samples the input to a few MSB.
Then the output of the coarse ADC (Y coarse) is used to adjust the references of the DAC.
Then the sigma delta loop turns on and then a fine conversion (the remaining bits) are found using the usual Delta Sigma modulation and a decimation.
At a high level, the coarse ADC sets some reference and the fine ADC kind of wiggles a small amount around that reference until it reaches the correct average value.
Conceptually, it looks like this:
I understand the working of a normal single-bit sigma-delta loop, where the feedback DAC outputs a 0 or 1 (VCM or Vref) depending on the output of the comparator. But I could not see how changing this Vref to a different value would achieve the zooming.
If anybody could explain how to think about this zooming process practically, it would be helpful.
AI: The way that diagram indicates it works is that the SAR gets the rough value, ±1 bit, of the coarse value, then subtracts that from the signal to create a remainder. That is then fed to the delta-sigma, whose range spans just 2 bits' worth of the coarse range, and which adds the remaining less significant bits to the total result. The "zoom", I think, is the implication that the second ADC has a much smaller span than the first.
Practically, it might be done by adjusting the upper and lower limits of the second ADC rather than the subtraction, but the end result would be the same. The issue with this arrangement is that the coarse ADC needs linearity far better than its resolution would demand alone for the additional bits of the second ADC to have any precision. |
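An idealized numeric sketch of the two-step idea (bit counts are illustrative, and real zoom ADCs overlap the sub-ranges to tolerate coarse errors, which this sketch omits):

```python
# Idealized two-step conversion: the coarse stage picks a sub-range,
# the fine stage resolves only the residue inside that sub-range.
def zoom_adc(x, coarse_bits=4, fine_bits=8):
    """x in [0, 1); returns an integer code of coarse_bits + fine_bits."""
    scaled = x * (1 << coarse_bits)
    k = int(scaled)                        # coarse result: which sub-range
    residue = scaled - k                   # the part the fine ADC "zooms" into
    fine = int(residue * (1 << fine_bits)) # idealized fine conversion
    return (k << fine_bits) | fine
```

For example, `zoom_adc(0.5)` lands in sub-range 8 with zero residue, giving code 8 << 8 = 2048 — the fine converter only ever has to resolve the small residue, never the full input range.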
H: Analyzing a constant current source
I'm looking at some old chemistry education literature where a constant current source was proposed (J. Chem. Ed. 1969, 46(9) p613, link). I am trying to better understand the circuit and could use some guidance.
simulate this circuit – Schematic created using CircuitLab
I am trying to do a KVL/KCL analysis and am getting stuck. I have the following set of equations, where i1 is the current through the battery, i3 is the current through RS and i5 is the current through RP2. (The remaining symbols are, I think, self-explanatory.)
eqns = {
v - ic rl - vce - ie re == 0,
v - i3 rs - i3 rp1 - i5 (rp - rp1) == 0,
ie == ib + ic,
i1 == ic + i3,
i1 == i5 + ie,
i3 == i5 + ib,
\[Beta] == ic / ib
};
I believe I am missing one more relationship. It is possible, I think, to create one more loop which goes through the base and emitter of the transistor; however, I would have to assume that Vbe is equal to 0.7 V, and I do not think this is correct.
Question #1 What is the recommended set of equations to analyze this circuit?
Question #2 I do not understand the role of the zener diode in this circuit. Removing it from the simulation does not appear to have an effect.
Note:
While I used CircuitLab for the schematic drawing, I've used this simulator for analyzing the circuit. The circuit can be loaded into the simulator by importing the following text:
$ 1 0.000005 10.20027730826997 50 5 50
t 288 224 288 272 0 1 -5.916033570237703 0.6317620685748468 100
w 304 272 352 272 0
r 352 272 352 176 0 100
w 352 128 208 128 0
r 272 272 208 272 0 500
w 208 224 288 224 0
r 208 176 208 128 0 1500
z 144 272 144 176 1 0.805904783 6.1
w 144 176 208 176 0
w 144 272 208 272 0
v 80 272 80 128 0 0 40 9 0 0 0.5
w 80 272 144 272 0
w 80 128 208 128 0
370 352 128 352 176 1 0
p 352 320 272 320 1 0
w 352 320 352 272 0
w 272 320 272 272 0
r 208 176 208 224 0 250
r 208 224 208 272 0 750
38 2 0 1 10000 Load
38 4 0 1 1000 Re
38 6 0 1000 10000 RS
Another Note
I noticed after posting this question and receiving an answer to it that this circuit is only a slight modification from what is shown in Chapter 2.06 of The Art of Electronics (2nd edition). Future readers of this Q&A are referred to that text for more information on current-source biasing and compliance.
AI: Quick analysis of this circuit
First, one must make some assumptions, which stem from the art of electronic engineering.
One then "verifies" afterward that there is no contradiction between the results and the assumptions.
These assumptions are :
The current in the base of the transistor is so low that the voltage at the point common between rp1 and rp2 is the voltage without load.
Vbe = 0.7 Volts
Iemitter = Icollector
Then :
Vbase = VBat · rp2/(rp1 + rp2 + rs) = 2.7 Volts (the Zener has no effect, see my comment)
Vemitter = 2.7 - 0.7 = 2.0 Volts
Iemitter = 2.0/Re = 4 mA
Icollector = 4 mA
Let's see if there is no contradiction with the initial assumptions:
Ibase = Icollector/Beta =
Icollector / (something more than 40, according to the datasheet of the transistor)
< 0.100 mA
Ibase comes from some equivalent resistance which is less than 750 Ohms. This implies that the voltage drop is no more than 750 Ohms * 0.100 mA = 0.075 Volts
0.075 Volts can be considered negligible compared with 0.7 Volts.
Concerning Vbe: this is the voltage that would develop across a diode. This voltage is 0.12 Log[10, Idiode / 10^-10] = 0.7 Volts. Note that this value is very insensitive to Idiode because of the Log.
Iemitter = Icollector is true if Vce > 1 Volt (i.e. the transistor works in its active region). Here we have Vce = VBat - 100 Ω × 4 mA - 2.0 V = 6.6 Volts
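The hand analysis is quick to re-check numerically — here as a Python sketch (not the Mathematica used below) under the same assumptions, Vbe = 0.7 V and base current neglected so ic ≈ ie:

```python
# Numeric re-check of the hand analysis (Vbe = 0.7 V, ib neglected)
VBat, rs, rp1, rp2, re, rl = 9.0, 1500.0, 250.0, 750.0, 500.0, 100.0

Vbase = VBat * rp2 / (rs + rp1 + rp2)   # divider voltage at the base: 2.7 V
Ve = Vbase - 0.7                        # emitter voltage: 2.0 V
Ie = Ve / re                            # emitter (= collector) current: 4 mA
Vce = VBat - Ie * rl - Ve               # collector-emitter voltage: 6.6 V
print(Vbase, Ie * 1e3, Vce)
```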
Solution of the problem with the Wolfram Language (Mathematica)
rp1=250;
rp2=750;
rs=1500;
rl=100;
VBat=9;
re=500;
beta=40;
eqns = {
VBat - ic rl - vce - ie re == 0 ,
VBat - i3 rs - i3 rp1 - i5 rp2 == 0,
ie == ib + ic,
i1 == ic + i3,
(* i1 == i5 + ie, redundant with the line just above *)
i3 == i5 + ib,
beta == ic / ib,
i5 rp2 == ie re + 0.7 (* new *)
};
res00=Solve[eqns,{ic,vce,ie,i3,i5,i1,ib}][[1]];
res00 //Column
solution with a more refined model of the jonction base-emitter (Vbe~0.7Volts)
eqns01 = {
VBat - ic rl - vce - ie re == 0 ,
VBat - i3 rs - i3 rp1 - i5 rp2 == 0,
ie == ib + ic,
i1 == ic + i3,
(* i1 == i5 + ie, redundant with the line just above *)
i3 == i5 + ib,
beta == ic / ib,
i5 rp2 == ie re + 0.12 Log[10,ib / 10^-10] (* a more refined model *)
};
res01=FindRoot[Evaluate @ eqns01,Evaluate[List @@@ res00]]
res01 //Column |
H: TLC5940 with 200 LEDs from car battery
I'm working on project that involves 100 white and 100 RGB LEDs, controlled with 25 TLC5940s.
I've managed to connect three TLCs to an Arduino MEGAv3 and so far, so good.
I've found out that I have to add 0.1 μF capacitors (ceramic, but I can only get film) between TLCs VCC and GND to smooth out voltage drops and 1 μF / 100 μF caps along the positive and negative rails (to not burn down TLCs). I'm using these LEDs: RGB and white.
So, from the TLC5940 basic use example, I calculated that I have to use a 1.5 kΩ resistor on each TLC to provide 26.04 mA per LED channel. So at full-on, all this should pull 26.04 mA × 400 = 10.416 A!
As this is going to be freestanding installation I'm thinking of using a 12 V, 105 Ah car battery (not connected to a car) with this buck converter: DC/DC 12 V-5 V 15 A 75 W
My questions, that I haven't found answers to, are:
How to calculate the resistors for the TLCs? Or are none needed?
How to calculate the resistors for the LEDs? It's stated that I = 39.06/R (in ohms) (I don't know why the numerator is 39.06), so
R = (Vc - Vled)/ILed = (5 - 3.4)/0.03 = 53 Ω?
How to protect the TLCs from any possible overheating that I'm not aware of?
Are my LED current calculations correct? As I measured with a single LED turned on between an Arduino 5 V pin and the breadboard it showed ~36 mA on my multimeter.
I have attached a sketch below. It's only showing two TLCs, but this wiring will continue as shown.
UPDATE:
This is my scheme as it is now on breadboard.
I went with a Mean Well 12 V → 5 V converter after consultations on the Arduino forum. I cannot take the risk of running out of voltage because of voltage drop in the system.
I changed my white LEDs to a more efficient model and added SN74HC04N inverters for signal strength.
I hope schematic is somewhat understandable. It actually works as is and I plan to start soldering this to veroboards soon. There are definitely improvements to be made that I'm not aware of and maybe someone can point out some?
UPDATE 2023:
Final result from 2019 :) Thank you, everyone!
https://www.youtube.com/watch?v=T-OHApAF-7o
https://www.youtube.com/watch?v=QV-e0ULcf7c
AI: For a battery powered project you need to look at all the power you are wasting.
You should be able to do this with a 3.3V power supply.
There are plenty of white LEDs that have a lower forward voltage and much brighter than the one you link. The RGB link did not work.
You can get white LEDs with a luminous intensity of 46,000 mcd.
You could reduce the current by a factor of 8 (2.5 mA) and greatly reduce the load on the battery with the same brightness.
Here are some example of brighter white LEDs: DigiKey white LEDs
At 3.3V you can save an additional 33% for the voltage reduction.
If you reduced the current to 2.5 mA you will lower the forward voltage of the LED this will save a little energy.
You can reduce your battery capacity by more than 10x by reducing the voltage and lowering the current.
This will also keep the TLC5940 from getting too hot.
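The savings above can be sanity-checked with a quick sketch (numbers taken from the question and this answer; the 12 V-to-5 V converter's efficiency is ignored, so treat the factor as approximate):

```python
# Rough battery-load comparison: the question's 26.04 mA/channel at 5 V
# versus the suggested 2.5 mA/channel at 3.3 V, converter losses ignored.
channels = 400

i_orig = 0.02604                    # A per channel
p_orig = channels * i_orig * 5.0    # W drawn from the 5 V rail

i_new = 0.0025                      # A per channel with brighter LEDs
p_new = channels * i_new * 3.3      # W drawn from a 3.3 V rail

reduction = p_orig / p_new
print(f"{p_orig:.1f} W -> {p_new:.1f} W, roughly a {reduction:.0f}x reduction")
```

So the "more than 10x" claim holds with room to spare.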
How to calculate the resistors for the TLCs? Or are none needed?
The TLC5940 provides a constant current, so no resistor is needed.
And how to calculate the resistors for the LEDs?
It's stated I = 39.06/R (in ohms)
(I don't know why the constant is 39.06.)
R = (Vc − Vled)/ILed = (5 − 3.4)/0.03 ≈ 53 Ω?
Not needed.
How to protect the TLCs from any possible overheating that I'm not aware of?
Not likely needed, if heat were a problem you would reduce the current.
Are my LED current calculations correct? As I measured with a single LED
turned on between an Arduino 5 V pin and the breadboard, it showed ~36 mA on
my multimeter. I have attached a sketch below; it's only showing two TLCs,
but this wiring will continue as shown.
You only need one resistor, RIREF, for each TLC5940. The value is calculated using the formula in Section 8.3.7 of the datasheet. The resistor sets the maximum current for all 16 LED channels.
Notice there are no LED resistors to limit the current.
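The RIREF sizing can be sketched as follows; the 31.5 gain constant and the 1.24 V internal reference are the datasheet values behind the question's 39.06 figure:

```python
# TLC5940 IREF resistor sizing (datasheet Section 8.3.7):
# Imax = 31.5 * Viref / Riref, with Viref = 1.24 V,
# which is where the 39.06 constant in the question comes from.
VIREF = 1.24   # V, internal reference
GAIN = 31.5    # datasheet current-gain constant

def riref_for(i_max_amps):
    return GAIN * VIREF / i_max_amps

r = riref_for(0.02604)          # the 26.04 mA from the question
print(f"Riref = {r:.0f} ohms")  # the 1.5 k the asker already uses
```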
RGB
One thing you need to understand is you cannot just look at the mcd rating. You must also consider the view angle.
The mcd is the intensity of the light beam being emitted. The view angle is the size of the beam. The two together is how much light (luminous flux, i.e. lumens) is being emitted.
Your green LED is 14,400 mcd (14.4 candela) @ 30° = 3 lumens
Another green might have only 7200 mcd (7.2 cd) but a view angle of 60°.
How do these compare?
Is yours twice as bright? Yes and no.
Depends upon the angle you view the LED from.
If you compare the amount of light being emitted, the difference is indeed 2x; except it's the 7.2 cd LED emitting twice as much light as the 14.4 cd one.
The 7.2 cd @ 60° = 6 lumens
But if you view the LED straight on at 0° yours is twice as bright.
It's about how the emitted light is optically directed by the shape of the LED.
If viewing the LED from a 30° yours cannot be seen. While at 30° the 7.2 cd LED will be seen at 50% intensity, or equivalent to a 3.6 cd LED.
Your RGB looks very good. The view angle is a little small, but if it's to be viewed straight on, very good.
Below is a graph of your LED's spacial radiation (direction of light).
The arc by the highlighted 0.5 is the 50% intensity point.
At 15° (one half of the 30°) the intensity is at 50%.
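The mcd-versus-lumens comparison above can be reproduced with the uniform-cone approximation (an idealization; real LEDs don't have perfectly sharp beam edges, so treat the results as estimates):

```python
import math

# Luminous flux from intensity (cd) and full viewing angle, treating the
# beam as a uniform cone: flux = I * solid angle = I * 2*pi*(1 - cos(a/2)).
def lumens(candela, full_angle_deg):
    half = math.radians(full_angle_deg / 2)
    return candela * 2 * math.pi * (1 - math.cos(half))

narrow = lumens(14.4, 30)   # the question's green LED: ~3 lm
wide = lumens(7.2, 60)      # the wider comparison LED: ~6 lm
print(f"{narrow:.1f} lm vs {wide:.1f} lm")
```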
H: High AC voltage on Touch Sensitive Switch
I recently got a 120V/240V On/Off Touch Switch kit (ZIJIA P12-L56). They are a cheap way of adding Touch Lamp functionality to a standard lamp. It works great and I immediately started poking around inside it. While doing this I noticed that when I used my multimeter to measure the voltage between the touch switch and ground it read at ~99.8VAC.
That voltage differential seemed very large to me and definitely something I should feel when I touch the switch.
Why don't I feel a shock from a relatively high voltage?
I don't have a full diagram of the circuit inside but the Switch lead comes directly off the center pin of a Mosfet.
AI: You haven't provided a link or a schematic but the general scheme of things should be similar to that of Figure 1.
Figure 1. A touch dimmer circuit. (I have used this on a previous answer and have lost the source reference. It is drawn in the unique style of Elektor which I subscribed to over many decades and still do.)
Note that this has 9.4 MΩ resistance between the touch pad and the chip input and another 4.7 MΩ resistor pulling that to the '0 V' line. (The whole circuit must be treated as live. The '0 V' is just the reference point from which all voltage measurements are taken.)
The 9.4 MΩ is very close to the usual 10 MΩ input impedance of most digital multimeters. That means that if pin 5 is close to mains voltage you would expect to measure about half of that on your multimeter.
The worst case current from the touch pad would be \$ \frac {V}{R} = \frac {230}{9.4M} \approx 25\ \mathrm{\mu A} \$. This is far too low to feel.
In normal operation the input is sensitive enough to detect this small current when the circuit is coupled to ground through the capacitance of your body when you touch the switch.
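A quick sketch of both numbers, using the figure's 9.4 MΩ and an assumed 10 MΩ meter input impedance (the 4.7 MΩ pull-down and body/stray impedances are ignored, so this is only indicative of why the reading lands near 100 V):

```python
# The multimeter's ~10 Mohm input impedance forms a divider with the
# board's ~9.4 Mohm series resistance from Figure 1.
R_SERIES = 9.4e6
R_METER = 10e6

def meter_reading(v_pad):
    return v_pad * R_METER / (R_METER + R_SERIES)

def worst_case_current(v_pad):
    return v_pad / R_SERIES

print(f"230 V at pin 5 -> meter shows ~{meter_reading(230):.0f} V")
print(f"touch current <= {worst_case_current(230) * 1e6:.0f} uA")
```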
H: Diode power consumption
I need to convert from 4 V to 3.3 V. So I was thinking of a diode, or a voltage regulator. For voltage regulators, I know that they use about 5 mA to do the job.
How much current does a diode use? I can't find this information...
[sorry, I deleted the former question; it was wrong: 3.3, not 4.4]
AI: I need to convert from 4 to 3.3V. So I was thinking of a diode, ...
Presumably you are considering using the forward voltage drop, Vf of a silicon diode placed in series with the load to drop about 0.7 V to bring your voltage down.
simulate this circuit – Schematic created using CircuitLab
Figure 1. Using a diode to reduce supply voltage.
This works but you need to be aware of a few things:
It is not a regulator. It just drops voltage and the voltage drop depends on the current. At more than a few mA it typically drops about 0.7 V but at lower currents the voltage drop will decrease. The I-V curve for the diode will help you calculate this.
If V1 increases or decreases, the load voltage will increase or decrease with it. Again, it's not a regulator.
The fact that you're looking for 3.3 V suggests that you want to power a micro-controller. You need a regulator.
... or voltage regulator. Voltage regulators, I know that it uses about 5 mA to do the job.
Regulators do require some current to operate.
Regulators also require some "headroom" - the difference between the input and output voltage - to operate. If you are thinking of a 78xx series regulator then these need about 2 V or so. You don't have that much headroom so you would be looking for a "low drop-out" (LDO) regulator.
How much current a diode uses?
A diode doesn't "use" current. It passes current but reduces the voltage. The power dissipated in the diode can be calculated from \$ P = VI = 0.7 \times I \$.
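A minimal sketch of the arithmetic, assuming a fixed 0.7 V forward drop (which, as noted above, only holds above a few mA):

```python
# Series-diode "regulation": load voltage and diode dissipation.
V_IN = 4.0   # V, supply
VF = 0.7     # V, assumed forward drop

def load_voltage():
    return V_IN - VF

def diode_power(i_load_amps):
    return VF * i_load_amps

print(f"load sees ~{load_voltage():.1f} V")
print(f"at 100 mA the diode dissipates {diode_power(0.1) * 1000:.0f} mW")
```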
H: Powering a circuit from either USB or LIPO
I have a small hobby project which includes MCU and some peripherals.
It is powered by a 3.3 V power source and consumes 200 mA to 800 mA for up to a minute. However, most of the time it's in deep sleep mode where it consumes very little current (as low as 5 µA, hopefully) in order for it to last longer when powered by battery.
Now I would like to power my project by USB when USB is connected, or by one cell LIPO battery when USB is disconnected. When USB is connected I would like it to both power the circuit and charge the LIPO.
Here is the high level scheme I had in mind:
simulate this circuit – Schematic created using CircuitLab
Since USB voltage (5V) is always higher than LIPO voltage, D1 and D2 make sure that when USB is connected the LIPO is disconnected from the DC converter, and when USB is disconnected the LIPO charger is disconnected.
My questions are:
Does my high level scheme make sense? Any special consideration for selecting D1 and D2?
My main concern is how to select the DC converter.
Input voltage ranges from 3V to 5V although I can probably live with the converter shutting down when its input is below 3.3V as most of the LIPO power is consumed before it gets lower.
I'm looking for a high efficiency converter that can handle efficiently both high currents and very low currents, in order to make the battery last longer.
Would a step down converter be a better choice than LDO? What parameters should I look for in the data sheet?
AI: Your thinking goes in the right direction, but the choice of components could be better. The task of charging a single lithium battery from USB and powering the system load at the same time is so common nowadays that there are literally dozens of chips designed exactly for this purpose.
What you are looking for is called "Power Path", and it is a technology that allows input power distribution between charging and system load, powering system from the battery when external adapter disconnected and even boosting external power with battery when system draws more current than available from adapter/USB.
Note, however, that while they all have either an LDO or a switching buck to get the input voltage down to battery charging voltage, they do not have another one for 3.3 V. So, they usually output either regulated 4.2 V or the direct cell voltage. Considering your 800 mA maximum system load I'd recommend using a buck converter rather than an LDO.
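The LDO-versus-buck argument can be made concrete: an LDO's efficiency is bounded by Vout/Vin regardless of its quiescent current, while a reasonable buck holds roughly 85-95% across the input range (example input voltages below are illustrative):

```python
# An LDO drops the excess voltage as heat, so its efficiency can never
# exceed Vout / Vin.
V_OUT = 3.3

def ldo_efficiency_limit(v_in):
    return V_OUT / v_in

for v_in in (5.0, 4.2, 3.7):   # USB, full cell, nominal cell
    print(f"Vin = {v_in} V: LDO efficiency <= {ldo_efficiency_limit(v_in):.0%}")
```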
H: What kind of device in this circuit
It's the power entry part of a DAQ card; the bottom right is a DC jack. The power flow is maybe:
Power DC jack -> ferrite bead -> cap -> common mode inductor -> cap -> ?
I guess the bigger black one may be a common mode inductor too. But I've never seen such a big one in this kind of package, and it seems it's fully shielded; I can't see the coils from the side.
Can anyone give some clues?
AI: Both the black doodads appear to be common-mode chokes. Really looks like ferrite rather than a molded semiconductor package. Test with a small magnet to confirm. Ferrite should be strongly attracted, otherwise it would just be the leadframe which isn't much metal.
So a bunch of filtering to keep hash from the switchmode power supply from getting out the power port (that black thing on the lower right is a barrel jack typically used for power).
There might be an option for a larger CM choke rather than the tiny one, judging by those unused pads.
Edit: Here's what I see. Mostly 100% sure, C5 is a bit iffy. The leftmost chip is probably a SMPS chip - looks like a diode below it, and probably an inductor out of shot above it. L1 and L2 are ferrite beads.
H: What's that metal-wound filament, sponge-looking thing and what is it used for?
I'm new to soldering and ordered a cheap starter kit. It included this:
I have no idea what this is. Any hint?
AI: Cleaning a soldering iron tip
It usually sits in a metal dish, and you push the soldering iron into it and wiggle it around to clean the tip. It's an alternative to a damp sponge, which always used to be built into soldering iron stands.
You should clean the tip just before use if it has darkened. Then put it back in the stand with the solder on after making a joint, and clean again if necessary before the next joint. That way you leave a drop of solder on the tip when it is in the stand, which slows down oxidation.
H: Have I bricked my ATmega328P by setting the system clock prescaler (CLKPR)?
I was playing around with setting the "system clock prescaler" (CLKPR).
I'm now in a situation where I can see that the clock is 244.9 kHz, using my logic analyser and the "Clock output on PORTB0" fuse. (I am also using the /8 fuse.)
My code is still running on the ATMEGA, even after powering off/on.
AVRdude will no longer let me write to it, either to the flash or the to the fuses.
$ avrdude -p atmega328p -c avrispmkII -P usb -U flash:w:main.hex -F
avrdude: stk500v2_command(): command failed
avrdude: stk500v2_program_enable(): bad AVRISPmkII connection status: Unknown status 0x00
avrdude: initialization failed, rc=-1
avrdude: AVR device initialized and ready to accept instructions
avrdude: Device signature = 0x88ab81
avrdude: Expected signature for ATmega328P is 1E 95 0F
avrdude: NOTE: "flash" memory has been specified, an erase cycle will be performed
To disable this feature, specify the -D option.
avrdude done. Thank you.
Is there a failure mode where setting the AVR's clock too slow can make it unprogrammable?
The device's signature seems to have changed, it now randomly takes values which include:
0x888b02
0x88ab81
0x886bf8
0x888b02
I have already tried powering-off the ATMEGA and the AVR ISP MK-II.
Is there any way to fix this chip?
AI: Programming clock speed is limited by the MCU's clock. If the MCU runs on a clock that is too slow, programming at the default speed will fail (as you see).
You can lower the programming clock speed by using the -B switch in avrdude, provided that your programmer hardware supports it.
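A rough sketch of the required slowdown from the question's measured clock. It assumes the usual rule that the ISP SCK frequency must stay below f_cpu/4, and that avrdude's -B option is interpreted as the bit-clock period in microseconds on the AVRISP mkII; check your avrdude version's documentation for the exact semantics:

```python
# Estimate a safe avrdude -B value from the prescaled MCU clock.
F_CPU = 244_900                      # Hz, as measured on the CKOUT pin

max_sck_hz = F_CPU / 4               # ISP SCK must be below f_cpu / 4
min_period_us = 1e6 / max_sck_hz
safe_b = round(min_period_us * 1.2)  # add some margin
print(f"max SCK ~{max_sck_hz / 1000:.1f} kHz -> try 'avrdude -B {safe_b} ...'")
```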
H: Does a diode block current but not voltage?
When running this test circuit on CircuitLab, my expected result was that only M1 would open,
but then I noticed that the M4 MOSFET is also open even though there is a diode between it and the 3 V.
The gate threshold on both is set to 1.5 V.
I also noticed some strange spikes appearing on the current.
Does it mean a diode blocks current but not voltage?
simulate this circuit – Schematic created using CircuitLab
AI: Figure 3 from the 1N4148 datasheet.
Diodes have very small leakage current. At 3 V this will be between 3.5 and 10 nA.
The 2N7000 has a gate-body leakage current (forward) of 10 nA max.
The diode also has about 4 pF capacitance. When the supply voltage jumps from zero to +3 V on power-up the diode capacitance will cause the gate of M4 to jump up too.
H: Digital Logic: What are "hamming code" and "Binary code" state machines?
I'm asked to draw the circuit for a state machine in one-hot, Hamming code and binary code models. I know what a one-hot state machine is, but I'm not sure what the other two are. Google also didn't help. Any ideas?
AI: If you have a state machine with N states, there are a number of different ways to encode those states as binary logic.
One-hot encoding assigns one FF to each state, so it requires N FFs. Only one FF has the value 1 (is "hot") at a time. If at any time, more than one FF is 1, that's an error.
Binary encoding assigns sequential integers to the states, and they get encoded on \$\lceil\log_2 N\rceil\$ FFs as unsigned binary numbers.
Hamming encoding is similar to binary encoding, except that enough additional FFs are added so that the state assignments are Hamming codes that are capable of correcting single-bit errors and detecting double-bit errors. An error detector monitoring the state values can determine that an error has occurred and correct it. If the binary encoding requires \$M = \lceil\log_2 N\rceil\$ FFs, then the Hamming encoding requires an additional \$\lceil\log_2 M\rceil + 1\$ FFs.
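The three flip-flop counts can be sketched directly from the formulas above:

```python
import math

# Flip-flop counts for an N-state machine under the three encodings.
def ffs_one_hot(n):
    return n

def ffs_binary(n):
    return math.ceil(math.log2(n))

def ffs_hamming(n):
    m = ffs_binary(n)
    return m + math.ceil(math.log2(m)) + 1

for n in (8, 16, 100):
    print(f"N={n}: one-hot {ffs_one_hot(n)}, "
          f"binary {ffs_binary(n)}, Hamming {ffs_hamming(n)}")
```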
H: -20 db/decade Gain Slope requirement for Stability?
I have been learning Analog Filter designing. I started with Loop Compensation in power supply. I came across a document. It states that:
The requirement for stability is typically met if the overall gain crosses 0 dB with a slope of –20 dB/Decade.
I have read so far that for stability, there should be enough phase and gain margin, and poles and zeros should be located on the left-hand side of the s-plane.
But I am not able to understand the requirement at Gain Slope when gain crosses 0 dB. Can someone explain this to me with some graphical representation?
AI: It is correct that the stability criterion under these conditions (-20 dB/decade slope at unity gain) is met - that means: fulfilled.
However, two important remarks:
In your quote, the "overall gain" is mentioned. This sounds misleading to me. To be correct, this stability criterion applies to the LOOP GAIN of a circuit with feedback only (that means: loop not closed).
The mentioned condition ensures a phase margin of app. 90 deg - which certainly fulfills the criterion. However, even with a phase margin of only 10 deg the circuit (after closing the loop) will be stable. But in many cases this margin is not sufficient (too much ringing in the step response). But it is "stable" from the system theory point of view. For many systems a margin of app. 60 deg is desired, which gives a "good" step response with a small overshoot only.
(Further explanation: In most cases a gain drop with 20 dB/dec is connected with a phase shift of app. -90 deg. Hence, we have a reserve of an additional 90 deg until the "critical" value of -180 deg.)
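A numeric illustration with an arbitrary single-pole loop gain (K and wp below are made-up example values, not from the question): at the 0 dB crossover the slope is -20 dB/dec, the phase is close to -90 deg, and the phase margin is therefore close to 90 deg.

```python
import math

# Loop gain L(s) = K / (1 + s/wp): find the crossover and the phase margin.
K = 1000.0
WP = 1.0                                   # rad/s, assumed pole frequency

w_c = WP * math.sqrt(K**2 - 1)             # where |L(jw)| = 1
phase_deg = -math.degrees(math.atan(w_c / WP))
phase_margin = 180 + phase_deg
print(f"crossover {w_c:.0f} rad/s, phase margin {phase_margin:.2f} deg")
```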
H: Is the ATMEGA328P's serial baud rate quantised?
I'm using an ATmega328P, running from its internal oscillator (divided by 8 = 1 MHz).
I've (roughly) measured the oscillator output, using my Saleae logic analyser, as ranging from 960 kHz to 1000 kHz, so it's not awful. I did this using the "Clock output on PORTB0" fuse.
If I set the baud rate to 9,600, it outputs serial at 10,220 baud. (Is this because I'm not using a crystal, or because of quantisation?)
If I either increase F_CPU or decrease USART_BAUDRATE, gradually, the output serial baud does not decrease, until it jumps to 8,800 baud.
#define USART_BAUDRATE 9600
#define BAUD_PRESCALE (((F_CPU / (USART_BAUDRATE * 16UL))) - 1)
int main(void) {
// serial port setup
UCSR0B |= (1 << RXEN0) | (1 << TXEN0);
UCSR0C |= (1 << UCSZ00) | (1 << UCSZ01);
UBRR0H = (BAUD_PRESCALE >> 8);
UBRR0L = BAUD_PRESCALE;
...
Is there some form of quantisation affecting the output baud?
P.S. I'm using GCC on Linux to compile code, and I'm not using Arduino code/IDE.
AI: As already mentioned, this quantization is fundamental to the architecture, which can only divide the source clock by an integer.
However, your problem is quite avoidable. The ATmega UART can operate in two modes; one where it needs a clock at 16x the baud rate, and another where the clock need only be 8x.
You are using the 16x mode, which means that you need an integer divisor of 62.5 kHz (1 MHz / 16) which will yield 9600 baud. That would be about 6.5, which is not usable. If you instead divide by 6, you get a theoretical 10417 baud, which is further from 9600 than desirable.
However, if you instead use 8x mode, now you can divide 125 KHz to 9600, something the integer 13 approximates very closely to yield 9615 baud.
So the real solution to your problem is to operate the UART in "doublespeed" mode and use 8 rather than 16 in the formula for calculating the divisor. Since the actual division is the programmed value plus one, you would write 12 into the divisor registers.
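A sketch of both modes using the question's own truncating macro, which shows both where the observed ~10.2 kBd comes from and what double-speed mode buys:

```python
# Reproduce the question's BAUD_PRESCALE macro (integer truncation) in both
# modes: actual baud = F_CPU / (mult * (UBRR + 1)), mult = 16 or 8 (U2X0).
F_CPU = 1_000_000
TARGET = 9600

def ubrr_and_baud(mult):
    ubrr = F_CPU // (mult * TARGET) - 1
    return ubrr, F_CPU / (mult * (ubrr + 1))

for mult in (16, 8):
    ubrr, baud = ubrr_and_baud(mult)
    print(f"{mult}x mode: UBRR = {ubrr} -> {baud:.0f} baud")
```

The 16x result (~10417 baud) matches the measured 10,220 within the RC oscillator's tolerance; the 8x result lands within 0.2% of 9600.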
H: Thevenin equivalent using loops
I'm trying to figure out the Thevenin resistance of this circuit using loops.
Perhaps loops are the wrong way to go? I've made the circuit using a breadboard and found out that the answer should be \$\frac{3R}{5}\$.
Using loops I get that
\$V_S-IR-i_1R=0\$ (1)
\$i_1R-2i_2R=0 \Leftrightarrow i_1=2i_2\$ (2)
And from looking at the current, I can see that \$I=i_1+i_2=2i_2+i_2=3i_2\$
Then I rewrite the first equation as
\$V_S=(i_1+i_2)R+i_1R=2i_1R+i_2R \Leftrightarrow 2i_1+i_2=\frac{V_S}{R}=2(2i_2)+i_2=5i_2\$
Solving for \$i_2\$, I get \$i_2=\frac{V_S}{5R}\$, and which gives me that
\$I=3i_2=\frac{3V_S}{5R}\$. Then using Ohms law, I get that \$R_{eq}=\frac{5R}{3}\$, which isn't correct.
Where do I go wrong? Is this an example where loops can't be used to determine the equivalent resistance?
Thanks in advance
AI: Everything is correct and your loops method works.
The current \$I = \frac{3 V_S}{5 R}\$, therefore the \$R_{Th}\$ resistance is:
$$R_{Th} = \frac{V_S}{I} = \frac{V_S}{\frac{3 V_S}{5 R}} = \frac{V_S}{1} \cdot \frac{5R}{3V_S} = \frac{5R}{3} $$
And this is the resistance seen from the input voltage source's point of view.
And this is what you have found because you solve for the \$I\$ current (drawn from the voltage source).
But we can also find the resistance seen between \$A\$ and \$B\$ terminals:
\$R_{th} = (0.5R+R)||R = 1.5R||R = 0.6R\$
But this resistance can be found as I have shown here:
Why is the voltage of a capacitor equal to the voltage of a battery connected it?
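Both results can be checked numerically for, say, R = 1 Ω and Vs = 1 V:

```python
# Numeric check of both resistances discussed above.
def parallel(a, b):
    return a * b / (a + b)

R, VS = 1.0, 1.0
i2 = VS / (5 * R)                         # from the loop equations
i_total = 3 * i2                          # I = 3 * i2
r_seen_by_source = VS / i_total           # 5R/3: what the loop method gives
r_thevenin_ab = parallel(0.5 * R + R, R)  # 3R/5: looking into the A-B terminals
print(r_seen_by_source, r_thevenin_ab)
```

The two numbers differ because they answer different questions: one is the load the source sees, the other is the Thevenin resistance at the output terminals (which is what the breadboard measurement gave).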
H: Is this equation wrong?
I have trouble understanding one of the current equations in the following example:
https://www.circuitlab.com/textbook/dependent-source-feedback/
To me
i3 = (Va - Vb)/R2 which yields:
Va - Vb = 10*i3
But the example says:
Vb - Va = 10*i3
Am I wrong or the example?
AI: Remember that current, in the conventional sense, flows from positive to negative, which means the current flows from the greater voltage to the lower one.
In this case, \$I_3\$ flows downwards, from \$V_B\$ to \$V_A\$, which means \$V_B\$ is thought to be greater than \$V_A\$, although at the end of the circuit's resolution you may find that it is not actually true. Then \$I_3\$ is supposed to be \$\frac{V_B-V_A}{R_2}\$, which is exactly the example's \$V_B - V_A = 10\,i_3\$. So, for writing the equations, the link you provided brings correct reasoning.
H: LTC6994-2 signal inverted and time delayed
I use the LTC6994-2 to delay and invert an input signal. I want a 2 s delay between input and inverted output.
However, this is what I get on the output:
The green waveform is the input. The output (blue waveform) is expected to be an inverted and delayed version of the input signal. However, in my circuit, the output goes high at the start and stays that way no matter how the input changes.
Datasheet of the component:
Figure 6 on pg14 is what I am trying to accomplish.
I followed steps on page 16 in datasheets to select values of resistors:
Ndiv selection:
tdelay/16u <= Ndiv <= tdelay/1u (equation 1) => 125K<=Ndiv<=2M
Selected lowest Ndiv to reduce power consumption => Ndiv = 262,144
I selected corresponding R11/R12 resistors as suggested in datasheet:
I selected Rset from equation given in Step 3 on pg 16 (equation 2):
tdelay = (Ndiv * Rset/50K)*1u
Taking 2s delay and 262,144 Ndiv, Rset = 381.5K
Is there anything that I am doing blatantly wrong here?
AI: That is the correct functionality, per page 14 of the datasheet:
If the input doesn’t remain high or low long enough for
OUT to follow, the timing will restart on the next transition.
Also unlike the LTC6994-1, the output pulse width can
never be less than tDELAY. Therefore, the LTC6994-2 can
generate pulses with a defined minimum width.
Because the delay timer is reset on each edge transition, the LTC6994-2 is also operating as a pulse discriminator. For a valid pulse to be transmitted, the minimum pulse width must be at least 1 tdelay.
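For reference, the Rset arithmetic in the question does check out (a sketch using the datasheet formula exactly as quoted there):

```python
# LTC6994-2 delay setting: t_delay = (NDIV * Rset / 50k) * 1 us,
# solved for Rset.
T_DELAY = 2.0          # s, desired delay
NDIV = 262_144         # lowest valid divider for this delay range

rset = T_DELAY * 50_000 / (NDIV * 1e-6)
print(f"Rset ~ {rset / 1000:.1f} kOhm")   # ~381.5 k, matching the question
```

So the component values are fine; the behaviour seen is the pulse-discriminator action described above, not a calculation error.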
H: Basic Ideal Op Amp Circuit Analysis
I am trying to familiarize myself with operational amplifiers by going through this book by Sergio Franco.
This is a basic question regarding the example given on page 16, which I am struggling to understand.
According to the explanation, V3 is equal to -6V.
I know that when operated with negative feedback, an ideal op amp will output whatever voltage and current it takes to drive VD to zero, or equivalently, to force VN to track VP.
The top half of the circuit looks like an inverting amplifier circuit, with the bottom half leaving Vp (Voltage at positive input) at 0:
What am I doing wrong here?
AI: Let's perform the calculations to verify the explanation.
Since \$v1=v2\$ the 30kΩ resistor has the entire 6V from the generator across it, hence the current:
$$
i=\frac{6V}{30k\Omega}=0.2mA
$$
Since the opamp inputs (ideally) draw no current, this is the same current flowing in the 10kΩ resistor. Therefore, because the current direction is from ground to \$v1\$, we have:
$$
v_1 = - i \cdot 10k\Omega = -0.2mA \cdot 10k\Omega = -2V
$$
We already know that \$v1=v2=-2V\$ because of the opamp action (virtual short between inputs) due to negative feedback. Moreover the same current \$i\$ flows in the 20kΩ resistor, hence we can write:
$$
v_3 = v_2 - 20k\Omega \cdot i
= -2V - 20k\Omega \cdot 0.2mA
= -2V - 4V = -6V
$$
In the end your conclusion is right. I hope this explicit calculation procedure dissipates any residual doubt you have.
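The same steps as straight-line arithmetic (the resistor names are mine, matching the 30 kΩ, 10 kΩ and 20 kΩ values in the figure):

```python
# Ideal op-amp assumptions: virtual short (v1 = v2), no input current,
# so one current i flows through all three resistors.
V_SRC = 6.0
R30, R10, R20 = 30e3, 10e3, 20e3

i = V_SRC / R30          # 0.2 mA
v1 = -i * R10            # -2 V at the 10k node
v3 = v1 - i * R20        # -6 V at the output
print(i * 1e3, v1, v3)
```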
H: Normally On Circuit with Positive Trigger?
Pardon my lack of technical terms here; essentially I have an under glow kit on my car (along with some other lighting) that I want to expand the circuit on. I bought a pair of 5 pin relays to accomplish my task. Essentially the running board lights will constantly be on, until I use my turn signals in which case the appropriate running board light will blink like my turn signal.
The lights have four wires, a positive, and three negatives that create the colors.
This is the relay I bought:
Here is the diagram for the normally off circuit they have:
Here is the wiring harness:
And here is the diagram I came up with to accomplish my task. As I stated before, I’m not great with technical terms so please pardon that.
simulate this circuit – Schematic created using CircuitLab
Now, for testing I’ve scaled this down to just the relay and a single light; for some reason, I can’t seem to get the light to come on even with the turn signal on. When I attach the light straight to the battery, it comes on.
I feel like I’m just doing something wrong and can’t figure it out.
For clarity, the goal is to have the light on when the turn signal is off, and blink when the turn signal is on.
Also, if the above isn’t possible, even the normally off diagram didn’t work; how can I correct to achieve it?
AI: This is a simple schematic for alternating blinking: turn bulb on, your LED module off. Once the turn signal stops blinking, the running board light would stay on, as long as your switch is on. Side note: I hope you added a fuse to your wiring between the battery and switch. Safety first.
simulate this circuit – Schematic created using CircuitLab
The turn signal power must come from before the turn bulb, somewhere from the red circle, or a similar connection. (Wiring diagram for some unknown year of Mitsubishi Lancer, see page 9-70) Just tap/slice into it.
H: Dc motor as torque transducer for BLDC motor
I bought a cheap BLDC motor on a Chinese website and could not find its characteristics. However, I have an AmpFlow A28-400 24 V DC motor with known characteristics. My plan is to build a test rig to load the BLDC by connecting its shaft to that of the DC motor, power up the BLDC using a commercial controller, measure the current from the DC motor (acting as a generator with a rheostat in series to vary load torque), and map the current values to the DC motor's shaft torque using its chart.
How accurate and reliable is this method to measure torque, assuming zero slippage at the coupling of the motor shafts?
I'll be loading the BLDC motor until it stalls by gradually increasing its power and decreasing the resistance on the other side.
AI: My gut feel is you'll get it within 20% easily, 10% if you properly account for friction in the DC motor -- and without some independent way of verifying, you'll never know for sure.
H: Use cell phone batteries as a battery pack
In my circuit, I used a simple charger circuit that works fine with standard battery packs that come with an overcharge/discharge protection circuit like this (green PCB inside.)
But I have some restrictions in my case and needed to use shaped batteries and found cell phone batteries like this
Currently, there are many batteries from different companies which I can find many good shapes and capacities. I have two questions in general:
Are they as good as these standard batteries, I mean in life quality and capacity?
Do these batteries come with charge/discharge protection circuits? If it varies, how can I check, given that most of them cannot be opened easily?
AI: Cellphone batteries are optimized for safety, capacity, and cycle count performance. In that area they are pretty good, since if there were a better technology or chemistry, manufacturers would have used that and presented it as a competitive advantage.
I've seen both, so it varies. Testing is not too hard, but it can be somewhat time-consuming.
Discharge the battery to say 2.8 volts (via a power resistor for example) and continue monitoring its voltage as it winds down even further. If at some point the voltage goes to 0V abruptly, then it's because of the protection circuit. I'd discharge it to 2.4 and if it's still not 0V, it's likely an unprotected cell. This assumes that the overdischarge threshold is somewhere in 2.5-2.8 volts, but the exact value depends on the protection IC used, and can be lower than that, as this thread indicates (2.0V on the BL-5C). I do not recommend that you discharge to less than 2.0V "just to be sure", this will be almost sure to permanently damage the cell.
H: Finding the supply voltage, knowing the resistance and current
I am learning the basics of electrotechnics. I had this task for my university that I failed doing and I want to learn how to solve it anyway. Thanks for assistance! :)
Here is the exercise task:
I assume I need to find the \$U\$. Knowing the \$I = 14 A\$ and \$R = 8 Ω\$
This is how I calculated this:
\$ U = IR_{z} \$ where \$ R_{z} = R_{123456}\$ (equivalent resistance)
\$ R_{12} = R_{1} + R_{2} = 4 Ω + 4 Ω = 8 Ω \$
\$ R_{123} = \frac{R_{12}R_{3}}{R_{12}+R_{3}} = \frac{8 Ω * 8 Ω}{8 Ω + 8 Ω} = 4 Ω\$
\$ R_{1234} = R_{123} + R_{4} = 4 Ω + 4 Ω = 8 Ω \$
\$ R_{12345} = \frac{R_{1234}R_{5}}{R_{1234}+R_{5}} = \frac{8 Ω * 8 Ω}{8 Ω + 8 Ω} = 4 Ω\$
\$ R_{123456} = R_{12345} + R_{6} = 4 Ω + 4 Ω = 8 Ω \$
\$ R_{z} = R_{123456} = 8 Ω \$
\$ U = IR_{z} = 14 A * 8 Ω = 112 V \$
Yet somehow 112 V is the wrong answer. Any clues or hints? Where did I make an error?
AI: You calculated the voltage so that the total current provided by the supply is 14 A. But not all of that current goes through the branch identified by the "I" in the schematic. So now you need to figure out what fraction of the total current is I, and scale your input voltage appropriately to get 14 A at "I".
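A sketch of that current-division step, assuming the marked I is the current through the R1-R2 branch: per the equivalent-resistance working in the question, each of the two split points divides the current between equal 8 Ω paths, i.e. in half.

```python
# Two 50/50 current splits between the source and the marked branch.
I_BRANCH = 14.0        # A, the given branch current
R_EQUIV = 8.0          # ohm, total resistance seen by the source

i_total = I_BRANCH * 4      # undo the two halvings
u = i_total * R_EQUIV
print(f"U = {u:.0f} V")
```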
H: Finding power absorbed by resistor
Data:
\$ J = 1 A\$
\$ R = 2Ω\$
\$ R_{1} = 4Ω\$
\$ R_{2} = 6Ω\$
\$ k = 16 \$ - times
Calculations:
\$ J' = kJ = 16 *1A = 16A\$
\$ J = I_{1}+I_{2}\$
\$ J \frac{R_{1}R_{2}}{R_{1}+R_{2}} = I_{2}R_{2}\$
\$ I_{2}= J \frac{R_{1}R_{2}}{(R_{1}+R_{2})R_{2}}\$
\$ I_{2}= J \frac{R_{1}}{R_{1}+R_{2}}\$
\$P_{2} = I_{2}^{2}R_{2} = (J \frac{R_{1}}{R_{1}+R_{2}})^{2}R_{2} \$
\$P'_{2} = (J' \frac{R_{1}}{R_{1}+R_{2}})^{2}R_{2} \$
\$P'_{2} - P_{2} = (J' \frac{R_{1}}{R_{1}+R_{2}})^{2}R_{2} - (J \frac{R_{1}}{R_{1}+R_{2}})^{2}R_{2} = (J'^{2} - J^{2})(\frac{R_{1}}{R_{1}+R_{2}})^{2}R_{2}\$
Not sure how to continue, but does anyone see any obvious mistakes that I've made so far? Thanks
AI: You're making this way too complicated.
simulate this circuit – Schematic created using CircuitLab
Figure 1. First step ...
Combine R1 and R2. Now you can work out the voltage across 'R5'. Once you have that you can work out the power in R2 using the formula that relates P, V and R. Do you know it?
Note that the question as written can be solved in one step. Since \$ P = I^2R \$, if the current goes up by a factor of 16, what happens to the power? I would then say that the increment in power is that number minus 1.
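Both routes give the same numbers (a sketch using the problem data):

```python
# Route 1: the asker's current-divider expression.
# Route 2: the one-step I^2 scaling argument (factor k^2 = 256).
J, K = 1.0, 16.0
R1, R2 = 4.0, 6.0

i2 = J * R1 / (R1 + R2)        # 0.4 A into R2
p2 = i2**2 * R2                # power before scaling
p2_scaled = (K * i2)**2 * R2   # power after scaling
print(p2, p2_scaled, p2_scaled / p2)
```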
H: Double BJT circuit with one BJT off and one in saturation
In the circuit below, due to the base voltage, the circuit becomes a common-emitter setup. However, assuming a reasonable value for the base current the BJT should be in saturation.
However the value given for β is a βmin value (βmin = 30), which only holds for active forward mode. I attempted to calculate the βforced value by assuming an Isc/Is value of 100, which yielded βforced = 14.45. However using this value with KVL from B to E gives 10 - 10000ib - 0.7 - 1000(βforced + 1)ib = 0 yields an ib value of 0.365 mA and an ic value of 5.3 mA, which is different from the value of ic given in the book (4.35 mA).
What am I doing wrong here? It seems like it should be a straightforward question but I must be doing something wrong.
AI: You are overthinking this.
The emitter current is 4.8mA using your 200mV drop for a transistor in saturation. The emitter voltage is 4.8V.
The base current is (10V-(4.8V+0.7V))/10K = 450uA
The sum of all three currents flowing into the transistor must be ?? |
H: Choice of wire diameter for doll's house?
I'm constructing a 1:12 scale doll's house and I'd like to use enamelled wire run within the walls for lighting. Each room would have its own circuit controlled through a switch and then to a common 12V supply.
Some of the lamps will be LED and some will use filament type bulbs (depending on what's available). Maximum number of lights per circuit would be something like 6 eg. 2 wall lamps, 2 ceiling lamps and perhaps a table lamp or two. They'll all be wired in parallel.
I chose solderable enamelled wire as it's single core and easy to solder to miniature fittings. There also won't be a problem with insulation shrinking back when soldering. The conductors will be lightly twisted, run in a 3mm channel cut into the MDF walls and 'back filled' with plaster.
Each circuit would be fused, perhaps 750mA? Maximum length of run might be 1 metre.
Could I get some ideas on what might be a suitable conductor size for this project? I already have some 24B&S 0.5mm dia. wire handy but can't seem to find consistent data regarding the current capacity (with information varying between 1.4A and 300mA). Something with a comfortable capacity of 1A or 1.5A maybe?
Thanks so much for your advice.
AI: I always like to consult this chart https://www.powerstream.com/Wire_Size.htm
So I would go with at least 27 gauge wire. It says it has a maximum current of 1.7 amps. This may also be different depending on how easily your wires can cool. If they are in plaster that may insulate them so you may have to go thicker. If you are using 24 Gauge I would think that would be fine.
Another note, if you use DC that will generate more heat, if you are only powering LED and incandescent bulbs why not use AC? AC will allow the wires to stay cooler. The LEDs may flicker but it shouldn't be too noticeable. If you have a 12 volt AC adapter that keeps the output AC I would try that out. |
H: Thévenin's theorem exercise, resistances
I've some problem identifying the \$R_{th}\$
simulate this circuit – Schematic created using CircuitLab
Removing the load and short circuiting the Voltage source we get :
simulate this circuit
I've drawn the equivalent to better state my thought process :
First of all, in both cases \$R2\$ doesn't matter since it is short circuited.
For the first picture, if we consider a current in the circuit it wouldn't flow towards \$A\$ right? (Since it is an open circuit) And hence we'll consider \$R1\$ and \$R3\$ in series?
In the equivalent though, it somehow feels like a current will go from \$A\$ to the circuit. But I guess it's the same thing: it's an open circuit, so current flows only inside, and thus \$R1\$ and \$R3\$ are not in parallel, right? Or do we imagine a wire between \$A\$ and \$B\$ as is the case with Norton?
In the process of finding \$E_{th}\$ I separated the circuit in two :
simulate this circuit
\$R_{th}\$ is \$R1\$ but I couldn't figure out \$E_{th}\$
$$E_{th}+R_1i_1-E=0 \Leftrightarrow i_1=\frac{E-E_{th}}{R_1}$$
$$E-R_2i_2=0 \Leftrightarrow i_2=\frac{E}{R_2}$$
And that's it, can't go further, The \$C\$ -> \$R_1\$ -> \$R_2\$ -> \$D\$ -> \$C\$ KVL is not useful. (I've 0 current values btw)
Thank you for your time!
AI: Between point \$A\$ and point \$B\$ the current can take two independent paths.
As I try to show here:
All this means that the \$ R_1\$ and \$R_3\$ are connected in parallel.
As for the \$E_{TH}\$ you do not need to separate anything.
simulate this circuit – Schematic created using CircuitLab
All you need to do is to find the voltage across the \$R_3\$ resistor.
And you do not have to worry about the \$R_2\$ resistor, as it will not have any influence on the \$E_{TH}\$ voltage: \$R_2\$ is connected across the ideal voltage source \$E_1\$, and the source fixes the voltage across \$R_2\$.
So you are left with this circuit:
simulate this circuit |
H: why is this microcontroller brown-out level detection at 2.6V and 3.4V
The default brown-out detection level for the ATmega32u4 is 2.6V (typical) in the range of 2.4V to 2.8V. The table of possible values:
2.0V
2.2V
2.4V
2.6V default
3.4V
3.5V
4.3V
Unfortunately there is a large gap between 2.6V and 3.4V. Another possibility is "brown-out detection disabled" which is not in my table but it certainly exists.
My application uses an Adafruit board at 3.3V and 8MHz. The application is non-life threatening but accuracy is always of interest so I ought to benefit from brown-out detection, if feasible. This board aside, 3.3V seems to be a popular voltage for a microcontroller Vcc level so the default 2.6V brown-out level seems to be a good choice except for the fact that the datasheet also has this graph.
One interpretation is that at 2.65V (which is above the default brown-out level) a frequency of 8MHz is in some way unsafe and the safe frequency is somewhat lower by extrapolation. I don't want to run slower than 8MHz. Another interpretation is that the graph gives you information about the maximum frequency between 2.7V and 5.5V and no other conclusions should be drawn. There should be no extrapolation.
My question is: What were the design considerations that caused there to be a large gap between 2.6V and 3.4V on the brown-out level table? It seems wasteful to have to choose a 3.4V brown-out level in a battery powered application.
AI: You need to look at the tolerance of the brownout voltage threshold as well as the nominal value.
The 2.6V setting (and really, any of the settings) is useless for a 3.3V nominal supply at any clock frequency. It might work, but is that good enough for you? It isn't for me.
If you need a reliable BOD reset you'll have to supply it externally or use different chip.
Perhaps they committed to the settings before they characterized the chip and figured out that it will not operate reliably at (say) 1.8V. |
H: What program would be appropriate for designing a reimplementation of the TurboGrafX16 at gate level?
To be clear, I'm looking for a program I could use to redesign the TurboGrafX hardware at gate level, but to gate to gate, just for functionality.
Also, given that I'm interested in GPU design, what program do they most likely use for hardware design at a company like NVidia?
The only program I know of that could do this kind of thing is Logisim, which isn't industry standard, by a long shot.
AI: At Nvidia, they probably use Cadence. I think that's pretty much the only option when it comes to designing digital circuitry of that scale on cutting-edge processes. But don't bother trying to use it unless you are taking graduate-level digital design courses and have access to the software through a university or similar; the software is incredibly complex and licenses run into the hundreds of thousands of dollars per seat. And for that kind of design you're usually working at an RTL level or higher, so no schematics. |
H: Pull-down resistors on logic gate and decoder inputs?
I'm designing a very basic circuit for testing a switch sequence. I'm using a CMOS decoder CD4028B (link to datasheet) and XOR gate (CD74HC86E - link to datasheet). Reading the switch sequence is crucial and therefore I've been thinking whether I should add pull-down resistors to all inputs of the decoder and XOR gate to prevent pin-floating and false readings? I went through the datasheet and there was no recommendation whether pull-downs should be added to prevent pin-floating. Do these logic gates usually provide internal pull-down resistors or is it necessary to add them yourself? What values are recommended? I want to draw as little current as possible. I'm sorry if this is a really basic question. We made quite a few circuits with logic gates in college but never talked about the pin-floating and current draw aspect of these circuits.
Inputs of decoder are wired to the switch that is being tested, outputs of decoder are then wired to XOR gate inputs and XOR gate outputs are then wired to BJTs to drive LEDs.
Any help would be greatly appreciated. :)
Thank you.
AI: CD40xx and HC74xx circuits do not include pullups or pulldowns on their inputs.
CD40xx and HC74xx inputs must not be allowed to float. They must always be driven with a defined logic level, from a logic output, from a pullup or pulldown, or a switch to ground or rail.
If CMOS inputs are allowed to float, they could end up in the middle linear region, which at best causes nonsense outputs, and at worst causes oscillation which draws excessive power and causes the chip to fail through overheating.
As CMOS inputs need very little current (nominally zero, leakage only), very large resistors can be used as pullups or pulldowns if speed is not an issue. 100 kohm to 1 Mohm would draw only a few tens of uA at most, and would be more than enough to keep all types of CD40xx inputs biassed properly (WRB says he's found one brand that specifies 5uA leakage max, though I've only seen 1uA in my admittedly quick survey of them). |
H: Two capacitors of same value connected in series and connected to power supply instead of one capacitor of equivalent value for PCB design
What's the purpose of two capacitors of the same value connected in series to the power supply, instead of one capacitor of equivalent value, in PCB design, especially for automotive applications?
AI: Ceramic MLCCs by chance?
The things have a known failure mode (cracking) which usually results in the things failing short.
Placing two in series (ideally with different orientations) massively reduces the risk of an expensive in-the-field failure taking out something important.
There are so called 'soft termination' parts (all the vendors have their own tradename for these) that are massively less prone to this failure, but they are usually more expensive than two standard ones.
If you google "MLCC Crack" you will find much discussion on this. |
H: Convert ADC value to real voltage
I want to measure the input voltage to my circuit. The input voltage is supposed to be anything between 7 to 35 Volts.
I have a micro-controller with 12bit ADC. I am using a simple voltage divider to connect a fraction of input voltage to the ADC pin.
The voltage divider is simply a R1 = 82k on top and R2 = 5.6k resistor on bottom.
According to my calculation, at 35V I should get (never mind the tolerance):
V_IN = (V_SUPPLY * R2) / (R1 + R2) = (V_SUPPLY * 5k6) / (82k + 5k6) = 2.24
And the same calculation at 7 Volts I should get 0.4V into the ADC pin.
Now the question is, how to convert the raw value of the ADC to a voltage?
If I supply my circuit with 12V and if I did the programming of ADC correct, I am reading the raw value of around 840 out of my ADC.
The ADC reference voltage of controller is 3.3 Volts.
AI: V = (raw / 4096) × Vref × (82K + 5.6K) / 5.6K, where raw is the ADC reading.
Which gives me about 10.6V, which I suspect is too far off from 12.0V to be explained by even 5% resistor tolerances and a few percent Vref tolerance. |
H: SSC TLE5012B read out
I have some trouble with understanding the following IC and its SSC interface:
https://www.infineon.com/dgdl/Infineon-TLE5012B_Exxxx-DS-v02_01-EN.pdf?fileId=db3a304334fac4c601350f31c43c433f
It is stated in the datasheet, that the SSC interface is SPI compatible, exactly what I need. Now first of all, how should I connect this SSC interface to an SPI interface? Can I do it as follows:
SSC -> SPI
CSQ -> SS
SCK -> SCK
DATA -> MOSI/MISO shorted together
Second, I'm not even sure if I can read out the absolute angular position of this sensor over the SSC interface alone, and I also cannot find any list with all the available registers I can read/write to. I think this normally should be included in the datasheet. On the website of Infineon no more datasheets are available that specify these SSC registers. Anyone have an idea where this could be hidden?
AI: Register and SPI communication protocol details are in the User's Manual, not the datasheet. |
H: Communicating between 2 different microcontrollers
My Arduino runs at 16MHz clock speed; another microcontroller runs at clock speed of 13MHz. If I send digital output directly from an output pin of the former to input pin of the latter, there will be a loss of data and the transmission will be corrupted.
Question 1: How do I get the MCUs to properly transmit the data without any corruption? Do I need to sync them somehow, or I should use another device in middle as some kind of a buffer (or maybe to send data at a lower rate)?
Question 2: If I can send data at a lower rate from MCU#1 to MCU#2 will there be a phasing difference which would result in data corruption?
AI: The points you make are absolutely valid. What you're missing are some details that you could get by reading more on communication protocols. Here are some of them.
The communication speed should always be chosen based on the required speed, not the maximum one that can be achieved.
The reason for this is that, with some exceptions (transceivers, buffers etc.), the data transferred must be either obtained or processed somehow. Input from a human interface certainly works on a completely different time scale. And if your controller takes several seconds to process 1 MB of data, it would be pointless to transfer it at 16 Mbps.
Because signal-to-noise ratio decreases with distance, the maximum achievable bandwidth decreases too.
There is a term "bandwidth-distance product" often used in communication. This is another reason why direct MCU-to-MCU connections rarely use such high data rates.
In modern MCUs the communication speeds are often independent from the CPU clocks.
For example, Xmega MCUs have peripheral clocks running at 2 or even 4 times the CPU speeds. Furthermore, controllers with USB interfaces often have their own oscillators.
Various communication protocols are now supported in hardware.
Synchronous protocols like SPI (or I2C on slave side) have their clock signals coming from the different MCU. So, the hardware can use that clock to shift data to/from the buffer and only involve processor at the end of a message. More advanced MCUs with DMA support can even move the data between different peripherals or memory without involving CPU at all.
Asynchronous protocols like UART or CAN require synchronized clocks. They begin timing at the start bit and then sample the inputs once approximately at the mid-clock point (UART) or up to three times at about 75% of the clock pulse (CAN). Obviously the data integrity depends on the clocking precision. While CAN controllers can adjust their clocks using phase shift information, simpler UARTs cannot.
One common practice to achieve better UART synchronization is to use crystals with frequencies easily divisible to common serial baud rates. For example, instead of running the aforementioned Xmega controllers at their maximum 32 MHz you can often see them with 14.7456 MHz crystals, running at 29.49 MHz.
Regardless of the protocol, the combination of hardware buffering and DMA transfers make transfer bandwidth fairly independent from CPU clocks.
Finally, when communication speeds go up it is more common to see separate controller chips rather than direct connection to MCU.
Not only that, but you can usually see parallel bus connections between MCU and high-speed transceivers like LAN or LVDS display. This is because the communication bus throughput becomes faster than can be passed serially through a single MCU pin.
You've mentioned Ethernet in one of your comments. You should realize that with 1 Gbps Ethernet speeds no 16 MHz MCU stands a chance of processing that avalanche of traffic. For those speeds you should be looking at much more capable hardware, like used in RPi, for example.
By the way, this last point is just another form of point #1, i.e. if you cannot reduce data rate to something your MCU can handle, it logically follows that you need faster processor to deal with data flow. |
H: Why does my 3 V EL inverter make sound?
The question is purely academic, as normal sound dampening solutions seem to work fine.
I understand how electronic circuits make sound as per this answer:
How can "purely" electrical circuits emit sound?
My question is, a cheap 2 AA battery (3 volt) inverter makes as much sound as the cathode ray moving on an old tube television.
What part in it is "moving" that creates that much noise?
How much amplitude is created? and how? The sound can be heard from across quiet room.
Normally high frequency sounds are very easy to locate the direction. However, in this instance, why does the sound appear directionless, like a low frequency sound?
Is it possible to invert DC to AC without audible noise? Or is this a mechanical feature?
AI: I presume your inverter is operating at an audible frequency. If your transformer or inductor has multiple parts it could be one part moving relative to the other due to the magnetic forces or the magnetic materials changing shape due to magnetostriction. The usual solution is to increase the frequency to beyond the audible range, i.e. > 20 kHz. Regarding the perceived direction, could it be that the wavelength is short enough relative to the spacing of your ears for there to be ambiguity? ~23 mm at 15 kHz. |
H: Why Rod Core ignition coil in cars instead of E-I core or toroid core?
Why use a Rod Core ignition coil in cars instead of E-I core or toroid core? It appears the Rod Core coil in an automobile is just a Step-Up transformer. Why do car manufacturers use a ROD CORE? Is it superior for DC pulses?
AI: Think of it as a coupled inductors. The energy for the spark is stored in the coil during the period when the contact breaker (or transistor equivalent) is closed. When the breaker opens, the primary inductance and the capacitor across the breaker form a resonant circuit so the voltage rises to approximately 250 V in a roughly half sine waveform. This is transformed to some thousands of volts at the secondary for the spark. If the magnetic circuit were closed, the inductance would be higher, the rate of increase in current would be lower and the core would be more likely to saturate. In a capacitive discharge system the energy is stored in a capacitor rather than the coil so I imagine a transformer would work in this case. |
H: Deriving the Integral Voltage-Current Relationship of a Capacitor
I am having trouble understanding the derivation of the capacitor voltage equation in my circuits textbook.
Here is the process they followed from the textbook
My confusion is: when the initial voltage across the capacitor is not able to be discerned, that it is "mathematically convenient to set t0 = −∞ and v(−∞) = 0"
Why would t0 be set to −∞ and wouldn't v(−∞) = −∞ not 0?
AI: In the interest of having an actual answer posted after working this out in comments:
This is not a universal rule. It only works for certain types of current waveforms, specifically when
$$\lim_{t_0\to -\infty}\int_{t_0}^{t_1} i(t) {\rm d}t$$ is well defined for some finite \$t_1\$.
It is reasonable to assume the "initial" voltage as \$t\to-\infty\$ is 0 because the leakage current in real capacitors means that any charge that was present a sufficiently long time ago will have leaked away and will not affect the behavior of the circuit in the time range we're actually interested in. |
H: Determine MC34063AP Duty Cycle
A fuzzy area I want to clear up on the MC34063AP is Duty Cycle.
In the MC34063AP datasheet, toff and ton together make up 1 / f, which is obvious.
toff is calculated by: toff = (1 / f) / (ton/toff + 1)
ton is calculated by: ton = (1 / f) - toff
The timing capacitor CT is given by: CT = 4.0 × 10^-5 × ton
What I cannot find, and it seems there is no clear guidance, is which component (resistor, capacitor or inductor) determines the Duty Cycle.
If, for example, I had an existing driver, and I wanted to modify the duty cycle, how would I go about it?
Configuration: Step Down
AI: Look up the AN920A/D and slva252b.pdf (Texas Instruments) for all the gory details on how this chip works. There are several free MC34063 calculator programs available that help out with the component values.
It's not a super trivial process to do by hand.
The maximum Ton:Toff ratio is 6:1, assuming no current limits reduce that; the timings are controlled by the charge capacitor and the chip's internal current sources. Reference sections 1.2-1.4 of the TI note. |
H: use sram as logic analyzer?
So I was thinking of buying a logic analyzer and found that a lot of the cheap ones are good to only a few MHz, and the microcontroller-based ones can't have big buffers. So I was wondering if I could just use RAM as a buffer? Like if it has 16 data lines and a 20-bit address, could I just feed in 16 digital inputs (the RAM I'm looking at reads >2.2V as HIGH so should be compatible with 3.3V logic), and a 20-bit address counter run by a very carefully picked crystal or some other kind of stable clock generation? I figure if the RAM has a 45 ns write time, I should be able to easily get 22-odd MHz for only a few bucks, and then read it back with a slow microcontroller or something; it doesn't need to access that data very fast.
However, I find that usually when I (and many other newbies) have these "why doesn't anyone do this?" ideas, there's usually a good reason.
AI: What you describe is more of a data recorder than a logic analyzer. A logic analyzer includes the ability to generate a trigger based on the input data. It stops storing when the trigger sequence and/or pattern is detected so that you can examine the stored data.
Also, you would need to add an external register between the outside world and the SRAM that is clocked by the same clock as the SRAM so that the data is stable as it is being written.
With some refinements, what you describe is possible and you would certainly learn a lot implementing it. To save time, you could perhaps even use a low cost FPGA development board that has some built in memory as a starting point.
However, if you just need a tool to use now, it's probably best to get the cheapest USB based analyzer with enough memory and speed that you can find. |
H: How to produce a middle C on intel 8080?
This is an example problem in my book. Assuming pin 5 of port 4 is connected to an amplifier that drives a loudspeaker, the solution is given as,
The frequency of middle C is $$f=261.63\ \text{Hz}$$
So, the time period is,
$$T=\frac1{f}=3822\ \mu s$$
The program to produce a square wave with that period is,
LOOP1: OUT 4H ;Send bit to speaker
MVI C,86H ;Set count to 134
LOOP2: DCR C ;Count down
JNZ LOOP2 ;Test count
CMA ;Reset bit 5
NOP ;Fine tune
NOP ;Fine tune
JMP LOOP1 ;Go for next half cycle
The number of T states is given as, OUT(10), MVI(7), DCR(4), JNZ(10 if true, else 7), CMA(4), NOP(4), JMP(10).
With a clock frequency of 1 MHz, LOOP2(for half cycles) runs for 1912 microseconds, which is close enough. LOOP1 should run again sending the complement of what was previously in bit 5 of port 4. But I think it doesn't.
When LOOP2 ends, the accumulator has 00H left over from the C register. CMA changes the accumulator to FFH. NOP and JMP don't change the accumulator. So when LOOP1 iterates for the next half cycle, OUT sends the accumulator contents to port 4, i.e., FFH, whose bit 5 is 1 every time. So there is not a square wave; it's just a high signal. Then how does it produce a middle C?
AI: DCR C decrements the C register and sets flags; it does not affect the accumulator (aka A register). The only instruction in this sequence which affects the accumulator is the CMA. Thus on each pass through LOOP1, the accumulator will be complemented - bit 5 high on one cycle and low on the next.
Many sources including the 8085 datasheet describe the 8080/8085 ALU as operating directly on the accumulator, but this is an oversimplification. As described in this Ken Shirriff article, the ALU has two temporary registers:
The ALU uses two temporary registers that are not directly visible to the programmer. The Accumulator Temporary register (ACT) holds the accumulator value while an ALU operation is performed. This allows the accumulator to be updated with the new value without causing a race condition. The second temporary register (TMP) holds the other argument for the ALU operation. The TMP register typically holds a value from memory or another register.
...
The ACT register has several important functions. First, it holds the input to the ALU. This allows the results from the ALU to be written back to the accumulator without disturbing the input, which would cause instability. Second, the ACT can hold constant values (e.g. for incrementing or decrementing, or decimal adjustment) without affecting the accumulator. Finally, the ACT allows ALU operations that don't use the accumulator.
For the DCR instructions, the ACT holds a constant, the TMP receives the current contents of the operand register, and an ADD operation is performed; for DCR C, the accumulator remains untouched:
The control lines allow the ACT register to be loaded with a variety of constants. The 0/fe_to_act control line loads either 0 or 0xfe into the ACT; the value is selected by the sel_0_fe control line. The value 0 has a variety of uses. ORing a value with 0 allows the value to pass through the ALU unchanged. If the carry is set, ADDing to 0 performs an increment. The value 0xfe (signed -2) is used only for the DCR (decrement by 1) instruction. You might think the value 0xff (signed -1) would be more appropriate, but if the carry is set, ADDing 0xfe decrements by 1. I think the motivation is so both increments and decrements have the carry set, and thus can use the same logic to control the carry.
Since the 8085 has a 16-bit increment/decrement circuit, you might wonder why the ALU is also used for increment/decrement. The main reason is that using the ALU allows the condition flags to be set by INR and DCR. In contrast, the 16-bit increment and decrement instructions (INX and DCX) use the incrementer/decrementer, and as a consequence the flags are not updated.
There is only a single set of flags in the 8080; DCR affects the Zero, Sign, Parity, and Aux-carry flags. |
H: Relation between bandwidth, response time and disturbance rejection for a control system
I read the following line in a control system book:
Higher bandwidth corresponds to better command following, disturbance rejection and sensitivity to parametric variations. On the other hand, achievable bandwidth is limited by presence of noise and dynamic uncertainties.
Can you please explain how " Higher bandwidth corresponds to better command following, disturbance rejection and sensitivity to parametric variations"
AI: Every system has a time and frequency spectrum of inputs and outputs determined by the mass and energy storage in the system, the choice of inputs, the sensors, and tolerance vs resolution vs time response. The ideal system matches the signal bandwidth to the desired spectrum to get the desired response, with many possible choices of criteria:
Maximum SNR, minimum group delay, minimum overshoot, minimum step error, maximum disturbance rejection, linear phase, min. static error, dynamic error, min. Integrated squared error, optimal anticipated min. error, maximal interference jitter rejection, min BER, max fault tolerance, max efficiency, max climatic stability, min. FIT rate or max MTBF with redundancy, optimal fault detection/correction/recovery.
You must define the mission, control system with a hierarchical-input-process-output or HIPO design with full design specs. Then the spectral and time response are just necessary parameters to specify for the solution.
The best simple explanation is that the step-response rise time from 10% to 90% is inversely proportional to the loop bandwidth, the frequency at which the output signal power falls to half (-3 dB):
Tr=0.35/f
for f = max freq. BW @ -3dB. But the Q factor (roughly the inverse of the damping factor) must be low enough to reduce ringing or overshoot, with compensation to ensure proper stability.
Road Analogy;
Too much delay in feedback might look like a drunk driver wandering off centre; a hyperactive one might kill someone in the next lane while avoiding a sitting duck; and a tired or lazy driver might risk more accidents by sampling the road traffic every few seconds rather than being attentive with a visual 25 frames per second. |
H: Differential equations for a transformer
I would like to set up differential equations (please, no phasors in answers) for the circuit below:
So far I have two equations:
where I assume perfect magnetic coupling (so I treat M as a known constant).
Initial conditions are that at time=0, both i1 and i2 are zero.
The system is underdetermined and I have to put one more equation. What equation is it?
My intention is to solve for i2. Again, please no suggestions with phasors. I am looking into a more general case where v1 isn't necessarily a perfect sinusoidal function.
AI: You have two unknowns, \$i_1, i_2\$ and two equations, so your equations are solvable.
Solving these equations are generally done using the Laplace transform.
$$\left\{ \begin{align}
i_1R_1 + L_1\frac{di_1}{dt} - M\frac{di_2}{dt} &= v_1(t)\\
-M\frac{di_1}{dt} + i_2R_2 + L_2\frac{di_2}{dt} &= 0
\end{align}\right.$$
Leads to
$$\left\{ \begin{align}
(R_1+L_1s)&\cdot I_1 &- Ms\cdot I_2 & = V_1(s)\\
-Ms&\cdot I_1 &+ (R_2+L_2s)\cdot I_2 & = 0
\end{align}\right.$$
$$I_2 = \frac{\left|\begin{matrix}
R_1+L_1s & V_1(s) \\
-Ms & 0
\end{matrix}\right|
}{\left|\begin{matrix}
R_1+L_1s & -Ms \\
-Ms & R_2+L_2s
\end{matrix}\right|
}=\frac{Ms\cdot V_1(s)}{(R_1+L_1s)(R_2+L_2s)-M^2s^2}$$
If you prefer differential equations you can always go back using:
$$\begin{align}
\left[(R_1+L_1s)(R_2+L_2s)-M^2s^2\right]\cdot I_2(s) &= Ms\cdot V_1(s)\\
&\Downarrow\\
\left[R_1R_2 + (R_1L_2 + R_2L_1)s + (L_1L_2-M^2)s^2\right]\cdot I_2(s) &= Ms\cdot V_1(s)\\
&\Downarrow \mathcal{L}^{-1}\\
R_1R_2\cdot i_2(t)+(R_1L_2+R_2L_1)\frac{di_2}{dt}+(L_1L_2-M^2)\frac{d^2i_2}{dt^2} &= M\frac{dv_1}{dt}
\end{align}$$ |
H: How to know voltage used on a data bus?
How do you find out what voltage is being used on a data bus for the high signal?
On a small electronics I guess you can assume it will be 3.3v or 5v but what about other circuits, say in a car or on the side of a TV or some old random electrical equipment without knowing the spec of a port.
If I wanted to connect, say, a logic analyser to some digital bus without knowing anything about it, how would I first find out the voltage, to make sure I don't blow up my logic analyser? And then, once I know what the peak voltage will be, how do I bring the voltage down to the right level (some sort of attenuator?) so that I can see the data on the bus?
AI: If you have a data bus then you'll probably have a clock signal and that clock signal will be about 50% duty cycle. So, using a DC voltmeter you could measure the clock and ascertain the average DC voltage then multiply that by 2 to get the logic 1 voltage (approximately).
If you don't have a clock signal then try measuring a data signal with the voltmeter and usually you will see about 50% of the logic voltage if the data signal is actively sending data. |
H: 12 V input to multiple 5 V outputs?
I have that project, where I have as an input voltage 12 V (lead acid battery) and would like to have 4 output of 5 V 1 A that would be switched on and off with a mosfet.
I'm thinking which way you would recommend me to proceed.
Option 1 : Use a single voltage converter 12 V to 5 V with 4 A( for the 4 outputs) and then have the 4 mosfets connected to the 5 V line of the output.
Option 2 : Use 4 voltage converters 12 V to 5 V with 1 A for each outputs connected to the battery and connect each output to a mosfet.
Thanks in advance!
AI: Option 1 is the simplest and probably least expensive, since you only have a single supply plus 4 MOSFETS.
Option 2 could be done without MOSFETs by using regulators that have an enable input.
Note that because of the large voltage differential and high current, a switching supply is the best choice. That means that option 2 has 4 inductors, which could be a significant difference in size and cost.
A third option would be to use a single high current switching supply to convert 12V to ~5.5V, and then use low cost linear LDOs with enable inputs to convert the 5.5V to 5V. This has the added benefit of reducing the switching noise in the 5V output, since the final stage is a linear regulator.
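A quick comparison of regulator heat makes the point; the all-linear case is hypothetical, and the 5.5 V intermediate rail is the one suggested above:

```python
def linear_dissipation_w(v_in, v_out, i_load):
    """Heat dissipated by a linear regulator: P = (Vin - Vout) * I."""
    return (v_in - v_out) * i_load

p_all_linear = linear_dissipation_w(12.0, 5.0, 4.0)  # hypothetical linear-only 12 V -> 5 V
p_ldo_stage  = linear_dissipation_w(5.5, 5.0, 4.0)   # the final 5.5 V -> 5 V LDO stage
print(p_all_linear, p_ldo_stage)  # 28.0 2.0 (watts)
```

28 W of heat is why the first stage should be a switcher; 2 W spread across four 1 A LDOs is easy to manage.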
H: Compile time calculations in C
I have been implementing control software in the C programming language. One of the software modules represents a communication table for a proprietary communication protocol. Each record in this table has one item (among others) consisting of 8 bits. This item represents the properties associated with the record. Each record can have the properties given below:
reported,
event_logged,
time_logged,
archived,
global
I need to define the communication table, and for the sake of readability I don't want to fill the properties of each record with raw values 0-31. My idea was to first define the bit masks given below for the individual bits in the properties byte:
#define REPORTED 0x10
#define EVENT_LOGGED 0x08
#define TIME_LOGGED 0x04
#define ARCHIVED 0x02
#define GLOBAL 0x01
Then I wanted to define a macro with parameters which will prepare the content of the properties byte based on human-readable values:
#define Create_properties(reported, event_logged, time_logged, archived, global){ \
(uint8_t)reported | (uint8_t)event_logged | (uint8_t)time_logged | \
(uint8_t)global \
}
The planned usage is the following. For example, one of the records in the communication table will be event logged and time logged, so I will write the macro in this manner:
Create_properties(0, EVENT_LOGGED, TIME_LOGGED, 0, 0)
and the expected result is 01100. I have been facing a problem: I am not able to compile this source code. I always receive the compiler error "braces around scalar initializer". I have been using the gcc compiler. Does anybody know why this error occurs? Thanks for any ideas.
AI: Curly braces {} are not used for macro definitions like this: the compiler sees the braced expression as an initializer list around a scalar, which is exactly what the "braces around scalar initializer" error is complaining about. Remove the braces, and wrap the expression in parentheses instead, which is good macro hygiene. Note that your macro body also never uses its archived parameter; the version below includes it:

#define Create_properties(reported, event_logged, time_logged, archived, global) \
    ((uint8_t)reported | (uint8_t)event_logged | (uint8_t)time_logged | \
     (uint8_t)archived | (uint8_t)global)
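To sanity-check the bit arithmetic the macro is meant to perform, here is the same computation sketched in Python (mask values from the question; this version also ORs in the archived flag, which the original macro body accidentally drops):

```python
# Mirror of the C bit masks from the question
REPORTED     = 0x10
EVENT_LOGGED = 0x08
TIME_LOGGED  = 0x04
ARCHIVED     = 0x02
GLOBAL       = 0x01

def create_properties(reported, event_logged, time_logged, archived, global_):
    # OR the five flags together, as the (brace-free) C macro does
    return reported | event_logged | time_logged | archived | global_

props = create_properties(0, EVENT_LOGGED, TIME_LOGGED, 0, 0)
print(format(props, '05b'))  # -> 01100, the expected result from the question
```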
H: Using P-channel FET as switch
I'm designing a charger circuit. When the input (+5 V) is available it should disable the booster regulator by setting Booster_en low. I used a single P-channel MOSFET as in the circuit below. I added a pull-down on the gate of the FET so that when there is no +5 V the FET turns on and Booster_en goes high. I found many examples that use a pull-up on the FET's gate, and some of them use a Schottky diode before VCC. Do I need to use two resistors, or are there simpler ways? How about the diode?
EDIT:
Here is the first version of the circuit. The problem was that Booster_en becomes floating when VUSB is off. On the other side, I want to fully isolate VBat when +5 V (VUSB) is present. I also removed the Q4 transistor and connected R9 to ground; I think it is useless, because when there is no VDD for charging it would turn off anyway.
AI: Personally, I find the most reliable method of switching a PMOSFET is by using an NMOSFET. The circuit I generally tend to use is like this:
simulate this circuit – Schematic created using CircuitLab
With this circuit, R1 is pulling up the gate of M1, keeping it off. R2 is pulling down the gate of M2, also keeping it off. Once the 5V is applied to the gate of M2, it will pull the gate of M1 low, which will allow M1 to enable the Boost_EN pin.
This is a circuit I have used many times and it has always been reliable.
Edit due to addition of circuit in original question
The original wording of the question sounded like the EN pin needed to go HIGH when 5V was applied. Now it seems that the intention is for the pin to go LOW when 5V is applied. This simplifies things. Replace M1 with a pull-up resistor:
simulate this circuit
With this circuit, your EN pin is always pulled HIGH by the resistor R1. When 5V is applied, M2 will pull the pin LOW.
H: How does a rocking arm voltage regulator work?
I am trying to determine the suitability of replacing a (ancient) rocking arm voltage regulator with a modern digital device.
I assume a rocking arm regulator has an electromechanical or magnetic feedback mechanism, but I have not been able to find a detailed description of its operation or much information on how quickly such regulators respond to transients.
Does anyone with experience have an explanation of how they work? Are these devices mag-amp based? Are they known by any other names?
AI: Pete Becker found a good description on this forum, to wit:
Inside the voltage regulator you'll find an array of carbon segments arranged in a semi-circle. A pivoting arm rocks back and forth across the carbon segments (hence the name "rocking contact").
A spring tends to set the adjustment arm in a position for minimum resistance (= more field current = more output voltage.) The tension of this spring is counteracted by an electromagnet that moves the adjustment arm in a manner to raise resistance in response to the generator voltage. Sort of like a giant rheostat, but the arm compresses the carbon segments instead of just sweeping across them.
In other words, No/low voltage - spring pushes the arm for minimum resistance. High voltage - electromagnet (connected to generator output) counteracts spring for more resistance. Correct voltage - spring and electromagnet tension are equal, arm stays where it is. A dashpot or some other form of 'shock absorber' is usually used to damp the movement of the arm.
— B.Sparks
Response speed is going to be controlled by the mechanical inertia of the parts, combined with the action of the dashpot (if any).
Replacing this with an electronic circuit should be straightforward. You need a transistor that can handle the field current and voltage, and a comparator that monitors the output voltage relative to a setpoint to drive it. Be conservative with gain and bandwidth — stability is the key here. You don't want this going into oscillation or even exhibiting any significant overshoot.
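To make the feedback loop concrete, here is a deliberately crude numerical sketch. The generator model and every constant are invented for illustration only; it just shows how the spring/electromagnet tug-of-war settles the output at the setpoint:

```python
def simulate_rocking_arm(setpoint=14.0, gain=0.002, steps=500):
    """Toy negative-feedback model of the rocking-arm regulator.
    All numbers are illustrative, not taken from any real generator."""
    resistance = 1.0  # arbitrary starting field resistance (spring end-stop)
    voltage = 0.0
    for _ in range(steps):
        # Crude generator model: output voltage falls as field resistance rises
        voltage = 20.0 / resistance
        # The electromagnet (fed by the output) fights the spring: too much
        # voltage nudges the arm toward higher resistance, and vice versa
        resistance += gain * (voltage - setpoint)
        resistance = max(resistance, 0.5)
    return voltage

print(round(simulate_rocking_arm(), 2))  # settles near the 14 V setpoint
```

Too much gain (or no damping, the dashpot's job) makes the same loop oscillate, which is exactly the stability warning above.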
H: Conversion from dBm to dB\$\mu\$V?
I am trying to derive the conversion from dBm to dB\$\mu\$V. Please correct me if I am wrong.
\begin{align}
\textrm{dBm}
&= 10 \log_{10} \left( \frac{P}{1\textrm{mW}} \right) = 10 \log(P) + 30 \\
&= 10 \log_{10} \left( \frac{ V^2}{Z} \right) + 30\\
&= 10 \log_{10} \left( V^2 \right) - 10 \log_{10}(Z) + 30\\
&= 20 \log_{10} \left( V \right) - 10 \log_{10}(Z) + 30\\
&= 20 \log_{10} \left( \frac{V}{1\mu V} 10^{-6} \right) - 10 \log_{10}(Z) + 30\\
&= \underbrace{20 \log_{10} \left( \frac{V}{1\mu V} \right)}_{\textrm{dB}\mu V} + \underbrace{20\log_{10}\left( 10^{-6} \right)}_{-120} - 10 \log_{10}(Z) + 30\\
&= \textrm{dB}\mu V -120 - 10 \log_{10}(Z) + 30\\
&= \textrm{dB}\mu V -90 - 10 \log_{10}(Z) \\
\Leftrightarrow \\
\textrm{dB}\mu V &= \textrm{dBm} + 90 + 10 \log_{10}(Z)
\end{align}
AI: dBm is short for dBmW: decibels referenced to one milliwatt. The measurement is in milli-watts.
dBuV is decibels referenced to one microvolt. The measurement is in micro-volts.
In order to convert from volts to watts you have to assume some sort of impedance.
For a free space propagation of an RF signal we assume the impedance of free space (which is 376.73 ohms).
Voltage and power are related as follows...
W = V^2 / Z
Or
V = sqrt(W * Z)
Assuming a 1mW signal and Z = the impedance of free space we get...
V = sqrt(0.001 * 376.73) = 0.61378
in db-volts this is...
dBV = 20 * log10(0.61378) = -4.24dBV
To convert dBV to dBuV we change the reference from 1 volt to 1 microvolt, i.e. multiply the voltage ratio by 1 million (so add 120dB)
-4.24dBV = (-4.24 + 120)dBuV = 115.8dBuV
Therefore 1 mW (0 dBm) = 115.8dBuV when Z = 376.73 ohms
This matches the conversion chart you linked to.
Your derivation above comes up with the same answer if you put in 376.73 ohms for Z.
dBuV = dBm + 90 + 10*log(376.73) = dBm + 90 + 25.8 = dBm + 115.8
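The derived formula is easy to wrap in a function and check against the free-space number above (the 50-ohm case is added here as the common RF impedance, not from the answer):

```python
import math

def dbm_to_dbuv(dbm, z_ohms):
    """dBuV = dBm + 90 + 10*log10(Z), per the derivation in the question."""
    return dbm + 90.0 + 10.0 * math.log10(z_ohms)

print(round(dbm_to_dbuv(0.0, 376.73), 1))  # -> 115.8 (free space)
print(round(dbm_to_dbuv(0.0, 50.0), 1))    # -> 107.0 (50-ohm systems)
```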
H: Display Amber/Orange on LED Strip with no control module?
I have a set of LED strips that have the following five wires:
Positive
Negative
Red Channel
Blue Channel
Green Channel
Given these wires, I can produce the following colors without additional components:
Red
Blue
Green
Purple (Combine red and blue.)
Cyan (Combine green and blue.)
Yellow (Combine red and green.)
White (Combine red, green, and blue.)
However, the control module (now fried due to a crossed wire) has an option for amber/orange. The question is, how can I produce this color? I believe I may have to combine green and red to produce yellow, and add a resistor or similar to the green channel to reduce the current on that channel, thus displaying more red and possibly producing the desired color.
Additional Details
The lights are powered by a 12v source (car battery).
How can I use the above to create an amber/orange color from the lights?
If the answer is simply adding a resistor, what is the lowest resistor I should use in the 12v circuit?
AI: The resistor value you want will vary based on the amount of green you want, or the exact amber you want to see. It's safe enough that you can try a bunch of resistors to figure it out. Anything from 10 ohms to 500 ohms should be enough. You can put it on the negative side without issue.
Amber is, in RGB values, 255, 191, 0. So green is roughly 75% of full brightness. Reducing it just a bit should help produce a nice orange color.
Once you have found which value works best for you, measure the voltage across the resistor, and using Ohm's law you can find the wattage needed for the resistor: P = V^2 / R, or V * I. The more current your LED strip uses, the higher-wattage resistor you'll need.
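A small worked example of that sizing calculation (the 1 V across 100 ohms below is made up for illustration):

```python
def resistor_power_w(v_drop, r_ohms):
    """Dissipation in the dimming resistor: P = V^2 / R."""
    return v_drop ** 2 / r_ohms

p = resistor_power_w(1.0, 100.0)  # hypothetical measured drop and trial resistor
print(p)  # 0.01 W, so even a small 1/8 W part has plenty of margin
```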
H: Flux protection vs. sealed relay enclosures
When reading the datasheet for the Omron G5Q series relays, there are two options for enclosures: flux protection and sealed.
The price difference is almost a factor of two, with the former being the more expensive. Is there an application where flux protection is required, or where a sealed enclosure would suffice?
AI: Relays can have a number of different levels of sealing, from being completely open to the atmosphere, to full hermetic sealing. Which you need depends on your application, but also how you plan to solder it to the board.
The technical guide would seem to be your go-to source. Take a look at page 2 (by the document's page numbering), and see how they define their levels of sealing. I would assume "Flux Protection" maps to "Semi-Sealed", and "Sealed" maps to "Fully Sealed", but you may need to contact Omron if you need more assurance than that.
If I'm right, then the major difference is often that during PCB Assembly you can do automatic washing after reflow with sealed relays. But read through the technical data to be sure that you're making the correct choice for your application. Other factors like environment and conformal coating can play a part. |
H: What is the typical voltage of power distribution in rural North America?
I mean the electrical utility power that connects to the transformer outside of the home or other buildings. i.e. what is the primary winding voltage of the transformer having 220 VAC center-tap secondary?
I have heard as high as 13 kV, but that seems kinda high. Isn't it more like 4 kV?
AI: 12470V (delta)/7200V (Wye)
13200V (delta)/7620V (Wye)
24940V (delta)/14400V (Wye)
34500V (delta)/19920V (Wye)
Source is bulletin 1724-D-114, USDA rural distribution system.
H: In general, how do I know if my project needs a file system?
This question gives a good high-level overview of when an operating system is appropriate on an embedded platform. The three topics mentioned are networking requirements, GUI requirements, and file system requirements.
Networking and GUIs are obvious enough, but I am wondering when a file system is appropriate and/or required for an embedded system. What sort of embedded tasks lend themselves to a file system solution?
Maybe something like a data collection application, where previous measurements or log files need to be stored for future reference?
AI: The most obvious situations where I find a file system desirable:
Whenever you have to exchange data with an external system (eg. a PC). Definitely a FAT file system is the easiest solution when using an SD card or USB stick (example: datalogger, preparing config files on a PC).
You use NAND storage. You need a file system that is NAND-aware, because all NAND chips have bad sectors and the file system has to work around them.
H: Why we need to have a current output on power supply to drive some devices (such as curve tracer)?
Last week I had an experiment in the lab in which I needed to use a transistor curve tracer board, and thus was asked to connect it to a digital power supply.
I connected them and set the voltage output of the power supply to 12 V (as said in my lab notes) and 0 A. However, the tracer did not work correctly. I asked my mentor and he said I needed to set a current output like 1 mA to supply enough current, which finally worked.
My question is, why do we need to set a current output on the power supply, or why won't it output current automatically? I do understand that if the power supply has 0 A of current then, based on P = UI, it supplies 0 W of power. However, if I connect a 12 V, 0 A power supply to a 1 kohm resistor, won't it have a 12 mA current in the circuit, meaning the power supply provides current "automatically"?
AI: You mention a current output but that's not entirely correct. You are probably talking about a current limit setting.
That means the following. Say we set the supply to 12 V, 10 mA:
As long as no more than 10 mA is supplied to the load, the voltage will be 12 V. The supply will work in constant voltage mode, often shortened to CV.
If you connect a load that wants to make more than 10 mA flow at 12 V, the supply will lower the voltage such that 10 mA flows. It will be in current limiting mode or constant current mode, often shortened to CC.
In case you use a resistor as load we can use Ohm's law to find the resistor where the change between CV and CC happens: 12 V/10mA = 1.2 kohm, so a resistor with a value of more than 1.2 kohm will result in a current of less than 10 mA so it will get 12 V (CV). A resistor with a value below 1.2 kohm will want to make more than 10 mA flow at 12 V, the supply does not allow this and will reduce the voltage such that no more than 10 mA flows (CC).
However, if I connect a 12V and 0A power supply to a 1k ohms, it will have a 12 mA current on the circuit
No, that is not correct: if you set the current limit to 0 A then no current will flow.
To make the 12 mA flow you have to set the current limit to at least 12 mA.
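The crossover computed above (12 V / 10 mA = 1.2 kohm) can be sketched as a little model of an ideal bench supply driving a resistive load:

```python
def supply_mode(v_set, i_limit, r_load):
    """Return (mode, output voltage, output current) for a resistive load."""
    i_cv = v_set / r_load          # current the load would draw at full voltage
    if i_cv <= i_limit:
        return 'CV', v_set, i_cv   # constant voltage: limit not reached
    return 'CC', i_limit * r_load, i_limit  # constant current: voltage folds back

print(supply_mode(12.0, 0.010, 2000.0))  # above 1.2 kohm -> CV at 12 V
print(supply_mode(12.0, 0.010, 600.0))   # below 1.2 kohm -> CC at 10 mA, 6 V
```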
H: How can I use an IC family or adapter to switch automatically from solar cell to batteries & reverse?
simulate this circuit – Schematic created using CircuitLab
How can I use an IC family or adapter to switch automatically from solar cell to batteries and back?
if no load => solar cells charges the batteries
if there is a load
then
if p2 > p1 then trigger makes comparator connects load power from CELL
else (p1 > p2) then trigger makes comparator connects load power from batteries
else
CELLS charges the batteries
fi
fi
Is there some specialized IC to manage something like this as "comparator"
Is this a bad approach to the need? (And if so, please provide your ideas.)
This is not really similar to : this question ...
AI: You probably want a battery charger/manager IC.
They come in all sorts of flavors, so you'll need to look for one that meets your needs. They can be simple "smart" switches, or more complex, incorporating DC-DC converters and possibly even peak power tracking on the solar cell.
I would start with a Digi-key search for "battery charger" (https://www.digikey.com/products/en/integrated-circuits-ics/pmic-battery-chargers/781), and I would also browse the power management ICs from Analog Devices (https://www.analog.com/en/products/power-management/battery-management.html). I'm sure countless other IC manufacturers have good selections as well, but it will take some browsing to find the right part for your situation.
Here's an example of a part that might work for you:
H: Increasing Amp Output of a battery
So I have battery cells with a rated capacity of 3.5 Ah and a nominal voltage of 3.5 V. Now, say I have a system that requires 12 A for a period of 10 seconds. For this battery it is advised not to discharge beyond 2C, or the efficiency hit becomes unreasonable. From my understanding, I can increase the number of batteries in parallel to increase the capacity, but cannot increase the available current. Correct? Will this cell be unable to meet the 12 A requirement? I think I'm missing a concept here. At 1C this battery can discharge 3.5 A for 1 hour. So then can it discharge 14 A for 15 minutes? If that's the case, then can I not use a much lower C rate to achieve this requirement? At C/10 I discharge 0.35 A for one hour. Or it could be more if I decrease the time, possibly getting me my 12 A for 10 seconds. What is the limit on this?
AI: For this battery it is advised not to discharge beyond 2C
A 2C discharge rate for a 3.5 Ah battery would be 7A. So, the manufacturer is recommending that you do not draw more than 7A from a single instance of this battery.
From my understanding, I can increase the amount of batteries in parallel to increase the capacity, but cannot increase the available current.
This is partially correct. By placing multiple batteries in parallel, you do increase the capacity, and you CAN increase the available current. In fact, most battery packs have multiple cells both in series, to increase the available voltage, as well as in parallel, to increase the available current.
With two of your 3.5Ah batteries in parallel, you'd have 7Ah of capacity, and your 2C discharge limit would be 14A. Two batteries in parallel should be able to handle your 12A load safely.
PS: Just be careful when trying to recharge these batteries, especially if they are Lithium Polymer. LiPos require careful cell balancing in order to be charged safely.
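The arithmetic above, generalized (cell numbers from the question):

```python
import math

def cells_in_parallel(i_load, cell_capacity_ah, max_c_rate):
    """Minimum number of parallel cells that keeps each cell within its C-rate limit."""
    max_i_per_cell = cell_capacity_ah * max_c_rate  # e.g. 3.5 Ah * 2C = 7 A
    return math.ceil(i_load / max_i_per_cell)

# The question's numbers: 12 A load, 3.5 Ah cells, 2C discharge limit
print(cells_in_parallel(12.0, 3.5, 2.0))  # -> 2
```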
H: Find the best components and ICs
How does a newbie find the best components? Is there a site or something? The part numbers are not exactly user friendly. I know there are the classics, which I suppose improve over the years, but there are also modern alternatives...
AI: It depends vastly on what you are trying to accomplish.
The best component is one which can do the job you want it to do, for as long as you want to do it, where you want to do it, so to speak. There are some components, particularly aerospace-type microcontrollers come to mind here, that are extremely expensive. This is mainly due to the testing and additional radiation hardening involved, but there is no way you would use one in a completely unrelated application, like a TV remote or some such.
The best approach is to define a full and comprehensive specification for what you want to do, and then select components based on that. This can go out the window a little bit in industry, as some companies have huge stocks of some components they just want used, so you can see micros lurking in some devices that are definitely overspec for what they are doing.
After that, find a good online shop like Digi-Key, Mouser, element14 or RS, and use their parametric filters to select the right component.
Some things like capacitors perform best in certain frequency ranges. For example, electrolytics are only really suited to lower frequencies, such as ripple smoothing on power supplies, whereas ceramics are used for high-frequency applications, such as removing DC bias from an AC signal like audio. Carbon film resistors also tend to add noise, and wirewounds have an inductance all of their own...
I hope this helps; otherwise perhaps consider re-phrasing your question.
H: What PWM inputs are needed to drive a 3 phase BLDC using the HIP4086 IC driver chip?
I am building a motor controller for a BLDC (w/ Hall sensors) using a Teensy. My circuit looks like the typical application diagram in the datasheet (https://www.intersil.com/content/dam/Intersil/documents/hip4/hip4086-a.pdf):
My micro controller correctly reads in the Hall sensor inputs and can identify which position the rotor is in. My question is, if I want to pull (for example) Phase A high (+), Phase B low (-), and keep Phase C floating, what PWM should I be writing to the pins AHI, ALI, BHI, BLI, CHI, and CLI. I know this is probably clear in the data sheet, but I am having trouble understanding the switching logic of the triple half bridge.
AI: The schematic you show has the !HI and LO inputs paired together. This means that you only need one signal to drive each phase, but that either the top or bottom FET in each phase will always be conducting. This is common with a sinusoidal drive, where each phase is fed a continuously varying PWM to approximate a sine voltage at the motor terminals, but as you indicate that you want to have one phase floating, I assume you're aiming at a trapezoidal drive, which you can't achieve with that arrangement. You'll need to control all six inputs independently.
For each possible arrangement of Hall sensors, which can be disposed in various ways around the stator, and winding (phase windings can be connected either way into the wye or delta pattern, reversing the physical direction of rotation) there'll be a table of required switchings - this should be available from the motor manufacturer.
This is taken from the datasheet for the DRV8307 showing a typical table. Since your HI inputs are inverted, you'll need to invert the signals to them.
The note (1) about synchronous rectification is a bit of a misnomer (that's used in switchmode supplies) but operates the same in inverters - it switches on the FET that would otherwise have current circulating through the body diode, which reduces losses and improves efficiency.
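As an illustration only, here is what such a table can look like in code. This is one hypothetical hall-to-phase mapping, not your motor's; as noted above, the correct table must come from the motor manufacturer, and the HIP4086's inverted xHI inputs still need to be accounted for when generating the actual pin signals:

```python
# One *possible* 120-degree trapezoidal commutation table, keyed on the
# (H1, H2, H3) hall state: '+' = high-side FET on, '-' = low-side FET on,
# '0' = phase floating. Sensor placement and winding order change this.
COMMUTATION = {
    (1, 0, 0): ('+', '-', '0'),
    (1, 1, 0): ('+', '0', '-'),
    (0, 1, 0): ('0', '+', '-'),
    (0, 1, 1): ('-', '+', '0'),
    (0, 0, 1): ('-', '0', '+'),
    (1, 0, 1): ('0', '-', '+'),
}

def phase_drive(hall_state):
    """Return the (A, B, C) drive symbols for a given hall-sensor state."""
    return COMMUTATION[hall_state]

# Sanity check: every valid state drives exactly one phase high, one low,
# and leaves one floating.
for state, drive in COMMUTATION.items():
    assert sorted(drive) == ['+', '-', '0']

print(phase_drive((1, 0, 0)))  # A sourcing, B sinking, C floating
```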
H: STM32 SPI not working as I expect it should based on online reading
I am using an STM32F103C8 to connect to a Hope RF95W transceiver IC for the purpose of learning.
I am only trying to read the chip version register, then write a config register that has a 0x00 reset value and then read it to make sure my write code works.
The RF95W datasheet says about SPI transfer:
SINGLE access: an address byte followed by a data byte is sent for a
write access whereas an address byte is sent and a read byte is
received for the read access. The NSS pin goes low at the beginning of
the frame and goes high after the data byte.
MOSI is generated by the master on the falling edge of SCK and is
sampled by the slave (i.e. this SPI interface) on the rising edge of
SCK. MISO is generated by the slave on the falling edge of SCK.
A transfer is always started by the NSS pin going low. MISO is high
impedance when NSS is high. The first byte is the address byte. It is
comprises:
A wnr bit, which is 1 for write access and 0 for read
access.
Then 7 bits of address, MSB first.
To me this means that reading a register requires only one transfer. Sending the address of the register, then waiting for the SPI_I2S_FLAG_RXNE flag, then the value of this register will be in the SPI data register.
However what's happening is I need two sequences of these write/flag/read operations to get the value:
uint8_t read_register(uint8_t address){
uint8_t data;
GPIO_ResetBits(GPIOA, GPIO_Pin_3); // slave select (low)
delay_ms(100);
while(!SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_TXE));
SPI_I2S_SendData(SPI1, address); // send
while(!SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_RXNE));
data = SPI_I2S_ReceiveData(SPI1); // read received
while(!SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_TXE));
SPI_I2S_SendData(SPI1, address); // send
while(!SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_RXNE));
data = SPI_I2S_ReceiveData(SPI1); // read received
GPIO_SetBits(GPIOA, GPIO_Pin_3); // slave deselect (high)
return data;
}
Only by having these send/receive pairs twice can I get the expected value from the register. If I don't make the second send, the receive returns 0x00.
Writing is a little more straightforward, because it's a write of address, followed by a data write.
void write_register()
{
uint8_t numRead1 = 0;
GPIO_ResetBits(GPIOA, GPIO_Pin_3); // slave select (low)
delay_ms(100);
while(!SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_TXE));
SPI_I2S_SendData(SPI1, 0x40 | 0x80); // send
while(!SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_TXE));
SPI_I2S_SendData(SPI1, 0x7E); // send
while(!SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_RXNE));
numRead1 = SPI_I2S_ReceiveData(SPI1); // read received
GPIO_SetBits(GPIOA, GPIO_Pin_3); // slave deselect (high)
}
In this case, I expect numRead1 to be 0x00 which is the default value of the register I am writing to. A subsequent call to read_register with the same address returns the value I wrote to it.
But I am sure there's a fundamental mistake in what I am doing which if not corrected will lead to bigger problems down the road.
Any thoughts?
AI: The Hope chip has to receive the address of the register you wish to read; that happens on the first byte transfer. At that point the register value is loaded into the Hope chip's SPI shift register, to be transmitted while the next (dummy) byte is sent out by the master (since it's a read operation).
Alternatively, the next byte transmitted out can be another read/write operation, but the pump has to be primed, so to speak, with each read coming back on the following SPI byte transaction.
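The framing itself is easy to get right once the full-duplex behaviour is clear: the byte received while the address is being shifted out is meaningless, and the register value arrives only during the next byte. A sketch of the two-byte frames in Python (register numbers taken from the question; the wnr-in-MSB convention is from the datasheet excerpt):

```python
def read_frame(reg):
    """The two bytes the master clocks out for a single-register read:
    the address (wnr = 0), then a dummy byte while the slave shifts out data."""
    return bytes([reg & 0x7F, 0x00])

def write_frame(reg, value):
    """Address with wnr = 1, followed by the data byte."""
    return bytes([(reg & 0x7F) | 0x80, value & 0xFF])

print(read_frame(0x42).hex())         # version register: address, then dummy
print(write_frame(0x40, 0x7E).hex())  # matches the question's 0x40 | 0x80
```

In the OP's code the second `SendData(SPI1, address)` happens to act as that dummy byte, which is why it "works"; sending an explicit 0x00 makes the intent clearer.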
H: Taking Quizbowl buzzers and connecting to a computer
Recently I bought quizbowl buzzers from buzzersystems.com, and the buzzers I received use an RCA cable.
If you did not know: basically, when you click a button, it sends a signal to a box which receives it and lights up/makes a sound.
Is there any way that I can connect this to my computer so that every time I click the button, it sends the signal of the Enter key on my keyboard? Preferably something that uses USB to reconfigure the signal.
Specs
22AWG, uL2464#22x2c cord
Normally open, momentary push-to-close switch
0.050 ohm (max when new, measured at button)
* AC specs (current and voltage): 3A @ 125 VAC 1.5A @ 250 VAC (250VAC is max AC voltage)
* DC specs (current and voltage): 24 VDC (max),10A (max) Note: LOW DC VOLTAGE!
Does anyone know what type of normal RCA cable this is equivalent to? Audio, video, mono or dual channel, etc.
AI: Is there anyway that I can somehow connect this to my computer so that every time I click the button, it sends the signal of the enter key on my keyboard? Preferentially something that uses usb to reconfigure the signal.
Yes you can do both.
Find a secondhand USB keyboard. This shouldn't cost you anything. Open it up and remove the keyboard controller PCB.
Figure 1. (1) The PCB and contacts. (2) and (3) the matrix keyboard which you'll be dumping.
Examine the keyboard matrix before you dump it and see if you can find which contacts are the 'rows' and 'columns'.
Plug in the keyboard.
Go to http://www.keyboardtester.com/.
Using a wire select a row and jumper it to each column in turn while monitoring the online keyboard tester. If you don't find the Enter key on the first row then repeat the procedure for the next row, etc.
Connect the two Enter terminals to a phono socket and then you can plug in your original controller without modification.
I used the controller in the photo and the contacts were covered with some kind of coating which wouldn't take solder. It scraped off quite easily and I soldered the wires on.
I developed this technique to make a foot-operated switchboard for controlling a slide show while playing the guitar. I stopped testing when I found the combinations I wanted.
Table 1. The Lenovo keyboard matrix.
You could wire several of these controllers to send the key signals '1', '2', '3', etc., to a program or even to WordPad set to a very large font. You would see the keypress sequence on the screen as '213' or similar but if someone holds down the button you might get '21111113'.
H: Missing soldermask on some pads of mcu
After designing a board in KiCad with an STM32L0 on it, I sent it to seeedStudio, and when I received it I noticed that some of the pads on the STM32 are missing soldermask:
I checked the gerber file for the soldermask and there is no apparent hole in the STM32 pads:
I noticed that the pads missing the soldermask are tied together, but in two places there is also a pad missing soldermask that is not tied to the rest of the group, which makes this impossible to solder by hand.
Upon inspection with a microscope, I noticed that the missing areas all have the same shape, which looks like this:
(blue is pads and copper and black is the cutout in the soldermask)
What happened with my pcb?
Here is the back soldermask layer with a gerber viewer :
with the back copper :
AI: Notice that the images of your soldermask simply show large rectangles which span multiple pins. I'm assuming that this is accurate, although perhaps if you zoom in you would see that they are actually smaller rectangles which just look like they're connected.
Anyway, if they really are large rectangles, this instructs seeedStudio to remove these areas completely. That is, as submitted, none of the pins should have soldermask between them.
It looks like seeedStudio's algorithms added soldermask webbing for you where it seemed appropriate, but the algorithm failed to know your intention for the bridged pins.
I haven't used KiCAD, but either your footprint doesn't have individual mask openings, or the software is set to gang them all together (likely in the Design Rules).
Regardless, these can be soldered with care, so you don't necessarily need to scrap the existing boards...
Good luck :)
H: Voltage gradient across an antenna (and how it drives an AM radio)
Edit:
I was interested in how a voltage gap is maintained across an antenna. It seemed to me that an antenna might be like a wire, where if there's a voltage at the top, the same voltage would be at the bottom, and hence no gradient and no current flow.
Several individuals explained to me that an antenna is not a simple circuit element and goes beyond basic circuit analysis, and so there is no contradiction in it having different voltages at the top and bottom, which leads to current flow.
One question that still bothers me:
Let's say that at one moment there's a positive voltage at the top of the antenna (10V relative to the bottom, say), and current flows from top to bottom. But at the bottom, where the antenna is actually connected, the voltage is 0. That seems like the end of the journey, and it's unclear to me how the presence of a gradient in the antenna also creates a voltage gradient in the circuit that drives the circuit elements.
On the other hand in the pic below it makes sense to me as the circuit is connected at 2 points in the antenna and there's a voltage differential between the 2 points that drives current to the radio circuit:
Old question:
In circuit sims and in theory, ground always maintains zero volts.
Thus if the point to which the ground is connected has more than 0V, some current will flow into the ground, while ground still maintains 0V despite incoming current.
But it looks like that in most real cases ground is merely a theoretical construct, and unless the circuit is earthed, the point labeled as "ground" can't actually maintain 0V, if the point to which the "ground" is connected in the circuit is above 0V, and current flows into the "ground".
Is there an element or some way to get "real" grounding, such that when the "ground" is connected to say a 1V point in a circuit, the "ground" will maintain 0V, despite current flowing from the 1V point to the "ground"? Otherwise the point labeled "ground" is just a loose wire that will quickly acquire the 1V of the point it is connected to, and not maintain zero volts.
My motivation is radio antennas, where I'd like to have one end of the antenna connected to a permanent zero volts, so that when the top end of the antenna gets 1V, current will flow from top to ground, while the ground maintains zero volts despite the current flow.
On the left almost no current flows (just a tiny bit to get the other end of the wire to match the voltage of the source), while on the right current flows steadily. If this were a real circuit, what could I do to make it behave like the one on the right, and not like the one on the left? I have antennas in mind, which is why the simple solution of a two-sided AC voltage source (i.e. an AC 'battery') that maintains the voltage gap, like in the picture below, won't help.
Thanks for your help!
AI: New Question:
To answer your newer question, I think you'd be better off studying a little bit about the distributed element model and transmission-line theory first. From the linked Wikipedia article (emphasis mine):
The distributed element model is more accurate but more complex than the lumped element model. The use of infinitesimals will often require the application of calculus whereas circuits analysed by the lumped element model can be solved with linear algebra. The distributed model is consequently usually applied only when accuracy calls for its use. Where this point lies depends on the accuracy required in a specific application, but essentially, it needs to be used in circuits where the wavelengths of the signals have become comparable to the physical dimensions of the components. An often quoted engineering rule of thumb (not to be taken too literally because there are many exceptions) is that parts larger than one tenth of a wavelength will usually need to be analysed as distributed elements.
If you were to excite your antenna with a low frequency source (or a DC source, let's say), then your intuition is correct: current would flow for a very short period of time until the voltage across the antenna was stabilized. But when exciting it with an AC source of a suitable frequency (i.e. a frequency where the wavelength is, let's say four times as large as the antenna, making the antenna a quarter-wave in length), the voltage will never be the same across the whole antenna, since the source is changing as fast as it takes the wave to reach the top of the antenna. In other words, there is no way for the bottom of the antenna to be fixed at a total of 0V, but it can have a DC potential fixed at 0V (with an AC potential swinging above and below 0V). This is what The Photon was mentioning in his comment talking about driving the antenna relative to the actual earth ground:
simulate this circuit – Schematic created using CircuitLab
Here, the DC source is 0V (i.e. it could be omitted), meaning your antenna's bottom end is fixed at earth potential on average. The AC source will excite a sinusoidal voltage at the bottom of the antenna, swinging from +1V to -1V at 1GHz. We'll assume the antenna is a quarter-wavelength (at 1GHz, that's about 3 inches long). I've also included the typical distributed model for a transmission line. That is how regular wires are modeled once the frequency of excitation becomes fast enough.
In your second schematic, with diode-detector circuit, you say the circuit is connected at two points to the antenna, but that's not true. The antenna is only connected to the node at the top-left of the circuit. The resistors are not part of the antenna.
Old question:
A quick answer: I think a \$\frac{\lambda}{4}\$ (quarter-wave) monopole antenna precisely describes what you're asking about. In the case of a monopole antenna, the end of the monopole facing the ground plane is actually a node of the voltage standing wave. Call that point GND, and that point will "maintain \$0\text{V}\$"; it is the node of a standing wave (i.e. no change in magnitude), and by calling it GND you've defined it to be \$0\text{V}\$, full-stop.
I'm still not sure I entirely understand your question, and schematics don't necessarily convey what you're trying to get across. By definition, ground is \$0\text{V}\$. Any part of your schematic that touches a GND is fixed at \$0\text{V}\$, since a wire in a schematic is defined to be a perfect conductor. For example, in your first picture, exactly zero current flows because there is no loop. GND is just a symbol we use to define where \$0\text{V}\$ is, and nothing more. You can't have a GND in a schematic that has any potential other than \$0\text{V}\$.
I think your confusion stems from the following:
But it looks like that in most real cases ground is merely a theoretical construct...
In all cases, ground is merely a theoretical construct. |
H: Antenna Design Procedures
I live in a residential area and I want to test and design an antenna in my garage. I plan on using my signal generator as the power supply and a RTL - RDS dongle connected to my computer as the receiver. The signal generator spans from 0 to 60 MHz. The amplitude has a range of 5 to 20 volts. My question is will my em waves interfere with my neighbors? If so what can I do to test designs and not affect the neighborhood?
AI: Consider buying an antenna analyzer. This is exactly their use case.
An antenna analyzer is a device that injects a low power signal into an antenna and measures complex impedance. This is done over a range of frequencies.
As an example, mine cost a few hundred US dollars and works up to 500MHz. There are many brands at different price points, each using different methods for accomplishing the task, so it is worth reading reviews to understand what makes them special. Mine was a little more expensive, partially because of the extended frequency range and partially because it is capable of determining the sign of the complex impedance (i.e. whether the reactance is inductive or capacitive).
Regarding the voltage level, asked in a comment above, that really depends on the antenna's radiation resistance. The parameter of interest is the radiated power (\$P = V^2/R\$). I do not know what the FCC limit is, but it is likely below 0.05 watts, possibly far below. If R is 50 ohms (reasonable for a quarter wave dipole, but very low for a folded dipole) then the maximum rms voltage works out to around a volt.
H: Confused why outputs of a priority encoder could be X instead of 0 or 1
So this is the truth table given for the Priority Encoder:
and this is its logic diagram:
I am extremely confused with the part where the outputs x and y are labeled as X (where D0 D1 D2 D3 are all zero). First of all, how could an output be a "don't care" term? What that would mean is that when D0 D1 D2 D3 are all zero, x and y are either 1 or 0. But that does not make sense at all. How could an output have an indefinite value when we have definite inputs?
As you can see from the diagram, x = D2 + D3 and y = D3 + D1D2'. So if you put zeros to all inputs it is clear that x and y should be 0. Then why is it shown as X in the truth table?
AI: To me this is a by-product of all those logic synthesis programs (Quartus, Vivado, etc.). Because the priority of the V(alid) signal overrides the values of the x,y outputs, it effectively gives the synthesis algorithms another degree of freedom when optimizing the logic (this is an extremely simple version, of course).
In more complicated scenarios, if you say in your truth table that x,y must also be 0,0 when V is 0, then the program may over-constrain the logic map and as a result use more gates. Whereas if your truth table only cares that V=0 when presented with all-zero inputs, and the x,y values don't matter, you may save some gates (for other uses).
H: Quantization: levels or intervals?
Assume a 2-bit midrise quantizer with (voltage, codeword) pairs (-3,0), (-1,1), (1,2), (3,3), where the step size \$\Delta\$ is clearly 2V. Now, calculating PSQNR using the intervals, we have \$20\log_{10}(V_{peak}/q_{noise,peak})\$, so \$V_{max} = 3\text{V}\$, or \$1.5\Delta\$. Of course the peak quantization error is \$\Delta/2\$, so I should have \$1.5\Delta/(\Delta/2) = 3\$ inside the log.
Here's the problem: textbooks and almost every tutorial I have seen take \$2^N = 2^2 = 4\$ inside the log function (where N is the number of bits). I feel like the number of levels (4) gets confused with the number of intervals (3). I'm pretty sure I'm the mistaken one; can you explain to me how?
AI: Peak signal to quantisation noise ratio (I think that's what you mean by PSQNR?) is a rather unusual thing to calculate for signal converters. If you are truly interested in the peak signal and noise, then that comes out as 1.5delta for your example, and counting the intervals is relevant.
What we normally do in signal processing (radio, radar, audio, that sort of thing) where the energy of the signal is all important, is to use rms. It's useful to think about the signal potentially going a little beyond the peak converter levels, and so counting the levels is more appropriate.
Of course, when you get to practical word sizes, N being say 4 or more, the difference between \$2^N\$ and \$2^N-1\$ becomes insignificant.
Be very careful you understand which measure, peak or rms, your various text books are using, and why. Also check whether they are just making a large N assumption. |
H: How does this fire alarm system transmit data on two-wire?
In the fire detection industry, analogue addressable systems use control panels and detectors (and devices such as interfaces) which communicate with each other by means of a protocol.
What protocol is this?
Can anyone help me find information about it?
AI: What protocol is used is impossible to say unless I have access to the technical documentation of the units and/or can reverse-engineer them.
We can assume that it is some kind of communication protocol suitable for one shared connection (no separate TX, RX, clock etc. but one bidirectional signal).
With some electronics it is possible to make data and power share the same pair of wires. This is nothing new, the old wired telephone system already used it!
Here are some links that show how this can be done:
Transmit Digital Signals and Power Over Same Wires
Using SN65HVD96 to Create a Power-Over-Data and Polarity Immunity Solution
The basic idea is that the power is DC and the data is AC. With some electronics (that can be as simple as an RC filter or a transformer) the two (data, power) can be combined and separated in each device. |
H: Removing/Subtracting Lead Resistance Using Op Amp Circuit
I'm working with RTD sensor, and need to use Op Amp circuitry.
Can someone explain to me how this Op Amp circuit work to remove the lead resistance?
so that Rw1, Rw2, and Rw3 vanish, and hence the output from the op-amp is Vout3 = IREFF * RTD.
Please include the calculation, if possible...
AI: I'll just give an intuitive answer with some numbers.
simulate this circuit – Schematic created using CircuitLab
Figure 1. The circuit redrawn.
We'll leave out RW2 and R7 for now since there is no significant current flowing to the + input.
With 1 mA flowing through the RTD the voltages are as shown in Figure 1. Note that the bottom of the RTD is at 2 mV and the top at 102 mV.
The + input is also at 102 mV.
The right side of RW1 is at 104 mV.
The op-amp is in an inverting configuration. That means the output will move to whatever value makes the - input equal to the + input (102 mV). Since the left end of R5 is at 104 mV, 2 mV is dropped across R5; the same current flows through R6, so another 2 mV is dropped across R6, since R5 and R6 have the same value.
The result is that the output is 100 mV which is the correct value for 1 mA through 100 Ω.
R7 is included to remove any errors due to op-amp bias currents. The - input bias current is feeding into two 100k resistors so a value of half of 100k is used on the + input. The missing 0.1 Ω is presumably to allow for RW2. |
H: Supplying power to PCB with USB voltage (2.5V) regulator
I am designing a normal PCB. I want to power it from USB through a voltage regulator. I don't need to design the voltage regulator; I just want to add one to supply my PCB circuit. The output should always be 2.5V (not more than that). Which voltage regulator do you think would be good to add, and is there a CAD model of that part that I can use in my PCB design?
AI: After clarification from OP I understand that the supply voltage is from a USB, and it needs to be regulated down to 2.5V.
If this is the case, then it is easy enough. You can simply go to an online retailer such as Digikey, Farnell, Mouser or any of the others and look through their voltage regulator section.
They have parameter searches, so if you want a particular package due to space constraints, then you can select that. You can then view results based on lowest price first if cost is an issue.
If you know the current your load will be drawing and you know any other specifications that may be important, then put those in as well. The search results will show you all the available ICs that fit your specs.
If you don't have any constraints, and it is a simple enough project, then any old 2.5V regulator that accepts a 5V input will do. If you do have a pretty heavy load, then be sure to check the thing won't heat up too much! |
H: Writing to FLASH on MSP430F5529
I am having some trouble writing to FLASH on the MSP430F5529. I have gone through the examples provided by TI and read through the user guides and part datasheet, but have been unable to find the problem. My goal is to save the baud rate enumeration in FLASH and look it up on boot. Applicable code is below:
static const uint32_t baudRegister = 0x1980; // Baud rate register flash location (Info A)
//Change baud rate
void changeBaud(uint8_t baudEnum) {
eraseSegment(baudRegister); // Erase flash segment
writeFlashByte(baudRegister, baudEnum); // Write baud enumeration to FLASH
WDTCTL = 0; // Force uC reset
}
//Erase flash segment
void eraseSegment(uint32_t address) {
int8_t *flash_ptr = (int8_t *)address;
__bic_SR_register(GIE); // Disable interrupts
FCTL3 = FWKEY; // Unlock FLASH control regs
FCTL1 = FWKEY + ERASE; // Select segment erase
*flash_ptr = 0; // Dummy erase byte
FCTL3 = FWKEY + LOCK; // Lock FLASH control regs
__bis_SR_register(GIE); // Re-enable interrupts
}
//Write byte to FLASH
void writeFlashByte(uint32_t address, uint8_t data) {
uint8_t *ptr = (uint8_t *)address;
__bic_SR_register(GIE); // Disable interrupts
FCTL3 = FWKEY; // Unlock FLASH control regs
FCTL1 = FWKEY + WRT; // Enable writing of segment
*ptr = data;
FCTL1 = FWKEY;
FCTL3 = FWKEY + LOCK; // Lock FLASH control regs
__bis_SR_register(GIE); // Re-enable interrupts
}
changeBaud() is called from within main(), and it is passed an enumeration for the baud rate (example: 1 = 300bps, 2 = 600bps, ... 6 = 9600bps, 7 = 14400bps, etc).
When I run eraseSegment() the contents of the FLASH at address 0x1980 remain as 0xFFFF. I think this is expected (0xFFFF means cleared). However, when I run writeFlashByte() when I get to the line *ptr = data; it does not seem to do anything. While debugging I have confirmed that data has the correct enumeration value and ptr is the correct address, but for some reason writing data to *ptr does not load the memory with the value of data.
I am quite new to writing data to flash, so this one has me stumped. The code above is based on TI's examples available in MSPWare.
What might cause this problem?
AI: To erase and write to the Information memory, you have to clear the LOCKINFO bit in the FCTL4 register. This bit is usually cleared on reset already, but it doesn't hurt to check.
But information memory A is different (and 0x1980 is in that range). It has another lock bit LOCKA in FCTL3. This is set on reset and prevents erasing and writing to information memory A.
To clear that bit you have to write a 1 to it (counter intuitive).
I would suggest you use a different information memory location, as information memory A holds some calibration values from TI which you probably don't want to erase (if my memory serves me right, it's been a few years). |
H: What does "off-state output terminal voltage" in optocouplers mean?
I was looking at possible options of what optocouplers to use in my project. I was looking at these parts.
https://www.fairchildsemi.com/search/?searchText=MOC3023-M#N=&&Nao=0&&Ntk=All&&Ntt=MOC3023-M&&Ntx=mode+matchallpartial&&Nty=1&&showAll=true&&showHrd=false
I am looking for a zero-crossing optocoupler.
As I looked into their datasheets, I came across the term "off-state output terminal voltage". I tried googling the term, but it just redirects me to datasheets of other components. It also happens that this "off-state output terminal voltage" is the only major difference between the datasheets of the optocouplers that I'm considering.
Can I use the MOC3163M, which is rated for an off-state output terminal voltage of 600 V, even though my application uses 220 V?
AI: It refers to the maximum voltage you can have across the detector pin when it's turned off.
It is equivalent to the Vce max of a BJT or Vds max of a MOS.
Any value below it will work fine. |