H: How does a smart phone measure battery percentage?
A smartphone (assume it to be an Android) usually displays the battery percentage from 0 to 100%. I am assuming that this is the usable capacity of the battery. I have several questions:
How exactly does it measure the remaining capacity? Assuming a battery is rated to be 3.2V, it might be providing 3.3 V when fully charged and the minimum required voltage of the phone might be 3V. Does the 0-100% refer to 3V to 3.3V? Is this calibration made only once during the time of manufacture?
How is the remaining capacity measured so accurately? Are the measurements averaged over a period? If so, what is the sampling frequency and how many readings are averaged to get the final value?
How is the remaining capacity % measured when the phone is charging? I guess the output voltage might be different when it is charging.
How is calibration of battery percentage affected by ambient temperature? I am aware that batteries nowadays have temperature sensors. Is the temperature used to calculate the battery capacity remaining or the optimum charging rate?
Assuming battery consumption is not linear as the battery discharges... (intensive games, etc.), how does the smartphone handle variable discharge rates? There exist multiple apps to re-calibrate batteries. How do they work?
How does the OS determine battery usage of each app? Is it just based on CPU cycles and screen time or is there some actual measurement of power dissipation?
AI: As mentioned in the comments, it uses a fuel gauge. There are several algorithms for Li-Ion chemistry, and the gauge might use one or more of them.
The most basic one is the Coulomb counter. The fuel gauge has a current shunt with an amplifier and measures the consumed current, sums it over time and compares it to the programmed battery capacity.
An addition to the Coulomb counter is impedance tracking, in which the fuel gauge tries to measure the battery's impedance. A lithium-ion cell has a specific impedance for each state of charge, so an estimate can be made regarding the amount of charge left.
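A minimal C sketch of the Coulomb-counting idea (the names and the fixed-interval sampling are assumptions for illustration, not any particular gauge's firmware):
#include <stdint.h>
typedef struct {
    int32_t remaining_mAs;   /* charge left, in mA*s              */
    int32_t capacity_mAs;    /* programmed full capacity, in mA*s */
} fuel_gauge_t;
/* Called once per sample period; current_mA > 0 means discharging,
   current_mA < 0 means charging; dt_ms is the sample interval. */
void gauge_update(fuel_gauge_t *g, int32_t current_mA, int32_t dt_ms)
{
    g->remaining_mAs -= (current_mA * dt_ms) / 1000;
    if (g->remaining_mAs > g->capacity_mAs) g->remaining_mAs = g->capacity_mAs;
    if (g->remaining_mAs < 0)               g->remaining_mAs = 0;
}
/* State of charge in percent, the figure reported to the host over I2C/SMBus. */
uint8_t gauge_percent(const fuel_gauge_t *g)
{
    return (uint8_t)((100LL * g->remaining_mAs) / g->capacity_mAs);
}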
The fuel gauge usually has its own temperature sensor to compensate for various temperature related effects.
It reports its status via I2C (SMbus in some cases) to the main processor.
|
H: Compiling C++ on EFM32 microcontrollers
I would like to use some high-level libraries written in C++ for my EFM32 microcontroller. Unfortunately the Simplicity Studio IDE for the EFM32 does not seem to support C++.
I am a bit confused about whether it is still possible to develop in C++ for this microcontroller family. I am thinking that I could use the GNU ARM Embedded Toolchain to do this with a custom Makefile, but I am not sure whether it would work, or what arguments should be given to the different GNU tools (gcc, g++, ld, ...) to compile and link for a given microcontroller.
So first is it possible?
If yes, what are the high-level steps that must be done? I really would like to understand what is happening at compilation and linking time so do you have any resources on learning how to tweak a Makefile using the GNU Toolchain to compile for a given microcontroller?
AI: Of course it is possible to use GCC with a Cortex-M3 chip. (I assume the manufacturer has not deliberately kept something secret that you must know in order to do so).
In a nutshell, you must
get or write startup code (crt0)
get or write a linker script
find out what special command line options you need (-nostartfiles etc.)
find out how to download the application (probably as .hex file)
(optionally) write a makefile to automate the process
If you use any built-in libraries or code generation tools of the IDE you'll have to port that too.
As a start, check out any project that has combined GCC and the EFM32. RTOSes like FreeRTOS and uC/OS-II are a good place to look. In a comment markt suggests CooCox, also worth a look. It is much easier if you can start with a known-working set of scripts and make it your own one step at a time.
Been there, done that, but for the NXP LPC1114/LPC810: check www.voti.nl/bmptk. At the moment (December 2014) I am reworking it, but the .zip should work and can give you an idea of what is involved.
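To give a feel for the first step, here is a hedged, minimal C sketch of Cortex-M3 startup code; the symbols (_estack, _sidata, _sdata, _edata, _sbss, _ebss) are conventions that must match your linker script, and a real C++ build would additionally need to call the static constructors before main():
#include <stdint.h>
extern uint32_t _sidata, _sdata, _edata, _sbss, _ebss, _estack;
extern int main(void);
void Reset_Handler(void)
{
    /* Copy initialised data from flash to RAM. */
    uint32_t *src = &_sidata, *dst = &_sdata;
    while (dst < &_edata) *dst++ = *src++;
    /* Zero the .bss section. */
    for (dst = &_sbss; dst < &_ebss; ) *dst++ = 0;
    main();
    for (;;) ;               /* main() should never return */
}
/* Minimal vector table: initial stack pointer and reset vector. */
__attribute__((section(".isr_vector")))
void (* const vector_table[])(void) = {
    (void (*)(void))&_estack,
    Reset_Handler,
};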
|
H: RGB LED audio spectrum analyzer - Bandpass or FFT?
I'm about to build a coffee table which has an RGB LED matrix built into it, just like in this video.
https://www.youtube.com/watch?v=4NJDgvV8RZw
I want the matrix to display an audio spectrum that should be calculated in real time from a variable audio source over a 3.5 mm audio plug (e.g. smartphone, PC, MP3 player).
It should also be able to just display random and/or programmed visual patterns (not related to the audio input).
So I thought of using an AVR microcontroller as the main unit of this project.
The MCU will be programmed once, and does not have to communicate with other devices.
Buttons will be used to switch between modes and/or animations.
So the basic idea is there, but there are a few things that I'm not sure about:
Should I build a bandpass filter board to get the spectrum data and process it from there on with the MCU, or go for the FFT (complete-software) solution? Which one is faster / more effective? Does the speed difference matter in this kind of project?
(Yes, I know there are a couple of questions on the same topic, and I've read a few of them, but I think the answers weren't specific enough for me.)
How do I choose the number of bands for my spectrum analyzer (columns of the matrix)? Does it matter how many bands I choose (since I want to design the RGB LED matrix driver board and the table with its dimensions based on my selection)?
Thanks for your help and please excuse my bad English.
AI: Personally I would go (and have in the past gone) the FFT route. However I wouldn't use an 8-bit AVR, I'd use something somewhat faster.
My personal weapon of choice is the PIC32, which has more than enough power to do a good job of the FFT, but there are other good choices besides that - such as one of Atmel's 32-bit ARM chips (SAM3X for instance, like in the Arduino Due), or many of the other ARM chips that there are out there.
The trick with FFT is you need a good fast sample rate and enough memory to store a complex sample buffer. Say you want 1024 samples (which would give you 512 FFT buckets to play with), at 16 bits per sample, plus 16 extra bits for the complex FFT component, you're talking a minimum of 4KB of RAM just for the sample storage. Also, if you want it to be smooth then you want to be using DMA to read the samples in to one buffer while running FFT on another buffer, so a ping-pong double-buffer would increase that to 8KB.
With FFT you get half the number of buckets, or frequency ranges, as you have samples. The frequency range is also half the sample rate. So if you have 1024 samples recorded at 48KHz, that gives you a frequency range of 0-24KHz, with (24000/512=) 46.875Hz per bucket.
Reducing to 128 samples would give you 64 buckets, each at 375Hz per bucket.
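To make the band-count question concrete, here is a hedged C sketch of one simple way to fold the FFT buckets into a fixed number of display columns (linear grouping; a logarithmic mapping usually looks better for music, and the bin/band counts are just example values):
#include <stdint.h>
#define FFT_BINS   512   /* buckets from a 1024-point FFT          */
#define NUM_BANDS   16   /* columns on the LED matrix (assumed)    */
/* Collapse the FFT magnitude buckets into NUM_BANDS column levels
   by averaging the bins that fall into each band. */
void bins_to_bands(const uint16_t mag[FFT_BINS], uint16_t bands[NUM_BANDS])
{
    const int bins_per_band = FFT_BINS / NUM_BANDS;   /* 32 bins per column */
    for (int b = 0; b < NUM_BANDS; b++) {
        uint32_t sum = 0;
        for (int i = 0; i < bins_per_band; i++)
            sum += mag[b * bins_per_band + i];
        bands[b] = (uint16_t)(sum / bins_per_band);
    }
}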
|
H: What is a DC DC controller?
I have heard about DC-DC converters and I understand what they are, but what is a DC-DC controller? What is the difference between the two? Take this LM5085 for example; I cannot understand the major difference between the converter and the controller. Hoping for an explanation.
Thanks in advance!
AI: A switching regulator "controller" implements the logic portion of the overall converter inside the IC but does the heavy current switching using one or more external high current and/or high voltage semiconductor switches.
This allows the power that can be handled to be adjusted to suit the application by using more or less capable switches as required, and often achieves overall performance levels that are not practical within an IC of a given size. The overall result is that the task is partitioned into sections, each handled by components that can be price- and size-optimised compared with a single-IC solution.
The LM5085 that you cite (datasheet here) has a 4.5 V to 75 V input voltage range. Providing a power IC at this voltage rating is usually expensive.
A typical application circuit (from the data sheet) is shown below. This is rated at about 60 watts (5 A at 12 V out, as shown on the diagram), but by changing Q1 and L1 any sensibly desired power level can be achieved. The IC costs just under $1 in 1000 quantity at Digi-Key.
|
H: How to run ATmega328P-PU using 3.7v Lipo 600mAh?
I want to run a bare ATmega328P-PU from a 3.7 V 600 mAh LiPo battery. The LiPo is used to run a nano quadcopter. The discharge rate is 20C.
My Question is:
1. Will connecting the LiPo to my ATmega directly fry my chip?
2. Do I need any voltage regulator?
AI: Page 2 of your datasheet:
Operating Voltage:
1.8 - 5.5V
So yes, you can connect it directly without a voltage regulator.
|
H: Reading GPIO on TM4C123G board
With the TM4C123G board, I'm using Code Composer Studio v6. I want to read the GPIO pins that are connected to the user switches on the board (pin F4 and pin F0). The code at first did not work, but after a search on the Internet it worked once I added this function call:
GPIOPadConfigSet(GPIO_PORTF_BASE, GPIO_PIN_0 | GPIO_PIN_4, GPIO_STRENGTH_2MA, GPIO_PIN_TYPE_STD_WPU);
I opened the TivaWare documentation and kind of got an idea of what this function does. But my questions are:
Why specify the drive strength for an input pin?
And why should I choose between pull-up or pull-down resistors?
AI: Choosing a pull-up or pull-down for an input pin is a good idea if it is connected to a signal that can become tri-state / disconnected (like a switch). In that case an input without a pulling resistor will have an undefined value. If it is pulled down, it will read 0 when disconnected; if pulled up, it will read 1. As for the strength, it depends on your trade-offs. Generally, the lower the current, the less energy your circuit consumes and the cooler the components run. But if the pin should source/sink a higher current (to drive an LED, for example), it should be set to the appropriate current value. For input pins the strength parameter has no effect. As the TivaWare driverlib user guide puts it:
For pin(s) configured as input ports, the pad is configured as
requested, but the only real effect on the input is the configuration
of the pull-up or pull-down termination.
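For context, here is a hedged sketch of how the whole read path typically looks with TivaWare driverlib (standard driverlib calls assumed; note that on this LaunchPad PF0 is a locked pin that may also need the GPIO unlock sequence before it can be reconfigured):
#include <stdint.h>
#include <stdbool.h>
#include "inc/hw_memmap.h"
#include "driverlib/sysctl.h"
#include "driverlib/gpio.h"
void buttons_init(void)
{
    SysCtlPeripheralEnable(SYSCTL_PERIPH_GPIOF);          /* clock to port F */
    GPIOPinTypeGPIOInput(GPIO_PORTF_BASE, GPIO_PIN_0 | GPIO_PIN_4);
    /* Weak pull-ups: the LaunchPad switches pull the pins to ground. */
    GPIOPadConfigSet(GPIO_PORTF_BASE, GPIO_PIN_0 | GPIO_PIN_4,
                     GPIO_STRENGTH_2MA, GPIO_PIN_TYPE_STD_WPU);
}
bool sw1_pressed(void)
{
    /* Reads low (0) when the switch is pressed, high when released. */
    return (GPIOPinRead(GPIO_PORTF_BASE, GPIO_PIN_4) & GPIO_PIN_4) == 0;
}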
|
H: Meaning of TLC59281 LED supply voltage rating
TL;DR: Does the max VO of 17V refer to the LED supply voltage or only to what's left after the diode drops?
TI's TLC59281 is more or less a device that combines two 74HC595-like shift registers with constant-current LED drivers.
The datasheet advertises the feature "LED Power-Supply Voltage up to 17V"; in the stats, this seems to correspond to VO, "Voltage applied to output", which is 17V max.
Ignoring LED failure modes, which I'll admit I may not have considered thoroughly, these sound like conflicting figures: the full LED supply voltage is never actually applied to the driver output. The max LED supply voltage seems like it should be 17V plus the sum of all of the LED drops. For example, if I have a string of 10 white LEDs with Vf of around 3.6V each, it should be perfectly acceptable to supply them with 48V (because 48 - (3.6 × 10) = 12 < 17).
(A string driver that only supports 4 LEDs per string doesn't seem all that useful…)
AI: 17V really is the maximum voltage which can be used to supply the LEDs with this IC.
When an output is turned off, then the LED supply voltage is applied to that output pin (since no current flow means no voltage drop across the LEDs).
This IC isn't intended to drive long strings (at least not without a little extra voodoo on its outputs) - it's not a 'string driver'.
|
H: PCB Layout, Functional vs Actual Pin Order
I'd like to continue this conversation here:
CAD Schematic pin arrangement approach
Basically, there are 2 major ways to do schematic symbols, either with true pin order or by grouping pins by their functions. More people seem to advocate functional groupings over true pin order because of the clarity it can give your schematic.
For the first PCB I did a few years back, I used functional groupings on the schematic; then when I switched over to do the PCB, the number of crossings and vias made it a complete mess. Since then I've made every single schematic symbol true pin order, then focused on making sure as few lines as possible on the schematic cross, which minimizes the crossings on the PCB. This has produced very clean and efficient boards; however, I don't think my method will scale well when doing very large boards.
For those of you that use functional pin groupings, how the hell can you produce an efficient PCB without too many crossings or vias? For instance, PORTB on an ATmega could be spread out on all 4 sides of the IC. If you group them together as a functional block you may assign them to one device then have an absolute nightmare when you switch over to do the routing. This easily could've been avoided if you noticed: "Hey, PB0, PD7 and PD6 are all right next to each other, maybe that'd be the better choice."
So I'm not trying to convince anyone of anything, I just recognize that the more professional route is functional pin groupings, but I simply don't know how to translate that to an efficient PCB.
I've included one of my schematics as an example of how I do it. This board was 1 square centimeter and so every single via was major real estate.
Thanks in advance.
AI: This is a great example of why the same engineer that designed the circuit should do the layout and routing. The schematic is for communicating the circuit, so pins should definitely be arranged functionally without regard to physical pin order. This is much better for most uses of the schematic.
However, layout and routing is not one of those uses. The solution is to see this as part of the layout and routing problem, not something to be pushed back onto the schematic. Most of the time, the pins of a microcontroller port, for example, are near each other. But sometimes not. You have to look at the ratsnest when doing layout, and see if there might be ways to simplify it without impacting the circuit. Yes, you might end up going back and forth between the schematic and the layout, and moving nets around between pins.
Fortunately, this sounds like a bigger deal than it really is. Most of the time, arbitrary connections can be accommodated well enough. Sometimes you have a really tight board with few layers that is really cost-sensitive. In that case, you spend the extra engineering effort to swap things around to simplify the board. Again, that's the unusual case. Most products aren't high enough volume to justify this level of optimization during engineering.
Added
I should have mentioned this earlier, but for dense designs I take some care assigning pins up front. I recently put a 64-pin microcontroller on a 4-layer board. With so many connections in a tight space, routing within an inch or so of the chip is a serious issue. It's easy for signals to get blocked in, requiring significant re-routing of other signals.
What I did was print out the pinout diagram from the datasheet as large as possible. Then I wrote labels around the chip indicating in what directions the various other subsystems would be on the board. For example, "EEPROM" at top right, display processor at top left, service port at lower left, etc. I also created a list of all I/O signals required by each block.
To assign pins, I first crossed off those that had to be fixed. Some of the subsystem positions were suggested by the fixed pin assignments. Yes, this is an iterative process. Once that was all set, I started assigning nearby pins to the I/O lines of the various subsystems. You want to do this in pencil since this is also somewhat iterative. For example, you may find that you should have started assigning the UART pins more to the left, since other things have higher demand to the right and you are running out of pins there.
For a micro this complicated, I dedicate a whole sheet just for the immediate connections of the micro. This shows the power, ground, bypass caps, crystal, programming header, and the like. The I/O connections are just named lines that go to other pages in the schematic. This page is labeled something like "Main controller". The next page perhaps "Main controller peripherals", which shows the things connected to I/O lines that don't need to go anywhere else in the schematic. Examples might be the external EEPROM, the status LED, a relay with contacts brought out to a customer connector, etc.
Note that the above method requires some idea of the layout before assigning pins. Again, this is an iterative process. In this case, I left the flexible I/O signals unconnected to the processor when starting layout. I chose locations for subsystems according to where they connected to the external world, where they would be out of the way when their location didn't matter, etc. The processor was then oriented based on where the fixed things around it, like the crystal and load caps, could fit best. That's when the process described above was started.
This was a rather extreme case. Most of the time I don't need to be this deliberate about assigning I/O pins. For smaller processors where there is less routing congestion, I usually just assign pins in the schematic, then deal with it in the layout. That may mean a few extra vias, but for many boards that's not a big deal.
|
H: How does water damage phone lines?
Today we replaced the phone line, which was damaged due to moisture and was causing a great deal of disturbance and noise on the phone. What exactly causes this noise? Do the signals get broken by the water entering the phone line?
AI: Too long for comment:
To expand on @passerby a bit - the noise you hear is the result of the water forming a partial short-circuit between the two wires that come to your house from the telephone exchange; because of various chemical effects due to electrolysis, gassing, etc., this is a lot noisier than bridging the line with an equivalent resistor. Often the characteristics of this partial short are such that it will 'ring on', i.e. application of the higher-voltage AC ringing will cause the resistance to drop; this appears to the exchange equipment as if you have answered the phone, and it will stop the ringing current and open the transmission circuit. This kind of fault is called 'Ring Trip' in (British) telephone engineering jargon.
|
H: For embedded code, why should I use "uint_t" types instead of "unsigned int"?
I am writing an application in C for an STM32F105, using gcc.
In the past (with simpler projects), I have always defined variables as char, int, unsigned int, and so on.
I see that it is common to use the types defined in stdint.h, such as int8_t, uint8_t, uint32_t, etc. This is true in multiple APIs that I am using, and also in the ARM CMSIS library from ST.
I believe that I understand why we should do so; to allow the compiler to better optimize memory space. I expect there may be additional reasons.
However, because of C's integer promotion rules, I keep running up against conversion warnings any time I try to add two values, do a bitwise operation, etc. The warning reads something like conversion to 'uint16_t' from 'int' may alter its value [-Wconversion]. The issue is discussed here and here.
It doesn't happen when using variables declared as int or unsigned int.
To give a couple of examples, given this:
uint16_t value16;
uint8_t value8;
I would have to change this:
value16 <<= 8;
value8 += 2;
to this:
value16 = (uint16_t)(value16 << 8);
value8 = (uint8_t)(value8 + 2);
It's ugly, but I can do it if necessary. Here are my questions:
Is there a case where the conversion from unsigned to signed and back to unsigned will make the result incorrect?
Are there any other big reasons for/against using the stdint.h integer types?
Based on the answers I'm receiving, it looks like the stdint.h types are generally preferred, even though C converts uint to int and back. This leads to a bigger question:
I can prevent the compiler warnings by using typecasting (e.g. value16 = (uint16_t)(value16 << 8);). Am I just hiding the problem? Is there a better way to go about it?
AI: A standards-conforming compiler where int was anywhere from 17 to 32 bits may legitimately do anything it wants with the following code:
uint16_t x = 46341;
uint32_t y = x*x; // temp result is signed int, which can't hold 2147488281
An implementation that wanted to do so could legitimately generate a program that would do nothing except output the string "Fred" repeatedly on every port pin using every imaginable protocol. The probability of a program getting ported to an implementation which would do such a thing is exceptionally low, but it is theoretically possible. If one wanted to write the above code so that it would be guaranteed not to engage in Undefined Behavior, it would be necessary to write the latter expression as (uint32_t)x*x or 1u*x*x. On a compiler where int is between 17 and 31 bits, the latter expression would lop off the upper bits, but would not engage in Undefined Behavior.
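A small, self-contained C illustration of the two safe spellings (46341 is simply the smallest value whose square overflows a 32-bit signed int; this sketch is not part of the original question's code):
#include <stdint.h>
#include <stdio.h>
int main(void)
{
    uint16_t x = 46341;
    /* uint32_t bad = x * x;  -- on a 32-bit-int machine both operands promote
       to signed int and the multiply overflows: Undefined Behavior. */
    uint32_t good1 = (uint32_t)x * x;   /* force the arithmetic to be unsigned */
    uint32_t good2 = 1u * x * x;        /* also defined, but may truncate on a
                                           17..31-bit-int implementation       */
    printf("%lu %lu\n", (unsigned long)good1, (unsigned long)good2);
    return 0;
}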
I think the gcc warnings are probably trying to suggest that the code as written is not completely 100% portable. There are times when code really should be written to avoid behaviors which would be Undefined on some implementations, but in many other cases one should simply figure that the code is unlikely to get used on implementations which would do overly annoying things.
Note that using types like int and short may eliminate some warnings and fix some problems, but would likely create others. The interaction between types like uint16_t and C's integer-promotion rules is icky, but such types are still probably better than any alternative.
|
H: Gold coated pads
Is there any difference between a gold-coated pad and a gold pad? I have been ordering boards from OSH Park; they are really good quality. I'm just wondering what benefits this has over silver or bare copper pads.
What are the benefits of using a gold-coated pad? I do know it must have better conductivity and probably lower resistance, but does the coating really achieve all this? Doesn't the material below the coating affect the conductive properties of the tracks?
AI: The main purpose of the coating on the pads is to prevent corrosion.
Copper oxidises very easily in a normal atmosphere - it goes green. By adding a coating on the top you are sealing the copper away from the atmosphere and protecting it.
You have a choice of coating, and coating methods, and each has its own benefits.
There are basically 3 methods:
Hot-Air Solder Levelling (HASL)
Electroplating
Chemical plating
The most common "silver" one you see is HASL. This basically involves covering the pads with a thin layer of solder and using hot air to reflow it and level it off.
Chemical plating - dipping the board in a tin solution - is the most common "home brew" method since it is easy to do and the chemicals are easily available.
Electroplating is usually used with gold. Similar to silver and gold plated jewellery. It's very very expensive.
The gold chemical plating (ENIG) is the most common way of depositing gold. It is not as expensive as electroplating, but it does still give very good results. Not only does it give a good clean low resistance contact, but the perfect flat surface is also good for pick and place machines - especially with fine pitched leaded surface mount components (such as TQFP) where the slightly raised pads of the HASL can cause misalignments to happen.
HASL is cheap though, and still gives "good enough" connections.
Also the gold pads just look so much cooler.
|
H: uC Datalogger to GUI for dataprocessing
So I looked around electronics.stackexchange for this one and here is where I landed:
What type of USB Protocol for Datalogger
Basically, I would like to sample a voltage (specifically, from a photodiode through a trans-impedance amplifier) and send it to the PC. Though I don't just want to send it to a hyperterminal; I want to graph it in my own GUI and do fun things like showing an FFT of the data. So I see two parts to this project: microcontroller and GUI.
Thing is, I am not sure where to start. The microcontrollers I have experience with are the BASIC Stamp 2, PICAXE, and some Arduino. But I want to use something that would be more popular in industry. So I decided that I would perhaps go with the dsPIC series and learn it through mikroC's online book.
I was going to learn to use HID support on the dsPIC and send it to the PC (I know, easier said than done). I read info on the internet that CDC is better since it is not interrupted but I like the idea of not having to download drivers. With a quick google search, I see that other companies make HID dataloggers so it must be okay?
As for the GUI, I have basic academic experience with C++ and Java. I have used Java Swing to make some desktop GUI applications but they were fairly simple compared to what I want to do here. I am also not sure how to "read" the USB port for data to use as inputs for my GUI. I was thinking that I was going to use (well, once I learn it) C# since I read somewhere that it is fairly easy to grab data from a USB port for a program and show in a GUI. I was going to start with the GUI part and attempt to use dummy data like a mouse to make sure it works, maybe even use an actual datalogger for testing. Then muddle with the dsPIC part.
So I guess my bottom line question is am I approaching this the right way? Is what I proposing even possible?
AI: If you are sending serial data to your computer, over USB or not, you just need to write computer programs that can parse the incoming messages. You can refer to books on compiler construction for how to write the parser, and to operating system manuals for how to interface to the serial port.
If you need faster speeds, like full USB 2.0 480 Mbps, you can look into libusb (for OS X and Linux), IOKit (for OS X) and the Windows DDK, as well as the manual for your microcontroller, to learn how to send raw USB packets that can take advantage of the full USB bus speeds.
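As a hedged starting point for the parsing side, here is a minimal POSIX C sketch that opens a serial device, sets the baud rate, and parses one ASCII sample value per line; the device path, baud rate and line format are assumptions, and the same structure carries over to C# or Java serial APIs:
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <termios.h>
int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);   /* assumed device node */
    if (fd < 0) { perror("open"); return 1; }
    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B115200);          /* assumed baud rate */
    tcsetattr(fd, TCSANOW, &tio);
    char line[64];
    size_t n = 0;
    char c;
    while (read(fd, &c, 1) == 1) {
        if (c == '\n') {
            line[n] = '\0';
            int sample = atoi(line);     /* parse one ADC reading */
            printf("sample = %d\n", sample);
            n = 0;
        } else if (n < sizeof line - 1) {
            line[n++] = c;
        }
    }
    close(fd);
    return 0;
}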
|
H: Op-amp: amplifying DC bias problem in output?
I'm trying to do a simple 1 kHz, 1 V p-p sine wave amplification circuit using a TL071 or similar op-amp. The simulation is currently in LTSpice.
I've biased the VCC to half to try to achieve a good voltage swing but the waveform clips; it seems that the op-amp is amplifying my DC bias also.
I don't expect this op-amp to be rail-to-rail, of course.
First, here are the output waveforms:
V(out) is the small output signal, red is the AC sine wave, green is the biased waveform, centered at VCC/2.
Here's my circuit:
Any help? I'm new to this world of op-amps and struggling a bit.
AI: The 'ground' on R3 becomes the 'zero reference' for the amplification. This should be connected to Vcc/2. Otherwise you will get strange results (what you are seeing).
There is some good information here about how to generate a virtual ground for this type of single-supply circuit: http://tangentsoft.net/elec/vgrounds.html
|
H: PCB Tinning to increase current
How can I design a PCB in order for it to have this kind of PCB Tinning finish?
I do know this decreases the resistance of the tracks and increases the amount of current they can handle; however:
Do I need to leave the solder mask below and do it by hand?
Do I need to remove the solder mask?
Can this process actually be done at the fab house?
Edit 1-----------------------------------------------------------
Now an interesting question: is there any kind of design rule to follow when using this HASL finish? I mean, the rule is 40 mils per amp; if I get a 40 mil track with a HASL finish, can I actually put 2 amps on it? If the answer is yes, isn't the solder's conductivity lower than that of the copper itself?
AI: While DrFriedParts provided an answer to the question you asked, I feel I ought to respond to your premise instead. Specifically, "I do know this decreases the resistance of the tracks and increases the amount of current it can handle", while technically true, is not a reason to specify HASL. Let us consider. A 1 oz copper trace has a thickness of 1.37 mils and a bulk resistivity of ~0.017 µΩ·m. A typical HASL layer is 0.1 to 0.3 mils, and its bulk resistivity is ~0.17 µΩ·m. So HASL will add about 15% to the cross-sectional area of the trace, but with a resistivity about 10 times greater than the copper. The result is that the resistance of the trace is reduced by less than 2%, and the current-carrying capacity is increased by the same amount.
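A quick sanity check of those figures (a sketch using the approximate thickness and resistivity values quoted above):
#include <stdio.h>
int main(void)
{
    /* Per unit length and width, a layer's sheet conductance is thickness / resistivity. */
    const double rho_cu = 0.017e-6, t_cu = 1.37 * 25.4e-6;   /* ohm*m, m (1 oz copper) */
    const double rho_sn = 0.17e-6,  t_sn = 0.20 * 25.4e-6;   /* ohm*m, m (HASL layer)  */
    double g_cu = t_cu / rho_cu;          /* sheet conductance of the copper */
    double g_sn = t_sn / rho_sn;          /* sheet conductance of the solder */
    printf("resistance drops by %.1f %%\n", 100.0 * g_sn / (g_cu + g_sn));
    /* prints roughly 1.4 %, i.e. "less than 2%" as stated */
    return 0;
}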
Trust me, if you absolutely must have the extra 2% you are doomed.
HASL is a perfectly reasonable finish, but it does not do anything noticeable for your current-carrying capacity.
|
H: Circuit Requirements for a 30 min continuous on off cycle
I would like to build a circuit (preferably using components from old common electronics like a VCR or whatever) that continuously cycles on/off in ~ 30 min intervals. Input voltage is from a 12v groundless DC inverter that's plugged into the wall. The 30 min on/off cycle is to power a 12V 0.23amp computer fan. A period of 60 min with a 50% duty cycle would seemingly work fine here.
The optimal answer would include additional information on how I could modify the circuit (if needed) to increase the time delay. Thank you in advance!
AI: Here is a simple circuit for you:
simulate this circuit – Schematic created using CircuitLab
Then program the ATtiny85 to flip the I/O pin on and off every 30 minutes. The Arduino software is more than good enough for this.
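A minimal Arduino-style sketch of that idea (the pin number is an assumption, and the output drives the fan through a transistor or MOSFET as in the schematic):
/* Toggle an output every 30 minutes: fan on for 30 min, off for 30 min. */
const uint8_t FAN_PIN = 0;                                   /* ATtiny85 PB0, assumed */
const unsigned long HALF_PERIOD_MS = 30UL * 60UL * 1000UL;   /* 30 minutes            */
void setup() {
  pinMode(FAN_PIN, OUTPUT);
}
void loop() {
  digitalWrite(FAN_PIN, HIGH);   /* fan (via a transistor/MOSFET) on */
  delay(HALF_PERIOD_MS);
  digitalWrite(FAN_PIN, LOW);    /* fan off                          */
  delay(HALF_PERIOD_MS);
}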
My personal favorite microcontroller for this kind of task is AVR, but any microcontroller can do this.
Even being an 8-pin device, the ATtiny85 (or the PIC the other answer suggested) still has enough I/O lines to allow an external device to talk to it, changing both the frequency and the duty ratio.
If you don't want to power this from a phone charger, a fully charged 18650 cell can keep it running for months if not years.
|
H: What would be the best microcontroller for running PID calculations?
I need a microcontroller for my line following robot.
Which one would be the best considering a price below $30 and preferably >120MHz clock?
AI: You can run PID on just about anything, it's mainly a question of how quickly you need to update. However, 120 MHz is overkill for a line following robot, unless it has to be doing a ton of other processing.
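For a sense of scale, a complete PID update is only a handful of multiply-adds per control cycle, which is why nearly any MCU can keep up with a line follower; a hedged C sketch:
typedef struct {
    float kp, ki, kd;      /* controller gains              */
    float integral;        /* accumulated error             */
    float prev_error;      /* error from the previous cycle */
} pid_state_t;
/* One control-loop iteration; dt is the loop period in seconds.
   For a line follower, "measurement" might be the line-position reading
   from the sensor array and the return value a steering correction. */
float pid_update(pid_state_t *p, float setpoint, float measurement, float dt)
{
    float error = setpoint - measurement;
    p->integral += error * dt;
    float derivative = (error - p->prev_error) / dt;
    p->prev_error = error;
    return p->kp * error + p->ki * p->integral + p->kd * derivative;
}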
|
H: 4518 and Arduino
First off, a disclaimer: I'm a computer scientist in the process of taking my first baby steps in electronics.
I'm trying to hook up a 7-segment display with a 4511, a 4518 and an Arduino.
The 45-series chips and the 7-segment display are receiving 12 V, while the Arduino is getting 5 V.
I've confirmed that each part of the system works as intended in isolation, i.e. when clocking the 4518, it increments the binary output, which is correctly fed into the 4511 and on to the 7-segment display.
The Arduino currently does nothing but output a square wave; my questions are regarding the connection between the 4518 and the Arduino.
Can I safely connect the Arduino GND and the 12V GND?
Can I connect the 5v Arduino output pin directly to the 4518's input clock pin?
If (answer(Q2) == false): How should I do the connection?
(Assuming answer(Q1) == true): Via a transistor?
(Assuming answer(Q1) == false): Via an optocoupler?
I haven't decided on the source of my 5V and 12V, but for the above questions, let's assume an ATX power supply (i.e. molex {5v, GND, GND, 12v}).
AI: Yes - you must connect all the grounds together.
No - the Arduino's 5 volt high (probably) won't be recognized as a high by the 4518 running on 12 volts.
Use an NPN transistor and a pull-up resistor:
simulate this circuit – Schematic created using CircuitLab
Note that the transistor will invert the signal - a high output from the Arduino will be a low input to the 4518.
|
H: H-Bridge vs Mechanical Speed Controller
What's the difference, and what are the trade-offs, between an H-bridge motor controller and a mechanical speed controller for controlling a brushed motor in an RC car?
I have a cheap RC car that I'm trying to retrofit with an Arduino to convert it into a semi-autonomous robot. The car comes with a cheap mechanical speed controller, which seems to be essentially a rotary potentiometer mounted on top of a servo. The controller is also wired into an enormous resistor housed in an aluminium casing. Its motor is a brushed RS-540SH, which the specs say has a stall current of 42 A, which I'm guessing is why it has such large resistors.
I could easily wire the mechanical speed controller's servo into an Arduino, but would there be any benefit to using an H-bridge motor controller instead? Based on what I know about each, the H-bridge benefits would be:
completely solid-state, no slow servo or unreliable potentiometer involved
ability to electronically brake the motor
while the benefits of the mechanical controller would be:
it's already there and installed, so I don't need to do anything
only uses a single DIO pin on the Arduino (vs 3 with an H-bridge)
completely isolates the motor from interfering with all other onboard electronics
Am I missing anything?
I have an L298N-based H-bridge on hand that's rated for well over the motor's maximum voltage, but it can only handle at most 2 A, which is much less than the motor's 42 A stall current. Would that be safe to use, or would I be at risk of burning out my H-bridge?
AI: More advantages of using an electronic controller:-
Much finer speed control (mechanical controllers only have 3~4 steps)
More consistent torque and rpm at low speed (PWM provides better voltage regulation than a resistor)
Safer. Motor will not start if there is no signal.
More efficient at part throttle (resistors waste power!).
The L298N is an old design that is inefficient and much too weak for your motor. You should either use a commercial ESC designed for RC cars (eg. Tamiya TEU-105BK) or build your own H-Bridge controller with FETs rated for a continuous current of 80A or higher.
|
H: Does a 0.7 V supply on 5 V devices damage the device?
In a certain state of my circuit there will be 0.7 V on the supply pins of certain ICs (comparator & current sensor). Of course the ICs won't work at this voltage, but does this damage the ICs, or can I ignore it? The voltage is caused by a diode (it's probably leaking some current) & a DC-DC converter.
AI: Yes and no.
The supply voltage itself shouldn't cause any damage, no. However, any signals being fed into the chips which are greater than 0.7V could cause damage.
For instance, many chips have input values listed in their datasheets as "\$V_{DD}+0.3V\$", which means that for chips like those, any voltage greater than 1V applied to its input pin when being powered from 0.7V will be out of specification and likely to cause damage.
So check the datasheets for your chips, and your circuit, to make sure that you're not going to fall foul of that.
|
H: Is controlling a voltage regulator's EN/SHDN terminal directly by 3.3V microcontroller possible?
Let's take the LT1763. The datasheet says that "The SHDN pin can be driven either by 5V logic or open-collector logic with a pull-up resistor."
On the other hand, the chart "SHDN Pin Threshold Off-to-On" says about 0.7 V at 25 °C, suggesting 0.7 V should be enough to drive the SHDN pin high.
The question is whether 3.3V logic is enough to drive this line reliably? If not, which chart should I study?
I need the device to be normally off and to enable it from the microcontroller on demand. If 3.3 V cannot drive the SHDN pin correctly, then I'll need to use an NPN and put the chip's power supply between Vin and the NPN's collector (for GND).
AI: Yes, 3.3V should work. The datasheet says:
So as long as you can drive the \$\overline{SHDN}\$ pin lower than 0.25V when turning it off and higher than 2V when turning it on, it should work as expected.
|
H: 555 timer one-shot 30 second light LED strip, TOTALLY CONFUSED!
I'm working on a personal project and I am at my absolute wits' end here. I have a configurable N.O. or N.C. switch (nteinc magnetic alarm reed switch) so I'm not too concerned with which way I use it.
My goal is as follows:
When the two parts of the switch are separated from each other an LED strip turns on for 30 seconds and then turns off until the switch pieces are brought together again and then pulled apart again at which point it would then turn on for another 30 seconds...repeat indefinitely.
What I've tried:
I've calculated that I'll be using a 1 MΩ resistor and a 22 µF capacitor to generate a 24.2 second pulse (which is close enough to 30 for me right now). I've attempted to hook up the following circuits:
http://electroschematics.com/wp-content/uploads/2010/11/555-timer-positive-trigger.jpg
http://forum.allaboutcircuits.com/attachments/ne555-power-on-one-shot-png.38684/
and a few more this system won't let me link to
but they do not perform as expected, and either they only turn the LED strip on/off with the switch, without the timer, or they run indefinitely.
Power Supply: 12v worth of AA batteries
LED Strip: 9-12v (preferably closer to 12v)
LED Strip: 30mA
I'm not entirely sure what I'm missing here, but I would greatly appreciate an education as well as a reference circuit. I think this has to be very common because it's almost like an intrusion alarm, where if the window is opened an alarm sounds for a period of time, so I'm not sure why I'm unable to get this to behave the way I want to.
Thanks in advance! You'll be saving my sanity here, I've been at this all day.
AI: This will work for you:
Since you can configure your switch to be either Normally Open or Normally Closed, configure it (S1) so it's normally open, and when you push it closed the 555 will generate a 30 second long pulse which will light the LED for that time, no matter how long or short the time you keep S1 made.
The 555 needs to see a low-going trigger pulse which stays low for less than the timeout period, and C1 differentiates the low generated when S1 pulls R1 down to ground into the short pulse the 555 wants to see on its trigger input.
R3 and C2 set the timeout period, which is 1.1 R3C2, and with a 20µF cap in there about halfway through the pot should get you the 20 second pulse you want.
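As a worked check of that formula, using the 22 µF capacitor from the question rather than the schematic's values: \$t = 1.1\,R_3 C_2\$, so a full 30 second pulse needs \$R_3 = 30\,\text{s} / (1.1 \times 22\,\mu\text{F}) \approx 1.2\,\text{M}\Omega\$.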
C3 is the bypass capacitor for U1, and it's important that it be connected across U1 pins 1 and 8, and as close to the package as possible.
R4 and C4 comprise the POR (Power-On-Reset) circuit for U1 and, by holding the RESET pin momentarily low while the rest of the circuit is coming to life, it forces the 555 to power up in a known state and with the output low.
R5 is the ballast resistor for the LED strip, and drops the 555's output voltage enough to limit the current through the LEDs to about 30mA. That is, unless the LED strip has its own internal ballast, in which case R5 can be eliminated and the strip connected directly across the 555's output and GND/0V.
BTW, here's the LTspice circuit list so you can simulate and play with the circuit if you want to.
|
H: A relay for each terminal (live and neutral)
I'm investigating a control circuit that switches a heating element. From what I can see there is a relay on both the neutral and live to the heater. The relays are controlled together and cannot switch independently.
Is this common for controlling a heating element? What is the advantage of this over just a single relay across the live?
AI: Yes it is normal to switch both poles of a mains circuit. That way, if the circuit is incorrectly wired (or plugged in, where 2-pole plugs are reversible!) you can be certain the circuit is not accidentally live when it is switched off.
What is NOT normal is to fuse both poles - if fuses are fitted, they should ONLY be in the live circuit, for hopefully obvious reasons.
|
H: Uses for gate expressions, Nor and Xor
I'm not generally confused by electronic engineering, but NOR and XOR are the gate expressions that like to break that rule.
I can't find proper answers to this question, as they are either:
too complex, or too uninformative.
No and Yes, I and O, are quite simple, but I stumbled across two extra expressions I've never heard of, NOR and XOR.
I'm quite new to this kind of stuff, so NOR and XOR are also new, and quite confusing.
Some clarification on the uses of NOR and XOR would be appreciated.
AI: Truth tables:
NOR (NOT-OR) is more or less like NAND:
0 NOR 0 = NOT(0 OR 0) = NOT(0) = 1
0 NOR 1 = NOT(0 OR 1) = NOT(1) = 0
1 NOR 0 = NOT(1 OR 0) = NOT(1) = 0
1 NOR 1 = NOT(1 OR 1) = NOT(1) = 0
XOR emits 1 when the two inputs differ. You can consider this as binary addition without carry.
A XOR B = (NOT(A) AND B) OR (A AND NOT(B))
0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0
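The same tables fall out of ordinary bitwise operators; a tiny C sketch that prints both, in case seeing it in code helps:
#include <stdio.h>
int main(void)
{
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            int nor_out = !(a | b);   /* NOT-OR                        */
            int xor_out = a ^ b;      /* 1 only when the inputs differ */
            printf("%d NOR %d = %d    %d XOR %d = %d\n",
                   a, b, nor_out, a, b, xor_out);
        }
    return 0;
}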
|
H: 0.5 m LED strip (15 LEDs) phone power supply
I have a 15-LED, 0.5 m LED strip (WS2812B). Since it's such a limited number of LEDs, I'll try to power the strip with an old phone power adapter (5 V / 750 mA). I looked at the WS2812's data sheet and it says
Each RGB LED draws approximately 50 mA at 5 V with red, green, and
blue at full brightness
So that's 15 x 50mA = 750mA
I'll call that one tight, but won't it be all right, given that these LEDs are never going to run at full power all the time?
AI: This you will have to try. Try running it at full brightness for a while, and if your phone charger gets hot or the voltage drop is steep, you may need another power supply.
Cheap phone chargers capable of 1 A are quite common out there now. Look those up.
|
H: How to program a msp430 without bootloader or ISP connector?
I'm looking for a way to program an MSP430 microcontroller in-circuit, with all the components SMD, including the ISP connector. Could, in this case, USB still be an option?
AI: The MSP430 LaunchPad and other TI evaluation boards have a proprietary JTAG emulation circuit on board, consisting of a USB-connected microcontroller on one side, connected to the target device's programming interface on the other. As said, the firmware for this emulator is proprietary and closed, so the only way to use it is to connect such a board to your circuit, or to build a JTAG emulator yourself.
|
H: Is soldering extra wires to increase current capacity on a pcb trace good design?
I was working on a bunch of PC PSU organ donors and noticed that on lots of them, along the traces that carry the power to the output wires, there were wires soldered on, and lots of solder added. This is probably to increase current capability.
I only today thought about it, so did not take any photos, but it looks similar to this:
Now, an ATX PSU looks quite crowded and there is probably not enough room for wider traces, so I was wondering if this is acceptable/good design or if I should avoid it and go for proper traces where I can - and why.
AI: Soldering wires onto a PCB like that definitely increases current capability. Whether that's good design we can't of course answer without knowing the design criteria.
Downsides of this method are that it takes significant manual labor in production, will therefore reduce yield because people make mistakes, will cause more unit-to-unit variation than otherwise, and won't be as corrosion-resistant because there won't be a layer of solder mask over the conductor.
Whether these tradeoffs are "good" or not depends on factors we can't know. For example, if this board will be manufactured in a part of the world where you can pay someone a bowl of rice and $.50 a day, then it makes a lot more sense than where you have to pay US minimum wage.
|
H: What is the difference between processor clock and system clock?
In the TM4C123G LaunchPad Workshop, page 129, there is a note about power modes. This note differentiates between the processor clock and the system clock. So what is the difference between them? Is this a general concept or is it specific to the TM4C123GH6PM microcontroller?
AI: The processor clock is derived from the system clock, but for the mode you are mentioning it is just "disconnected" from the processor core. So the system clock is not stopped (in Deep Sleep it is stopped as well); it just stops feeding the core.
|
H: Will GPIO still be available when connecting a BoosterPack on any Launchpad?
I had this question in mind while exploring the BoosterPacks for the TI Tiva C series LaunchPads. Let's say you connected a Kentec display to the board; you still have access to the female headers on the bottom of the board. The question is: when the display is operating and functioning, will you still be able to use any GPIO for other external circuitry or systems?
The question applies to any other BoosterPack.
AI: Any except the ones used by the booster pack. Booster packs are intended to pass through any unused pins. Most packs have schematics available to tell you which pins are used.
|
H: Phase Response Measurement
I have built a few analog filters (LPF, HPF, etc.); after the filters there are some amplifiers and a MUX, and then it goes to an ADC. For this project I am supposed to have as linear a system as possible. So I am wondering how I can measure the phase response of this system (0-60 kHz, from the input, which could be a coil or some type of sensor, to the ADC) with a 200 MHz scope and some cheap function generators (I cannot afford spectrum/network analyzers), so that I can try to build lattice filters or use DSP afterwards to try and fix the phase response.
Thank you for any advice.
AI: You use an oscilloscope, with probes on both the input and the output of the block you need the phase response for, and measure the time difference between peaks of the input and peaks of the output for input sine waves of various frequencies. Then multiply the time delay divided by the period by 360 deg/cycle to get the phase in degrees.
Many scopes, even inexpensive scopes, let you use averaging to get better measurements in the presence of noisy inputs and outputs. You can also adjust scope sensitivity and the time base to let you capture the most accuracy in your measurement. For that matter, many scopes these days have a phase measurement option in their measurement menu.
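A trivial C sketch of the conversion, with made-up example numbers, in case it helps to see the arithmetic:
#include <stdio.h>
int main(void)
{
    double f  = 10e3;     /* test frequency: 10 kHz                */
    double dt = 12e-6;    /* measured peak-to-peak delay: 12 us    */
    double period = 1.0 / f;                   /* 100 us           */
    double phase_deg = 360.0 * dt / period;    /* 43.2 degrees here */
    printf("phase lag = %.1f degrees at %.0f Hz\n", phase_deg, f);
    return 0;
}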
|
H: How can I measure the current drawn by a soil hygrometer?
I have one of these ubiquitous soil moisture sensors:
The only information I can find is that it operates from 3.3V to 5V, uses an LM393 to trigger a digital output and also exposes an analogue reading.
I'm intending to hook it up to a low-power PIC (actually a PIC16LF1554) and want everything to remain sleeping until I periodically read the sensor from the analogue output. So, I'll power the sensor, take a reading through the ADC and then power it off again.
Trying to keep everything as small as possible, with minimum components, I'm considering powering the sensor from a spare PIC pin, which I know will source/sink 25mA maximum. So I'll just drive the pin high to power up, take a reading then drive the pin low again.
How can I find out what current the sensor assembly will draw so I don't risk overloading the PIC pin? If I hook my (cheap) DMM in series with the sensor when powered from 3.3V standalone, the power LED on the sensor doesn't light up and the meter shows zeroes on the lowest range (2000uA). So I can't tell what it draws when it's working normally.
Do I need a better DMM or is there some other way I can see if I need to power this via a transistor (or some other way)?
AI: The best way to know what the current draw can be is to consult the datasheet. If that is not available, and this is for production, go get something that does have a datasheet.
You can also just measure the current. However, keep in mind that only tells you what that unit draws on that day at that temperature.
However, looking at the bigger picture, what's the point of embedding someone else's unit into yours? The circuit for a moisture meter is very simple, and is probably easier to incorporate into your own product than to try to embed someone else's. Also, there are a lot of bad ways to try to measure resistance between electrodes. Something that doesn't come with a datasheet doesn't inspire confidence that someone that actually knew what they were doing designed the unit. Since it appears to be just a simple analog circuit, it is quite unlikely to be using a good technique.
The best method pretty much requires a microcontroller doing a 4-phase measurement. I discuss it in the context of detecting water, but the principle is the same. See https://electronics.stackexchange.com/a/33938/4512, https://electronics.stackexchange.com/a/103330/4512, and https://electronics.stackexchange.com/a/28485/4512.
|
H: Is hardware flow control necessary?
I want to communicate using a serial cable to a device that uses RTS/CTS flow control and 115200 baud speed. I want to use this Sparkfun TTL to rs232 adapter, but it has no RTS or CTS pins. What can I do to get hardware flow control or is there a software solution?
AI: The software solution is called XON/XOFF flow control. It basically consists of sending ^S/DC3 (0x13) to suspend transmission and ^Q/DC1 (0x11) to resume it. Naturally this requires the other side to support it, so if only hardware flow control is supported then it won't be a viable solution.
|
H: How does Ohm's law extend to Z if Z is complex?
We know that impedance, Z, is complex.
I'm told that $$V = IZ$$ But that rearranges to $$Z = \frac{V}{I}$$
But both voltage and current are real numbers, so how can a real number divided by another real number give a complex number? What have I done wrong?
AI: Complex \$Z\$ has a meaning only for alternating currents/voltages, so your assumption that \$V\$ and \$I\$ are real is wrong. They can be represented as complex as well. Or you can work with effective values, which are real, but then you will have to take an absolute value of \$Z\$ as well.
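A short worked example of the complex (phasor) form, as a sketch: drive a series \$R\$-\$L\$ branch with a sinusoidal source and use \$Z = R + j\omega L\$. With \$R = 100\,\Omega\$, \$L = 10\,\text{mH}\$ and \$\omega = 10^4\,\text{rad/s}\$, \$Z = 100 + j100\,\Omega\$, so \$|Z| \approx 141\,\Omega\$ sets the current amplitude and the \$45^\circ\$ angle of \$Z\$ says the current lags the voltage by \$45^\circ\$. Dividing the real instantaneous \$v(t)\$ by \$i(t)\$ would not give a constant at all; the complex representation is what makes \$V = IZ\$ hold with a single constant \$Z\$.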
UPD: More about representing electrical quantities in complex form can be found in this nice lecture.
|
H: Using infrared temperature sensor to measure water surface temperature
Not sure I chose the right place to ask the question, but... can I use an infrared temperature sensor (such as the TS118-3) to measure water surface temperature? I'm afraid effects such as reflection of infrared waves from the water surface, and blinking because of water ripple, will make it impossible.
AI: Yes, you can measure the water temperature with a non-contact sensor. With some care.
The emissivity of water is about 0.98, which is perfect for these sensors. Reflections from the background will not confuse the sensor. The same applies to glass, it's also just a blackbody at thermal IR wavelengths. Any metal surface has low emissivity and will reflect the background, rather than showing its own temperature.
You must still take care of air currents. Particularly if the water is close to boiling, you will have some challenges keeping the steam away from the sensor, while not blowing air over the sensor.
Also, you must consider the fraction of the cone of sensitivity that the water covers. The sensor you mention has no lens, so it's probably sensitive over more than a 90 degree angle. You probably need to do some calibration to take into account the cold surroundings of the water.
Finally, consider using an all-in-one device like the Texas Instruments TMP006, which does all the hard analogue stuff for you, and presents itself as an I2C device.
|
H: Rotary Switch terminology
I am buying a rotary switch, and I have found these terms:
2 x Off/(On)
and
2 x Off/On
Could you please explain the difference to me?
edit:
example of a switch with Off/(On) specification
(originally from this page)
AI: Usually positions in parentheses are momentary positions.
So a switch that is Off/(On) is a single-throw switch that is momentary. It will not stay in the on position, but will spring back to the off position. A switch that is Off/On will stay in either position.
|
H: Why don't brushless motors "short"?
You always learn in school that you should never short terminals of a battery because the wires overheat from the large current.
If you look at a brushless motor, you'll see that it's nothing but coils of wire. So why don't these motors "short" the power leads? How is this any different from shorting the terminals of a battery? How is the current being regulated when the motor is operating?
AI: Those wires form coils, so are long. Every bit of wire has some resistance, and all those bits of wire end to end result in a significant enough resistance to not look like a "short".
These wires shorted across a voltage source is exactly where the stall current of the motor comes from. It is simply the voltage applied to the coils divided by their resistance.
When the motor is running, then another effect is present. The motor actually acts like a generator so that spinning in the forward direction cause a voltage to be generated across the coils. This voltage opposes that applied by the external power source. The current thru the motor is therefore the power voltage minus this reverse EMF produced by the motor acting as a generator, and that result divided by the coil resistance. The faster the motor spins, the less current, because a higher back EMF is subtracted from the driving voltage.
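Written as a formula (the standard first-order DC motor approximation): \$I = \dfrac{V_{supply} - k_e\,\omega}{R_{winding}}\$, where \$k_e\,\omega\$ is the back EMF. At stall, \$\omega = 0\$ and the full supply voltage sits across the winding resistance (the stall current above); as \$\omega\$ rises, the numerator, and hence the current, shrinks.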
This back EMF effect also limits the top speed of the motor. At some speed, the back EMF generated internally cancels the applied voltage, and nothing is left driving the motor. Of course it wouldn't spin at that speed since nothing is driving it anymore, but it works at a little lower speed if nothing is loading the motor.
|
H: How to analyze analog circuits with transistors [is this circuit immediately analyzable to anyone]?
I'm not experienced in making electronics, but I do have a working knowledge of basic circuit laws and such; however, we never covered anything with transistors, such as this BFO metal detector circuit I've been making as a weekend project:
This metal detector is relatively weak and I wanted to improve it a little, but I can't get a grasp on what this circuit in particular is doing. I see 2 oscillators on the left, but they both have transistors in them, and I'm not sure what that does to them. I put them in a simulator and they didn't do anything.
I understand the way this works is supposed to be based on heterodyning. So I'd like to make my reference oscillator, a detector oscillator, then multiply the signal somehow and feed it to my speaker. The speaker will only be able to pick up the lower frequency, difference signal, but that's the one I want. I have no idea how this circuit is doing that.
If this circuit is easy for you to figure out, is there a resource you could direct me to so I can learn how it works as well?
AI: Yes, the left two transistors each form an oscillator with their particular coil. The two signals are then added. This will cause the amplitude of the signal to go up and down at the beat frequency, which is the frequency difference between the two oscillators.
The oscillator signal with amplitude variations of the beat frequency is "detected" by the circuit around Q3. This makes the signal that is roughly the amplitude envelope of the beating oscillator signal. This is then gained up, rather crudely, by the remaining transistors to eventually drive the speaker.
The system functions by the metal you are trying to detect changing the inductance of one of the coils but not the other. The absolute frequency change is small, but it is made much more obvious by causing a beat frequency between the disturbed coil and the reference coil.
|
H: If I were to discharge, recharge, discharge, recharge, ..., a piece of electronic equipment, would there be any negative effects?
Would there be any damage to a laptop or other electronic equipment if I use the internal battery until it has been completely depleted, then recharge it to maximum, then wait for it to become completely depleted again, and then repeat?
If so, what would be some of the long term effects?
AI: Batteries have a limited lifetime (including sitting on a shelf) due to ageing and also a limited number of charge/discharge cycles, which depends on the average depth of discharge (at least for lithium ones). Example:
Lithium batteries don't suffer memory effects (forcing you to fully discharge regularly), so it's actually preferable to discharge them partially, like the graph suggests.
I can only speak to users' concerns though; I know how batteries work, but not enough to say why exactly this happens (that's where experts on the topic come in).
Anyway, the answer to your question is: your battery won't last long. The undervoltage protection circuit will cut off each discharge before it instantly kills the battery, but it will be dead in very little time regardless (800 cycles judging from the graph for that particular battery, or about 2 years if fully depleted and recharged every day).
|
H: How does a laptop maintain battery at full?
I know how a laptop battery becomes charged; however, I am not so sure how the power is maintained once the battery has been charged to the maximum.
I am unsure whether the laptop would switch to take power solely from the power supply or would the battery be replenished as soon as it is used up.
Can someone who is knowledgeable about how this works explain to me exactly how a laptop maintains power once the battery has been fully charged?
Is this true for ANY device?
AI: That basically depends on the manufacturer.
I've had one laptop that would indeed fully charge the battery, then switch to battery power until it dropped to 95%, then restart charging. The battery died within six months, and I haven't bought from that vendor since.
Typically, the battery is disconnected when fully charged, and then monitored for self-discharge, and after dropping to a certain level, it is again fully charged. This level is chosen as a trade-off between limiting the amount of charging cycles and risking a less than full charge. Again, different vendors choose different values here, between 98% and 85%.
As an extension, some controllers also differentiate between continuous power with the battery level dropping below a threshold, and being plugged in while the battery is below the threshold (many devices are unplugged for short times while moving to another room), look at current power usage (if the computer is turned off, the supply may be unplugged soon), number of batteries installed, past charging behavior, ...
|
H: How does this battery simulator work?
I'm trying to understand how the battery simulator circuit described in the Linear Technologies Application Note 58 works. Its schematic diagram from Appendix B, page 35 is below.
The application note starts the circuit description (about its charging mode - which is what I'm interested in) with this:
In charge mode, current is forced through the battery input terminals.
From this I assume that the battery charger behaves like a constant current source pushing, say, 250mA (C/10 for my cells) into the (+) battery terminal on the upper-left corner of the schematic.
The note then goes on:
The low voltage that develops across R8 is amplified by U1 and causes Q1 to shunt the charge current while maintaining the power supply PS1 voltage. U2, L1, CR1, C3 and C4 produce housekeeping 12V that is required to operate U1 and drive Q1. The power supply range is 1.5V to 15V. Q1 requires a heat sink. R1 is used for measuring the charge current. R10 and C5 simulate the AC characteristics of the battery.
That's what's puzzling me: how does a voltage develop across R8?
According to Ohm's law, for a voltage to develop across R8, there must be a current flowing through it, which I assume is the one pushed by the charger being tested (the 250mA). But as far as I can tell, there is no path for that current to go through R8 as -
Q1 is still off;
That current can't go against PS1, can it?
The forced current of 250mA would generate 50V on R9 while the power supply PS1 would be set to, say, 2.5V, simulating two AA NiMH being charged in series.
What am I missing?
AI: there is no path for that current to go through R8
There is a path through R9
What am I missing?
The negative feedback. Only a small amount of current from right to left through R8 will turn on the op-amp and MOSFET which will sink almost all of the current from the charger leaving just enough current through R8 to generate the appropriate control voltage. This is how feedback loops work.
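To put illustrative numbers on it (not taken from the actual schematic): if the charger forces 250 mA and the loop settles with, say, only 1 mA flowing right to left through R8, then the op-amp is driving Q1 just hard enough to sink the other 249 mA. If the R8 current tried to rise, the amplified voltage would turn Q1 on harder and divert the extra current away from R8 again; that is the negative feedback at work.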
The forced current of 250mA would generate 50V on R9
This can't happen assuming the op-amp / MOSFET circuit is operating correctly. Look at it this way: if some small charge current flows through R8 (right to left), there must be a much larger current through the MOSFET. Thus, it can't be the case that all of the charge current flows through R8 (and R9).
|
H: NE555 Timer Problem
I need help using an NE555 timer to flash an LED.
Here is my current set up:
All resistors are 220 Ohms and LED is link
The capacitor is 10uf and same result happens with 1uf. I am using a 9V battery.
Description: The LED blinks once when power is applied, then stays off indefinitely.
AI: If anyone else has trouble, watch this video and make sure your timing capacitor is large enough, since the timer's period is set by how long the capacitor takes to charge and discharge.
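If the circuit is wired as the standard 555 astable (R1 from Vcc to pin 7, R2 from pin 7 to pins 2/6, C from pins 2/6 to ground), the blink rate is approximately
$$
f \approx \frac{1.44}{(R_1 + 2R_2)\,C}
$$
With the values quoted above (220 Ohm resistors and 10 uF) that works out to roughly 220 Hz, far too fast for the eye to see as blinking, so the LED just looks continuously (and dimly) lit. Resistors in the tens of kilohms with the 10 uF cap bring it down to a few hertz. (This is only relevant if the circuit really is the standard astable; an LED that stays off completely usually points to a wiring error instead.)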
|
H: Using a TFT display that has no controller
I have been using LCD screens (with the controllers) for my projects for a long time. Now I thought of learning how these screens actually work, the part before they go to the controllers.
Well the idea came to my mind when I managed to scavenge an old working screen out of my old tablet. It has 60 pins and I managed to find a datasheet for it. The model number is: AU OPTRONICS A080SN03 (datasheet here).
What does each pin of a 60 pin TFT display do? And how can I use it?
AI: The datasheet seems pretty thorough, and the core functionality looks pretty straightforward. Once everything is set up, you clock in RGB values one pixel at a time. First you go across a row, then down a column. The relevant control signals are:
DR[7:0] - 8-bit red value for the current pixel
DG[7:0] - 8-bit green value for the current pixel
DB[7:0] - 8-bit blue value for the current pixel
DCLK - When this goes high, the pixel data is latched. When it goes low, the LCD switches to the next horizontal pixel
DE - When this is high, pixel data can be latched. I think when it goes low the LCD switches to the next row, but I'm not sure whether it's independent of DCLK.
U/D - Selects whether to go up or down a row when DE toggles
R/L - Selects whether to go left or right a pixel when DCLK toggles
So the basic flow will be (from the start of the first row)
Initial conditions: DE=1, DCLK=0
Step 1: Set the RGB values for the pixel via DR, DG, and DB.
Step 2: Drive DCLK high to latch the RGB values.
Step 3: Drive DCLK low to select the next pixel in the row.
Step 4: Repeat steps 1-3 800 times total to set every pixel in the row.
Step 5: Drive DE low to select the next row.
Step 6: Drive DE high to enable pixel writes.
Step 7: Repeat steps 1-6 600 times total to cover every row.
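Putting steps 1-7 together, a very rough sketch of the scan loop in C might look like the following. The GPIO access macros (SET_DE, CLR_DE, SET_DCLK, CLR_DCLK, WRITE_RGB) are placeholders for whatever host is actually driving the panel, and the timing requirements from section 5 are ignored here.

    /* Hypothetical sketch only: the SET_/CLR_/WRITE_ macros stand for
       whatever GPIO access the host provides. */
    void write_frame(const uint8_t *rgb)      /* 800 x 600 pixels, 3 bytes each */
    {
        for (int row = 0; row < 600; row++) {
            SET_DE();                               /* enable pixel writes for this row */
            for (int col = 0; col < 800; col++) {
                WRITE_RGB(rgb[0], rgb[1], rgb[2]);  /* drive DR, DG, DB buses */
                rgb += 3;
                SET_DCLK();                         /* latch the pixel */
                CLR_DCLK();                         /* advance to the next pixel */
            }
            CLR_DE();                               /* advance to the next row */
        }
    }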
There are lots of timing constraints on these steps. Section 5 has the specs for those.
There are some other control signals, voltage references, and a serial interface. I'm not sure what the serial interface is for, but I didn't look very thoroughly. None of them looks terribly complicated.
All that being said, using this LCD will be very difficult. This is a complex mixed-signal system. Plus, as described in section 4a, you need six separate voltage supplies. There are some example schematics for switching regulators in section G, but each of those is a project in itself.
Studying the datasheet could be very educational, but I don't recommend this as a project for a beginner.
|
H: Power supply draft advice
I've attained most of the material I need to build my first power supply. The specs are mostly dictated by the parts I have. My intention is a dual isolated variable output power supply that I can connect in series to give negative voltages.
Below is the "final" draft of the circuit that I'll be implementing. I'll add in fancy pass transistors and capacitor discharge transistors and other things once I completely understand how they work.
I've set up a basic form of this with a single output and no decoupling capacitors or protection diodes on a breadboard and it worked well.
Clarification: I pulled a single bridge rectifier package out of an old TV which is why I have both a packaged bridge and a diode bridge on the schematic. Other than that and the transformers, the circuits are identical.
EDIT: Fixed schematic.
So here are some things I would like to ask:
I'd like to solder the primary of the 500mA transformer to the primary lugs of the larger transformer so I don't have to branch the mains wire to it separately, is this fine?
Is it fine if I stack the two transformers on top of each other?
If I had both outputs connected in series, would I be limited to 1A due to the second transformer only providing 500mA? I don't need that much current but I'm just curious if I'd run into trouble since the transformers are unbalanced. I have a 9V 15VA transformer as well, would that be better matched?
What amperage of fuse should I use? I'm worried about magic smoke....
How damaging would a shorted/max current output be on this circuit?
Any general advice/improvements?
Thanks for your assistance.
AI: Resistor arrangement on LM317's does NOT adjust voltages as shown.
Refer to data sheet (carefully) - you need a variable resistor to local ground.
You can only use a second positive supply to form a negative one in the way that you are proposing if the "energy sources" are isolated relative to each other. eg two unconnected windings on the same transformer. The input of the "-ve" supply will be above ground voltage during operation.
To use a single transformer winding or non-isolated sources, what you want is something like the LM337, which is a less well known negative regulator equivalent to the LM317.
Here is a dual +ve & -ve supply circuit diagram using an LM317 and LM337. They use centre tapped transformer and a single bridge rectifier - but you could use the 2 separate windings as you propose and join them appropriately.
LM337 data sheet here
Add a small "spreading" resistor between rectifier output and 10 mF filter cap to widen the conduction angle. This reduces peak currents substantially and reduces RF noise generated by diodes and gives diodes a far less-hard time on conduction peaks.
I'd like to solder the smaller 500mA transformer onto the lugs of the larger transformer, is this safe/do I need to care about polarity?
Slightly unclear what is intended. Mechanically not wise.
Electrically - use wires.
Is it fine if I stack the two transformers on top of each other?
Probably. Minimal flux interactions with closed cores. Cooling may suffer slightly but easily judged and varies case by case.
If I had both outputs connected in series, would I be limited to 1A due to the second transformer only providing 500mA?
When done "properly" each supply can return its own Imax to ground. A current that starts at V+ and ends at V- will be limited by the smallest current capability.
I don't need that much current but I'm just curious if I'd run into trouble since the transformers are unbalanced. I have a 9V 15VA transformer as well, would that be better matched?
As above. As specified you can have 1A V+ to ground, 500 mA V- to ground or 500 mA V+ to V-.
What amperage of fuse should I use? I'm worried about magic smoke....
LM317 is self protecting within limits.
A 1 A fast-blow fuse blows at about 2 A in a moderate time.
How damaging would a shorted/max current output be on this circuit?
As above.
|
H: How long/big is an I2C acknowledge?
I'd like to know how long/big an ACK from a slave to its master should be. I am communicating with a module using the I2C protocol. The large blank space in the middle - is this an ACK from the slave to the master (µC) as expected?
Whole view
Close-up left
Close-up middle
Close-up right
What I am doing in software as pseudo-code is:
setSlaveAddress(0x48)
sendData(0x55);
AI: The slave device you are communicating with is stretching the clock, which is a feature of I2C. Not all devices implement clock stretching. Any device on the bus can stretch the clock, but generally clock stretching is used by slaves to throttle incoming data.
After an ACK bit, a slave might hold the clock low to indicate that it is not ready to receive more data. Because SDA and SCL are open drain, low logic is created by pulling the line low and high logic is created by not driving the line (the lines must have pull-up resistors). Therefore, the slave driving SCL low will take precedence over the master trying to let the line float high. The slave indicates that it is ready to take more data when it finally releases SCL.
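For a bit-banged master, honouring clock stretching simply means checking that SCL has really gone high after releasing it. A minimal sketch in C follows; the scl_release/scl_read/scl_drive_low and delay helpers are hypothetical placeholders, and a real driver would also add a timeout.

    /* Hypothetical pin helpers: scl_release() stops driving SCL (the pull-up
       lets it float high), scl_read() samples the pin, scl_drive_low() pulls
       it low again. */
    void i2c_scl_high_phase(void)
    {
        scl_release();                 /* let SCL float high */
        while (scl_read() == 0) {
            /* slave is stretching the clock: wait until it releases SCL */
        }
        delay_half_bit();              /* hypothetical bit-time delay */
        scl_drive_low();               /* end of the high phase */
    }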
|
H: Why is earth used for ground? Literally earth?
I never considered earth to be particularly conductive. It's just dirt, after all.
However, I've seen "earth ground" conductive stakes driven into the ground in order for electricity to be grounded, because it will find its way down there.
However, it never made sense to me why earth would even provide such an effect: why electricity would bother to flow to dirt out of all the conductive goodness inside the circuit?
What characteristics of earth/electricity makes the current flow right into the ground?
AI: In a nutshell
Electricity is not supposed to flow through ground stakes in normal conditions. It doesn't mean its resistance is high, it's actually surprisingly small. That branch of the circuit is simply not closed normally.
In details
A ground is a reference point. You could literally take any net in your circuit which is supposed to stay at a steady voltage and call it ground. After all, voltage sources create a difference of potential (called a voltage) between two nets, regardless of what their potentials are - if they're both fixed externally there will be a conflict and bad things will happen, but if only one of them is fixed, the other potential will adjust accordingly. Generally the ground is taken such that we work with positive supplies predominantly, e.g. ground on the - terminal of a rectifier bridge. It doesn't mean all the current flows through that; it's only a reference.
The Earth connection mainly serves a personal-protection role. No current is supposed to flow into the Earth because the actual supply circuit is isolated from the Earth; however, what if this isolation is compromised (wires eaten by rabbits, children shoving their fingers into sockets...)? Everyone is indirectly connected to Earth (no isolation is perfect), which means that that circuit will now be closed, and the only thing limiting the current through whatever closes the circuit (e.g. people) is its internal resistance. Depending on the environment, that resistance can be low enough to kill someone; refer to this thread about what voltages are considered safe. To prevent that, every enclosure is connected to Earth (an Earth-R-Earth circuit carries near-0 A), and the electric supply has a residual current device that compares the current going in and out, and cuts off the supply if there is a leak (through Earth).
The Earth is used as an equipotential reference for the supply. The electricity provider needs to protect its people too, so the upstream supply is also referenced to Earth, just like everywhere else. So what happens if the Earth is not a good conductor and its potential is not homogeneous? Users could be in contact with two different Earths, which can mean a large difference of potential (= voltage). Thankfully, moisture in the dirt and water patches are good conductors, but above all the equivalent cross-section of this fictive conductor is massive. Except during short upsets such as lightning, it has an excellent homogeneity in potential. Why use another conductor for ground, which would use more copper and actually be less effective, when we can use what's under our feet?
The Earth is also useful as a protection against lightning: lightning is just like any dielectric/insulator breakdown, it occurs where the resistance between the charged cloud and the Earth is minimal (see this amazing GIF) - tall trees, towers, etc. We can't risk relying on luck alone, so highly conductive spikes are used to attract the lightning, and the Earth is used to dissipate that energy. Loosely said. A strike carries enough current to develop, through the Earth and across a person's legs, a voltage high enough to kill, so the grounding system spreads it out more evenly.
As usual, I'll warmly welcome anyone correcting me if not accurate.
|
H: Help Identify Faulty Component - Dishwasher Controller Board
Not sure if this is the right place to post this.
Recently my dishwasher gave out :(. It works fine (goes through the wash - rinse cycle) when empty, but the moment it is loaded, the display lights go weak and start dying when it enters the rinse stage.
I was quoted $100 for a replacement board by the repairman. Being semi competent in electronics, I am trying to take a stab at it to see if I can find the faulty component and replace it myself.
Attached are the pictures of the front and the back of the board.
My hunch is the black relay in the bottom left, as that's the only mechanical component on the board, and upon visual inspection I don't see any obviously burnt components.
Any help would be appreciated on how can I go about this.
AI: Sorry repair questions are off topic here, but if you suspect that part you should take the numbers off the top of it and see if you can find a replacement online. Just google them one at a time. Short of that you could probably find online or buy the repair manual for that dishwasher. It will have pseudo schematics of that board suitable for a technician to try to debug it. Hopefully you have at least a meter you can use to measure things. Basically you're looking at poking around and trying things until or if you can figure it out.
You might also try ebay for the part.
|
H: Boost converter simulation keeps getting fried
In my project I want to use a couple of old russian VFDs. To drive them, I need 28V from a 12V power supply. The plan is to use a DC-DC boost converter as calculated here and here.
I've ported the designs to Yenka, a circuit simulator, but for some reason the MOSFET and inductor keep burning up because of sudden peaks of current of hundreds of amperes.
Surely that can't be caused by the lack of Schottky diodes in the software - a small difference in switching frequency or in the diode's voltage drop can't cause such immense currents. Or can it?
AI: The basic topology looks OK, so the problems are most likely due to implementation details. Since you gave little detail, we can only make some guesses:
The FET is being driven too slowly. The total transition time should be a small fraction of the pulsing period. For example, if the oscillator is running at 100 kHz, then the pulse period is 10 µs, and the switching transition time should be small compared to that. If the gate voltage takes more than 1 µs (in this example) to get from high to low or low to high, that's not good.
FETs have significant gate capacitance, so it takes a significant pulse of current to switch the gate from one state to the other. The digital output can probably only source or sink a few 10s of mA.
You are using too high a frequency. This works together with #1. The faster the oscillator, the faster the gate needs to transition to keep the FET fully on or fully off most of the time.
At this low voltage you really should be using a Schottky diode, not the ordinary silicon diode you show. There are two reasons for this. Schottky diodes have a lower forward drop and have very fast reverse recovery time. The lower forward drop helps with efficiency. The fast reverse recovery time is very important since the FET is shorting the output during the recovery time. That really beats on the FET and the diode.
2 mH seems very large. Again, we don't know your switching speed or output current requirement, but such large inductors will have significant series resistance.
The duty cycle is not optimized. For an ideal switch and diode, the forward voltage on the inductor will be 12 V, and the reverse voltage 16 V. The lengths of the on and off phases should be inversely proportional to those, respectively. Again, let's use 100 kHz switching frequency as an example. That gives you 10 µs for the whole period. You want 16 parts of that to be on and 12 parts off, which means 5.7 µs on and 4.3 µs off. Since there will be some inefficiencies and losses, in practice the on time will be a little longer relative to the off time than the purely theoretical 16/12 ratio.
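As a cross-check on that 16:12 split, the ideal boost-converter relation gives the same answer directly:
$$
D = \frac{t_{on}}{T} = 1 - \frac{V_{in}}{V_{out}} = 1 - \frac{12}{28} \approx 0.57
$$
i.e. about 5.7 µs on and 4.3 µs off at 100 kHz, before allowing for the diode and switch drops.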
|
H: Car radio output voltage
I am working on an amplifier for a car radio for a friend of mine. He is away for a few weeks so I can't measure the radio output.
That's why I am asking here, does anyone know, what's the peak-to-peak voltage from car radio, that then goes to the amplifier?
Is it universal? If not, how do these amplifiers usually do their preamplifying?
EDIT:
Is this +-voltage or only positive?
AI: This depends on whether this "radio" is meant to drive speakers directly, or meant to drive a separate power amp that then drives the speakers.
The typical non-power audio signal for interfacing to other equipment is about 1 V RMS with 600 Ω impedance or less. That is sometimes called line audio or a line level signal. A separate stand-alone power amp would most likely expect such a signal as input.
If this radio drives speakers directly, then it is really a combination of a radio receiver, and power amp all in one. If the receiver and power amp were separate boxes, then you'd expect the signal between them to be line level. However, in this case that is totally internal to the box, so is probably not accessible, and may not even exist in that form internally anywhere. All that comes out are signals intended to be connected directly to the speakers.
Car speakers are usually 4 Ω. From that and knowing the output power you can compute the voltage. For example, let's say the radio can drive each of the speakers with up to 15 W of sustained power.
P = V²/R
where V is the RMS voltage in volts, R the resistance in ohms, and P the power in watts. Working this backwards to solve for V yields V = sqrt(15 W * 4 Ω) = 7.7 V RMS. For sine waves, the peaks are sqrt(2) higher, or 11 V. Peak to peak would therefore be 22 V.
|
H: What makes a precision op amp precise?
I'm making a dummy load which can draw 5V/3A maximum, but can also draw small loads (e.g. to drive an LED array of 20mA).
simulate this circuit – Schematic created using CircuitLab
I have selected a 0.5 Ohm shunt resistor after looking through a bunch of datasheets of mosfets.
The positive input is controlled by a coarse and fine adjustment pot, so it can be varied accordingly until 20mA is seen through a multimeter.
To get 20mA, I would need 10mV at the op amp inputs. From the datasheets I've looked at on precision vs. non-precision, the big difference I see is input offset voltage. In this application, is this parameter critical? I believe it isn't, because I can just adjust the pot, but I want to make sure.
Regular op amps such as the LM324 have a minimum common-mode input voltage of 0 V. Are there any benefits to using a precision op amp here? Will it improve the fine-grained adjustment of the output voltage (if my input's resolution is very coarse)?
AI: The LM324 has a maximum offset voltage of 9mV (worst case, over temperature), according to the datasheet.
With your circuit, with 0V in, you could have a current of 9mV/500m\$\Omega\$ = 18mA below which your pot would not be able to set the current. So it's not a very good design if you need to set it to less than 18mA. It's the luck of the draw - the next op-amp (even in the same package) could have 9mV of offset in the opposite polarity, so you'd just move the pot.
Maximum temperature drift of the LM324 is not specified (it's not intended for precision applications, after all), but it might easily be +/-10uV/°C, so if the board temperature changes by (say) 70°C as the MOSFET gets hot, the offset will change by 0.7mV, i.e. the current by 1.4mA, and you'd have to readjust the pot. Of course the highest power dissipation occurs at high output currents, so the change there is relatively small (1.4mA out of 2A is < 0.1%). A 20°C change in ambient temperature means a change of perhaps (no guarantees) 0.4mA, which is several percent of a 15mA current. If you only care about 5%, and currents above 20mA, it's probably just okay.
Another difference between a cheap amplifier and a good one is the gain. The LM324 can be as bad as 25,000 gain (and it changes with temperature). A precision op-amp will have a gain in the millions. The difference will show up in how well it compensates for load or line changes (not a big deal in this case).
The bias current of the LM324 can be as bad as 0.5uA (typical 20nA) and it changes with temperature so if you had a high resistance pot, you could see it change with temperature.
The noise of the LM324 is a fairly miserable 35nV/sqrt(Hz), and it has nasty crossover distortion, neither of which affects you much in this case.
A couple of things (other than being extremely cheap) that the LM324 has that a typical precision op-amp may not have- wide supply range (especially on the high end), though it may not do so well at very low supply voltages, and it's single supply (input common mode range includes the minus supply) which you absolutely require for your circuit.
So there are plenty of reasons to use a decent op-amp if it's required by the specifications. Or you can get clever with the circuit - increase the sense resistor to get good accuracy for low currents; but to get wide dynamic range, a good amplifier (and other techniques such as good resistors and good layout) may be worth it. For just hacking around, and if your current range (minimum to maximum) is not huge, an LM324 is certainly acceptable. There's no point in using a $5 op-amp if a 1-cent one will do. On the other hand, there are some requirements for which even the best ones are not good enough and one has to resort to discretes and other techniques.
By the way, your circuit may not be stable against oscillation. It can be fixed with some passive components, but loading op-amps with the equivalent of a large capacitance in series with a small resistance is inviting trouble.
|
H: What OpAmp configuration is used in this circuit?
I'm studying basic opamp configurations and was trying to determine which configuration was used in this battery simulator. Its schematic is below.
The circuit description, from this application note, is below:
In charge mode, current is forced through the battery input terminals. The low voltage that develops across R8 is amplified by U1 and causes Q1 to shunt the charge current while maintaining the power supply PS1 voltage.
Since the small current through R8 must come from right to left, the voltage on the non-inverting opamp input (+) must be higher than the voltage on the inverting input (-) for the opamp to turn Q1 on. That requires a positive output, leading me to conclude that the opamp is wired in the non-inverting config.
But the non-inverting (+) terminal is also connected to both the PS1 and the voltage regulator LT1073 ground terminals, leading me to believe that the opamp is in the inverting configuration, as in Figure 3 below.
So, in which configuration is this opamp really set up?
Figure 2: Non-inverting OpAmp configuration
Figure 3: Inverting OpAmp configuration
AI: It's not in either of those modes, but it's being used as a differential amplifier:
It doesn't amplify a single signal, either inverting or non-inverting - it amplifies the difference between two signals - in this case the difference in voltage between the two sides of R8.
|
H: What does this PNP transistor + op-amp circuit do?
I can solve the circuit for a given DC input voltage using the transistor's \$I_{ES}\$ and alpha values. It amplifies the input signal by roughly 13 times (not linearly). But I couldn't understand the purpose of this circuit. Could you help me with that? Thanks in advance.
AI: This is a standard textbook building block- an antilog amplifier. You'd normally have a diode from the emitter to ground to prevent the input from going too far negative and possibly damaging the transistor (by reverse B-E breakdown).
How it works:-
The transistor base-collector voltage is maintained at 0V by the op-amp through virtue of negative feedback.
Collector current is \$i_C = I_S e^{(\frac{V_{BE}}{V_T})}\$, so
\$Vo = -(100K) I_S e^{(\frac{V_{i}}{V_T})}\$
There is a temperature dependency, obviously in the thermal voltage \$V_T = kT/q\$, but also in the saturation current \$I_S\$, so practical antilog circuits use something like a thermistor to compensate for temperature variations.
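To get a feel for how steep that exponential is: at room temperature \$V_T \approx 26\$ mV, so the output changes by a factor of \$e \approx 2.7\$ for every 26 mV step of the input, or roughly a decade for every 60 mV. That is why a modest input change can look like a "gain of about 13" over one small range and something completely different over another; the circuit is not a linear amplifier at all.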
|
H: pcb etching: chemical method or milling
I'm actually intending to set up a small PCB production process (100 pieces per day),
and I am still hesitating over the etching method: chemical, or with a milling machine?
I would prefer a CNC machine, but would it produce industrial-standard PCBs?
And do you know of some good manufacturers?
AI: I would say etching, should be cheaper as well. I'd only consider milling if I wanted to make prototypes in my office right that minute. I remember that the cost of a professional grade pcb milling machine was pretty expensive. I've seen some 3d printer mods that could be cheaper.
If you're worried about handling chemicals you'll have to anyway for the via process unless your boards don't require vias.
You should be able to do more boards per unit of time in an etching tank than with a single milling machine, I think. That means etching.
Also unless this is just for you, I personally wouldn't order a board from someone milling them.
|
H: Best way to get 256 io pins?
I'm connecting to an arduino or raspberry pi and would like to have 256 io pins. I found a 16bit io expander which has 8 unique addresses for 128 but that's the max I can do on i2c. Any other thoughts on how to expand my io ports?
AI: You can put an I2C Multiplexer in there between the Arduino and your devices, then only put one of each address on each downstream bus. A 1-to-2 Multiplexer that I've used with Arduino before is PCA9540BD,118. That will get you to 256 with the hardware and interfaces you are already using with the addition of one more component.
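As a rough illustration of how little code the mux needs on the Arduino side: the 0x70 address and the 0x04/0x05 control-register values below are from memory of the PCA9540B datasheet, so verify them before relying on this sketch.

    // Hypothetical sketch: select one of the PCA9540B's two downstream buses.
    #include <Wire.h>
    
    const uint8_t MUX_ADDR = 0x70;           // PCA9540B address (verify against datasheet)
    
    void selectMuxChannel(uint8_t channel)   // channel = 0 or 1
    {
        Wire.beginTransmission(MUX_ADDR);
        Wire.write(0x04 | (channel & 0x01)); // bit 2 = enable, bit 0 = channel (verify)
        Wire.endTransmission();
    }
    
    void setup()
    {
        Wire.begin();
        selectMuxChannel(0);                 // talk to the expanders behind channel 0
    }
    
    void loop() {}

After selecting a channel you address the I/O expanders behind it exactly as before, so the same eight expander addresses can be reused on each downstream bus.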
|
H: Boost 5V to 18V: charge pump + LDO or conventional boost converter?
I am trying to build a H-bridge driver using discrete parts. Now I have a problem regarding generating the 10+V gate voltage for the NMOSFETs I used.
The H-bridge and its controls is like this:
simulate this circuit – Schematic created using CircuitLab
My question is, how do I generate this +18V rail? DC-DC Boost converter, or charge pump to ~20V and LDO it down a bit?
Don't worry about my 5V rail, it is capable of 30+ amperes.
AI: A simple boost converter would probably be the easiest for the currents involved.
Those drivers are not going to be the best- fast turn on + slow turn off = lots of shoot-through.
|
H: Do carbon film resistors have any effect on AC?
Carbon film resistors are basically an insulating (probably ceramic) rod covered in a carbon film that is twisted around it, as shown in the picture below:
However, this seems very similar to a coil. I know that on DC there is no problem, but shouldn't this have some other effects on AC, like a coil? Are there any situations where this kind of resistor is avoided in AC applications?
P.S. I am sorry for using vague terminology, but I haven't studied AC for a very long time, so I don't completely understand it or the exact effects various components have. But I know that a coil has a different effect on the current than a resistor, and I have some intuition about them.
AI: There is inductance and there is capacitance between the spiral turns and from end-to-end. It's generally not important until you get to VHF frequencies, but of course the effect is relatively larger on higher resistances for the capacitance and the inductance effect is relatively higher on very low resistances.
For example, if you use this calculator, a coil with 8 turns, 2mm diameter and 7mm long would have an inductance of about 0.04uH. At 100MHz that gives \$X_L = 2\pi f L \approx 25\Omega\$, so \$X_L/R\$ reaches 0.1 for resistances below about 250\$\Omega\$. Note that if you have a current sense resistor of very low value, even a fraction of a microhenry of inductance will start to have a noticeable effect at moderate frequencies. A 10m\$\Omega\$ resistor with 40nH inductance would be affected similarly at only 4kHz.
Parasitic capacitance works similarly from the other end of the frequency scale - a 10M\$\Omega\$ resistor with 0.4 pF of end-to-end capacitance would be affected similarly at only 4kHz.
|
H: How to Step Down a Low Voltage (≤1.0v) DC Signal by a Factor of 0.9 or 50mV Static?
Alright you e-lectronic masterminds, I've got a predicament that I'm not quite sure how to solve. Basically, I need to take a 20mA DC voltage signal that varies between 0.1v and 1.0v several times a second (~20Hz) and step it down EITHER by a factor of 0.9, or by 50mV static. If you were faced with the same situation, how would you go about doing this? Ideally, I'd like to get my hands on something that's pre-fabbed. If I could keep the soldering iron out of it, that'd be great.. Thanks!
Update: I must be pretty bad at this. There's like 15 different ways to do this.. Thanks for all your help, friends.. The op amp idea got me thinking.. Would this do the trick? Is 10kΩ enough for the input on a LM741?
AI: Something like this would reduce the input voltage by 10% and only load the input with 20K.
simulate this circuit – Schematic created using CircuitLab
I'm not sure where the 20mA you cite comes in- presumably current capability. If it's a current loop signal then you could simply shunt the 50 ohm load resistor with a resistor of value 450 ohms and that would be it (parallel resistance 45 ohms, so voltage drop at 20mA 900mV)
|
H: Why do we need pull-up resistors or tri state pins for AVR external interrupts?
Let's say we need to write a program for ATmega32, that reacts to an external interrupt(INT0) through the pin D2. The interrupt is to happen during the falling edge of the signal.
I noticed it in the books that
DDRA = 0b00000010 is done to "activate the pull-up resistor". Why do we need it? Do we have to do the same if we want rising edge interrupt or a level triggered interrupt?
AI: Activating the pullup resistor is completely independent of detecting the signal. If the pin will be actively driven at all times then there is no need to activate the pullup, and in fact doing so will increase current consumption when the input is pulled low.
On the other hand, many external interrupt sources are actually open drain outputs, which means that they are not capable of actively driving the input high. In this case we must activate the pullup resistor in order to prevent the input from floating.
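A minimal ATmega32 sketch (avr-gcc) showing both parts: PD2/INT0 configured as an input with the internal pull-up enabled, and INT0 set to fire on the falling edge. Note that on the ATmega32 the pull-up on PD2 is enabled by setting the PORTD bit while the corresponding DDRD bit is 0; whether you actually need the pull-up depends on the signal source, as described above.

    #include <avr/io.h>
    #include <avr/interrupt.h>
    
    ISR(INT0_vect)
    {
        /* handle the falling-edge event here */
    }
    
    int main(void)
    {
        DDRD  &= ~(1 << PD2);     /* PD2 as input */
        PORTD |=  (1 << PD2);     /* enable internal pull-up (omit if driven push-pull) */
        MCUCR |=  (1 << ISC01);   /* ISC01:ISC00 = 10 -> interrupt on falling edge */
        MCUCR &= ~(1 << ISC00);
        GICR  |=  (1 << INT0);    /* unmask external interrupt 0 */
        sei();
        for (;;) { }
    }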
|
H: -1.2V from LM337L negative regulator?
Here is the datasheet for this device: TI LM337L, at the 3rd page there is an equation for calculating the resistor needed in the voltage divider to set the output.
-Vout = -1.25 * (1 + (R2 / 240))
The 1st thing that I'm wondering about is the value of the lower resistor (240 Ohm), I could not find an explanation for this in the datasheet so I'm assuming that its a minimum value for R1 + R2 to limit the current into the device.
However, solving the equation makes it impossible to reach -1.2V:
-1.2 = -1.25 * (1 + (R2 / R1)) => (R2 / R1) < 0
The value of 1.25V is the typical reference voltage, this value goes from 1.2V to 1.3V. So if I'm understanding this correctly... If I'm lucky to have a device with a low reference voltage of 1.2V and I will short R2 so that R2/240 => 0 I can have the output voltage equal to the reference voltage and still maintain a good working environment for the device?
AI: Yes, you can have R2 = 0. The reason for R1 = 240 \$\Omega\$ is to guarantee a minimum current of nominally 1.25/240 = 5.2mA.
This is typically enough to keep the output voltage from going out of regulation, but the worst-case minimum load current with 40V in is 10mA, so it's not guaranteed to stay in regulation with no external load.
Chances are the reference voltage on your unit will be rather close to the typical value of 1.250- the outside limits are very rarely approached in a well controlled semiconductor process.
The TI TPS7A3301 series is guaranteed to get down to 1.2V.
|
H: Convert 5V to 3.3V without logic level converter
I've got a 2.2" SPI display (QVGA TFT monitor & SD card), but I am using an Arduino UNO board. All pins are 5V. How can I use resistors to communicate between the TFT, SD and UNO?
I have lots of different resistor values, and three logic level converters (not bi-directional).
I tried connecting the LLC to the TFT, but no luck; only the backlight comes on and nothing is displayed.
Thanks TJ, I have wired it as in this circuit, but the voltage doesn't seem to reach 3V. Did I do something wrong?
simulate this circuit – Schematic created using CircuitLab
Also, I have tried changing R1 to 330 ohm and R2 to 680 ohm. That also doesn't seem to give 3V / 3.3V.
AI: You can use a simple voltage divider to make 3.3V from 5V. 3.3V is high enough for the receiving input to detect as a logic high.
Take 470 ohm for R2 and 220 ohm for R1.
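Checking those values: \$V_{out} = 5 \cdot \frac{470}{220+470} \approx 3.4\$ V, and the divider only loads the Arduino output pin with about 690 ohms (roughly 7 mA when the pin is high). Note that this trick only works for signals going from the 5 V side to the 3.3 V side; signals coming back the other way need no conversion.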
|
H: smooth ac to measure with dc voltmeter
I bought a couple of those 3-digit 0-30 volt DC voltmeters. I'm taking 0-24 volts AC (Lionel train transformer) through a rectifier and measuring it with the LED meter. My problem is that the meter never settles.
I was thinking of a cap across the +/- output of the rectifier, but since I don't have any caps in my parts bins to experiment with, I have to buy one, so I need to know the value in advance.
AI: Assuming the meter has a separate signal input, it's difficult to know without having the input impedance of the meter, but let's assume it's maybe 100K ohms or thereabouts.
D1 and C1 (should be rated for 50V) filter the power to the meter.
D2 (Schottky has a bit less drop, but you could use a regular diode too), C2 and R1
filter the signal to the meter. Again C2 should be rated for 50V (aluminum electrolytic).
A suitable part for the capacitors would be a Nichicon UVZ1H471MHD.
simulate this circuit – Schematic created using CircuitLab
The capacitor value is picked to have a time constant of about 1 second, which means that with 60Hz half-wave input, the ripple will be about +/-30mV or less than 1 count.
Edit: Since you've got it separately powered, you can ignore D1 and C1. See edit below following comment by @EMFields on scaling so higher voltages can be displayed. I've chosen to scale it by a number that approximates RMS reading, taking into account the "approximately 100K" input impedance of the meters you purchased (as shown in your link).
simulate this circuit
If we assume the voltage drop across the Schottky is 250mV then for 24VAC RMS we'd get a reading of 24.2V, and for 10VAC RMS we'd get a reading of 9.9V- pretty good for a simple AC-DC converter.
|
H: Y or Delta connections advantages / disadvantages
I have been studying three-phase systems for the whole course of a subject (on the first year of the university degree). I have finished now, and I know both "Y" (star) or "Delta" (triangle) connections. I have made a lot of computations with them, however I don't know the different applications they have and I would like to know the following in order to increase my knowledge.
I would like to know which one is better (Y or Delta) for different purposes; they must have their advantages and disadvantages, but I have never been told what they are. I have tried to do some research on the Internet, but I haven't found a particularly good answer. I have only seen the advantages and disadvantages of Y and Delta motor startup, but I'm thinking more from the "circuit" point of view.
I'm really interested in the subject, but I have only seen it from the computational point of view. I would appreciate it if someone could explain to me some of the main advantages and disadvantages of using both connections. Thank you.
AI: The two systems have vastly different applications. Yes, there is a lot of crossover between them in some fields, but the two are more suited to certain applications.
Take motors for instance. Delta is far superior to star for driving motors. With delta you can visualize a wave circulating around the triangle, and it's that wave that turns the motor. As the wave moves around the phases it effectively drags the motor around with it. It makes motor design really simple and efficient. Not so with star, where you in essence have to try and combine three single-phase motors together.
However, when it comes to a situation where you want to spread a load between multiple circuits or devices, and the load on each phase may not be equal (unbalanced system) then a star arrangement has massive advantages. Each branch of the star (phase) is a separate circuit in its own right. The load on each phase is specific to that phase, and they have little influence on each other.
There is also a third arrangement, which is kind of halfway between a star and a delta - in this arrangement each delta phase is connected to its own completely separate transformer and there is no common neutral point. This is actually seldom seen, but I thought I should mention it here anyway. It basically combines the star arrangement with full isolation, so it can have some safety advantages (like having an isolation transformer on a normal single-phase supply), but it generally isn't worth the hassle of a system without a common neutral point.
To clarify what I mean about a wave rotating around a delta, here is a little animation I knocked up:
Note: It's Christmas Day, I'm drunk, and that might all have been complete gibberish for all I know.
|
H: Li-Ion Charging - Constant Voltage Stage
The charging curve for Li-Ion batteries (explained here) has a final constant-voltage stage where the charge voltage plateaus and the charge controller slowly tapers the current until it reaches some threshold (10% in the video's example), at which point the charger considers the battery full.
I can take a battery and measure the voltage. But here the voltage is constant, so how does the charge controller know when to cut off the current?
AI: Current flows into the battery from the charger regardless of the battery voltage measured, since during charging the terminal voltage is artificially raised by the drop across the battery's internal resistance. The terminal voltage therefore is not a reliable indicator of when the battery is fully charged.
The current flowing into the battery is measured via a current shunt resistor; when the current flowing into the battery falls below a programmed limit the charger cuts it off completely and only then is the battery considered fully charged.
|
H: Is a toy's remote controller interchangeable?
This Christmas I'm buying a couple of people remote-controlled quadcopters. Then it hit me: is there a possibility that the different remote controllers will cross-talk and wind up controlling a quadcopter they are not meant to control?
How do electronics engineers ensure that electronic toys will be controlled via one remote controller and not another?
Some thoughts:
each RF circuit is only tuned to listen to a single frequency, but how to create so many variants of the circuits?
each quadcopter has a unique identifier which is sent by the wireless controller before each instruction, so that the receiver only accepts signals from one particular controller
Any ideas?
AI: The original solution for the 27MHz band was different crystals (matched sender and receiver): http://www.ukrcc.org/27mhz.html
The ISM bands (433 / 915MHz) tend to use digital code switching, your option 2. The number of codes is usually small and may be selectable by a switch somewhere. This makes it straightforward to buy replacement remotes.
The 2.4GHz band is used by all kinds of protocols. Some of the pricier quadcopters define their own WiFi hotspot for control and telemetry; WiFi has both 13 different sub-bands and a coding system (SSID).
|
H: Why do people state their circuit uses transistors?
I'm pretty new to circuits, and still don't know exactly how transistors work, but when I search up a tutorial for circuits that include Transistors, they all have in the title, "One Transistor This", "One Transistor That".
Is there any specific reasoning behind that?
What is so special about Transistors?
Why not include "2 Capacitors"?
For example, I searched for "One Transistor FM Receiver". I know that's a little cheaty, but WHY do all these videos have it in the titles?
AI: Once upon a time, transistors were likely to be the most expensive part in a simple amplifier or oscillator design. Even today, a discrete transistor is likely to be 2 to 10 times as expensive as a discrete resistor or capacitor.
So a "one transistor" circuit was roughly half as expensive to build as a "two transistor" circuit. And cost was an important reason to prefer circuits with fewer transistors.
|
H: ICSP or Pre-Programmed Microcontroller
I've designed a PCB for small scale production; currently I'm hand soldering them in batches of 20 at a time. This PCB uses an ATMEGA168 microcontroller which I program before soldering to the board - this method works well for me at the moment.
I'm looking at having a batch of 200 boards manufactured and assembled for me as the product I've designed seems to be quite popular. Would it be better practice to either include an ICSP header onto the board or keep on pre-programming the ICs myself? Re-programming of the board after manufacture or assembly isn't an intended or required feature.
I'm unsure of how this is done in a larger scale production environment? Adding an ICSP header would increase the cost of manufacture slightly so is pre-programming a standard practice?
AI: I would suggest always including a header in that kind of quantity (small). If you get to larger quantities you can consider pads for pogo pins (you'd have to make a fixture to hold the PCB in alignment) or, in your quantities, you could use one of the Tag Connect cables which require only holes and unpopulated pads (they snap onto the board).
As well as allowing you to recover from a need for a programming update, this will allow you to have an assembly house populate the boards before programming, which could improve logistics.
Pre-programming is certainly an option, even in production quantities (you can even have the distributor do it for you), but it seems to be mostly used in the most simple of applications. Usually chips on boards are programmed (or capable of being programmed) post-assembly.
|
H: Single issue and Dual issue architecture
I am studying the PowerPC architecture on the MPC5644B. The documents from Freescale describe it as a single-issue architecture.
The data sheet states the following,
• e200z0h single issue, 32-bit core Power
Architecture compliant CPU
— Up to 80 MHz
— Variable length encoding (VLE)
— Supports Nexus3+
I did some Google searching and read forums; this seems to be some kind of mixed Harvard architecture, but I am not sure exactly what it is.
If anyone could help, it would be highly appreciated.
AI: A single issue processor is basically one instruction, one clock cycle.
Every clock cycle one thing happens. In a pipelined processor that basically means the pipeline gets shifted down a notch and a new instruction is read from memory. That, of course, is at its most efficient. Some instructions may take longer than one clock cycle to execute, so the pipeline would stall. But assuming all the instructions take just one clock cycle to execute, then for every clock tick you read just one instruction from memory.
That, at its most efficient, is called having "A CPI of 1", or "One Clock per Instruction".
That basically covers the majority of normal CPUs.
Now imagine a CPU which can issue two instructions for each clock tick - typically by fetching a pair of instructions together and dispatching them to two parallel execution pipelines. So for each tick of the clock you're running up to two instructions, fetching two instructions from memory, etc. This is called dual issue since it's issuing two instructions per clock tick.
In this situation it's possible to get a CPI of 0.5, or half a clock tick per instruction.
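Illustrative numbers only: the 80 MHz figure quoted for the e200z0h with a sustained CPI of 1 corresponds to a peak of about 80 million instructions per second; a dual-issue core at the same clock could in principle approach 160 MIPS. Real code sits below either peak because of pipeline stalls, branches and memory wait states.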
|
H: Basic voltage divider issue in my amateur circuit XBee
Once again I need your help with my project. I have a 4.5 V powered circuit. I put a voltage divider with 2 resistors in series to reduce my initial voltage of 4.5 V to 3.4 V (to feed an XBee module).
The issue is that if I do not connect the XBee, the voltage divider reduces the input voltage to the desired value (3.4 V), but when I connect the XBee module, the divider's output voltage drops to a very low value (0.85 V), making the XBee malfunction. Surely it is a basic mistake and I am asking something very obvious, but I'm a newbie.
Basic Circuit:
Vin(4.5 V)-----R2(10 kΩ)---------R1(33 kΩ)--------GROUND
|
|
Vout desired (3.4 V)
|
|
XBee
AI: The basic mistake you are making is that you cannot use a voltage divider to power a circuit.
Let's do the maths.
For the sake of the maths I'll say the XBee needs 100mA to run. I don't know what the actual value is, but that seems a reasonable value to me for an RF transceiver.
When unloaded you have a simple divider with current flowing through two resistors. Let's calculate that current:
$$
R_T = 10,000 + 33,000 = 43,000\Omega
$$
The current through that at 4.5V is
$$
I=\frac{V}{R} = \frac{4.5}{43000} \approx 105 \mu A
$$
Ok, so with that current through 10KΩ it drops
$$
V=R \times I = 10000 \times 0.000105 = 1.05V
$$
so the output voltage would be \$4.5-1.05=3.45V\$
All well and good.
Now let's add the XBee. That's going to draw 100mA through the output. So we have 105µA plus 100mA, or 100.105mA flowing through the 10KΩ resistor now.
$$
V=R \times I = 10000 \times 0.100105 = 1001.05V
$$
A thousand volts dropped over the 10KΩ resistor?! But, (you say), That can't be! There's only 4.5V available in the first place!
Quite right. So the entire (or certainly most of) the 4.5V gets dropped across that 10KΩ resistor leaving nothing left for the XBee to run on.
So if you want to draw more current than the voltage divider itself consumes, you're on to a loser right away. A divider like this is only good for small signals where you will be drawing minuscule amounts of current. Ideally the resistance of what you're connecting it to should be no less than 10 times that of the lower (33KΩ) resistor in the divider. Perfect for connecting to input pins on microcontrollers, ADC inputs, op-amp inputs, etc, where the input resistance (impedance) is typically in the hundreds of thousands, or even millions, of ohms. Not so good for powering something.
So what do you do then?
Well, the answer is very simple:
Use a low-dropout linear voltage regulator. They're ten-a-penny and usually just need a couple of capacitors with them. Very simple to use.
|
H: LED as Zener: is this a good way to lower the BOM cost?
I had an old AM radio board designed to operate with a 3V CR2032 coin cell. I want to revitalize this old junk with as few parts as possible and convert it into a USB-powered AM radio with a built-in speaker. Parts I have for this project include three caps, two resistors, a blue LED, an 1N4728 3.3V Zener, a speaker and an LM386 audio power amplifier.
simulate this circuit – Schematic created using CircuitLab
My question is: can I get away without the Zener and instead use the blue power-indicator LED (the ones in my batch have a forward voltage of about 3V), like this:
simulate this circuit
AI: Yes, however you should bypass the LED with a capacitor such as 100uF electrolytic and you should expect the LED to vary in brightness a bit with the sound output.
You may not be able to get full sound power output while keeping the LED within maximum current specifications.
You could put a diode (eg. 1N4148) in series and use an emitter follower (eg. SS8050) to avoid both problems.
Or use a TL431 which costs less than CNY 50 per K in Shenzhen.
|
H: Thermal grease / adhesive: Higher Conductivity & Lower Impedance or vice-versa?
I have been reading a fair bit about Thermal grease & adhesive comparisons.
There is a lot of information out there - even including tests made with tooth paste(!), which is mind blowing for me.
I have access to a limited variety of thermal grease & adhesive products, however.
And I would like to understand what to look for when buying these products.
My question:
Given two similarly priced thermal grease, paste, or adhesives:
Do I go for the one with higher conductivity level AND lower impedance, or vice-versa (ie., lower the conductivity & higher the impedance), or other?
AI: Higher thermal conductivity (lower thermal resistance/impedance) means less temperature drop, all other things being equal. That is what you want as it will minimize the heating of the semiconductor die by minimizing the temperature difference between the die and ambient.
The purpose of thermal grease is to fill the small (air) gaps between surfaces that are not perfectly smooth. If the surfaces were ground smooth and flat to a mirror finish, thermal grease would be of limited value as it would space the surfaces apart. Very smooth metal blocks such as gauge blocks used by machinists and in metrology labs can be stuck together simply by twisting (wringing) them together, and the gap is negligible.
I have doubts about materials that may contain solid particles such as toothpaste- even if the thermal conductivity seems acceptable, the minimum layer thickness may be too high. If they're water-based, the water will dry out leaving air gaps. Grease has high viscosity but can be squeezed quite thin, and a good grease will not dry out.
|
H: Transistors to switch 12V 2A
I need to switch some power to a device, so I was wondering if this will work OK, or whether I need to add something else to protect it or make it work correctly.
I ran some simulations on CircuitLab, and according to them this should work, but I would like the opinion of other professionals.
VCC = 12V (2A)
YELLOW_IN, WHITE_IN = 0-5V, 40mA (PWM from Arduino)
YELLOW, WHITE = VCC
I should also probably put a resistor between the Collector of Q1 and Base of T1.
This schematic is pretty straightforward: when you put 5V on YELLOW_IN it acts as a switch, and YELLOW is then connected to VCC, which is 12V.
Do you see any problems with this schematic or design?
As for this schematic, it acts the same as the other one, except for one more case, I need to be able to supply ~0.4V to WHITE when WHITE_IN is 0.4V, and when WHITE_IN is 5V, it will bridge VCC with WHITE.
But in that case what happens with the voltage/current coming from WHITE_IN.
Do I need to build that part of the circuit differently ?
Edit 1: I've modified the schematics according to the suggestions, is this sufficient now.
AI: Your circuit looks OK, but it won't switch 2 Amps.
The TIP42 has a typical DC current gain of <=50 at 2A, so you need to feed at least 40mA into its Base. In your circuit Q3 cannot deliver this much, because R11 is limiting its Emitter current to about 9mA.
You should reduce the value of R11 to 100 Ohms or less. This will also increase Q3's Base current and R9 could drop several volts, so you should reduce the value of R9 to about 1k, and increase the value of C3 to about 10uF.
When switched on the 2N3904 will dissipate about 0.3W (7V x 0.04A) so it could get quite hot. You can reduce this heating by putting a resistor between the Collector of Q3 and Base of T3. A 100 Ohm resistor would drop 4V at 40mA, taking half the heat away from Q3.
|
H: Output of Compact Fluorescent Light (CFL)
I have a few non-working compact fluorescent lights and I was wondering: what is the output of the circuit?
AI: The output is a high frequency sine wave. The voltage depends on the type of the tube. Typical voltages for CFL are 60V, but can vary vastly.
The oscillation is performed by folding back the current through the tube to the NPN transistors via the two secondary coils of T1. They are antiparallel, so only one of the BJTs can be operated at one time. L2 and C4 form the series resonator which provides a stable oscillation frequency.
During ignition R5 is conductive and closes the circuit via the (apparent) auxiliary electrodes. (Normally I expect a heating element in the bulb, but it is not indicated by any means).
The circuit elements on the lower left (Diac and 1n4007) may be for ignition phase, too, but I'm not sure yet.
|
H: Can varnish affect the performance of the air gap in an SMPS transformer?
In the past I have built several switched-mode supplies using off-the-shelf transformers just using the datasheet properties without having to worry too much about their construction, however I am working on a flyback design at the moment for which no readily available product will do, so I decided to wind my own.
I have some ungapped ferrite cores which I need to shave down to create the gap I require, I am quite comfortable calculating how big this has to be, however one little detail caught my eye - I am wondering whether the introduction of varnish will affect the behaviour of the air gap (assuming that varnish can get into the gap if I dip the whole thing in it). I have not seen any mentions of this anywhere, I am wondering whether it's because it does not matter or because I'm looking in the wrong places. I am not an expert in magnetics so I don't want to rely on my vague intuition, however I have the following equation for the flux \$\phi\$ in a gapped transformer core:
$$\phi=\frac{ANi}{l_c/\mu_c+l_g/\mu_g}$$
Here \$N\$ is the number of turns, \$i\$ is the current through each turn, \$l_c\$ and \$l_g\$ are the effective path lengths in the core material and gap respectively, \$\mu_c\$ and \$\mu_g\$ are the permeabilities of the core material and gap respectively (the latter being practically \$\mu_0\$ if it's air), and the effective cross sectional area \$A\$ is assumed constant for both the core material and the gap (neglecting fringing). Of course all of this assumes the core is not saturated.
I figured I can rearrange the equation to see the effect of the gap more directly:
$$\phi=\frac{ANi\mu_c/l_c}{1+l_g/l_c\cdot\mu_c/\mu_g}$$
The ratio of the gap length to the effective length in the core \$l_g/l_c\$ is somewhere in the order of \$10^{-2}\$, and (assuming the gap permeability is in the order of \$\mu_0\$) the ratio \$\mu_c/\mu_g\$ will be somewhere in the order of \$10^{3}\$. Hence, the combined term \$l_g/l_c\cdot\mu_c/\mu_g\gg1\$, so - just for the purposes of a crude simplification - the equation can be approximated as:
$$\phi\approx\frac{ANi\mu_g}{l_g}$$
This shows that for the given approximate range of gap sizes and magnetic properties, the magnetic circuit behaves (almost) as if it consisted only of the air gap, but more relevant to the current topic, that the flux is close to being proportional to \$\mu_g\$. I suppose this makes sense as in this scenario the core acts basically as a magnetic short circuit - but, the question this raises is that if the gap is calculated assuming that it contains air with a permeability very close to \$\mu_0\$, what happens when I dip the transformer in insulating varnish? If the varnish gets into the gap, does it change the effective permeability in the gap? Magnetic permeability does not seem to be a commonly listed property in insulating varnish spec sheets, and I don't know enough about magnetics to comfortably assume that the permeability of varnish would be close to \$\mu_0\$. Is this a valid assumption, or is there anything missing here? If it is valid, is there perhaps anything funny that happens at higher frequencies that would affect these effective material properties?
Many thanks!
AI: Rest assured you may safely assume that the magnetic permeability of varnish is close enough to that of air or vacuum (\$\mu_r \approx 1\$), and yes, the gap size will typically dominate many of the characteristics.
See for example this page, especially the non magnetic materials at the end of the table. Unless your varnish is loaded with metal or ferrite powder you should be fine.
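To put a rough number on it: assuming a diamagnetic susceptibility of order \$10^{-5}\$ (typical of organic materials such as varnish), the gap-dominated approximation from the question implies the flux changes by roughly the same fraction,
$$\frac{\Delta\phi}{\phi}\approx\frac{\Delta\mu_g}{\mu_g}=\left|\chi_{varnish}\right|\sim10^{-5}$$
which is orders of magnitude below the tolerance on the gap length itself.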
|
H: How is U1 powered in this schematic
This is taken from an app-note on the ONSemi NCP1014. Pin 1 of the NCP1014 is Vcc, and I don't understand how it is powered in this configuration. The datasheet linked above mentioned a dynamic self supply powered by charging a cap on Vcc but I'm really not sure how that works given the polarity of D5. Is current drawn through C4/5 here, charging those and C7 on the Vcc, then C7 discharged to power the device?
The App note also mentioned that D6 is only there for a fail condition and when activated would blow F1.
AI: Figure 2 of the datasheet, and especially the description below from your linked datasheet, are pretty self-explanatory; ask if there is anything that is not clear.
When the MOSFET is off, the current source shown in the above diagram charges C7 through the output caps until it reaches the proper voltage. This does mean that a small load is required, otherwise the output voltage could rise out of regulation; either that load or D6 can perform that function.
|
H: SPICE model of DRAM
Has a SPICE model for DRAM been developed? I am interested in any type of model at any level, from the DRAM memory cell up to the whole memory VLSI circuit. I am also looking for a DRAM operation tutorial that includes examples using SPICE.
AI: Your question is frankly a bit too broad (although you may not realize that yet). But since you're literally asking for any SPICE model for any DRAM circuit, the Micron page for one of their many DRAM chips [MT41J256M16HA-093] has a HSPICE model. HSPICE is a dialect/implementation of SPICE.
|
H: Step size for controlling stepper motor
I am using a PK243-03A unipolar stepper motor; its step angle is 1.8 degrees. I am planning to step the motor periodically at its starting frequency, which comes out to be 1.3kHz, to see how it makes 200 steps per revolution. What I cannot decide is the width of the pulse. How long should the pulse be high to get 1.8 degrees?
AI: The pulse duration is irrelevant. It's the number of pulses that matter.
In this case, you need 200 step cycles per revolution. You derive the pulse duration from how rapidly you want to step.
I'm assuming you are using a driver IC, which takes step/direction pulses. If you're driving the motor directly, things are more involved.
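As a minimal illustration (assuming an Arduino-style controller and a step/direction driver IC, neither of which is stated in the question), it is the step rate, not the pulse width, that sets the speed:

// Hypothetical Arduino-style sketch: 200 steps at ~1.3 kHz gives one revolution in ~154 ms.
// The pin number and the 10 us pulse width are assumptions; most drivers only need a few microseconds.
// pinMode(STEP_PIN, OUTPUT) must be set in setup().
const int STEP_PIN = 3;

void stepOnce() {
  digitalWrite(STEP_PIN, HIGH);
  delayMicroseconds(10);    // pulse width is not critical
  digitalWrite(STEP_PIN, LOW);
  delayMicroseconds(759);   // pad to a ~769 us total period, i.e. a ~1.3 kHz step rate
}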
|
H: Resistance Wire Circuit Safety Around Water
I'm working on a project that requires heating a copper pipe with water running through it. I'm aware that this is a common problem by itself, but since there will be people working around the system and the pipe has condensation running down it, I am looking for some safety guidelines to avoid shocking people working in the lab.
I calculated that I'd like to run the wire at around 48 VDC with 5.3Amps resulting in ~260W. I've been reading around the internet that when water is introduced that it substantially increases the danger with regard to fatal human shock (as opposed to dry hands touching lowish voltages).
How can I make the setup safe to work around, specifically are there any circuit design considerations and board assembly tricks (conformal coat to some extent I imagine), or is this more of a mechanical placement/guarding issue?
Issues I'm worried about: accidental contact with wire/pipe, accidental water dripping around control box or down wires into control box, plugging and unplugging heater from wall with moist hands (maybe that's just a common sense issue).
Thanks for the help.
AI: Cheap(ish), safe, off the shelf ..."
You can get a really really really safe result by buying commercially made water heater elements that operate from eg 30VDC and which have eg 3 x 300 Watt "spears" which can be used in series or in parallel or with 1 or 2 only operating. These screw into the standard fitting used by domestic hot water tanks and allow you to isolate the DC from the water. These are commonly used in alternative energy / solar type applications.
Example below is 12V rated. Higher voltage units are available.
Lower wattages can be obtained using PWM and/or lower than rated voltages.
eg the 600W unit below would give about 260 W at 8V.
Power under PWM is essentially proportional to on time %.
A number for sale on ebay - from about $US20
eg $US19.98 buy now Mansfield Missouri.
12 volt 600 watt dc Low Voltage Submersible Water Heater element 4 wind or solar
DIY with Nichrome wire:
48 VDC would be considered ELV (extra low voltage) in most cases and not directly covered by safety regulations that apply to the next level up - LV. 48V can certainly kill you if you let it, but it takes a fair bit of permission on your part.
Reducing that to 24VDC helps appreciably and, if you really cared, it would be very easy to use eg +/- 12V with centre earthed so no exposed voltage is more than 12V above (or "below") local ground.
You can even die using 12VDC but it's extremely hard to do and is "safe" in most cases. Standing in salt water and grabbing 12V wrt the water is not at all recommended. I have a still alive friend who can tell you how unpleasant that can be.
260 W / 12V ~= 22A
you can buy 12V and 15V 250+ Watt power supplies that would do it all at 12V.
Or half that in two halves as above.
Or you could operate a number of lower-wattage 12V Nichrome windings in parallel, with wire or eg the copper pipe as the return.
TELL US WHAT YOU WANT AND WE'LL TELL YOU WHAT YOU NEED:
Some more information on whether you really want to heat the tube or the water or .... would help. eg do you have a copper tube and you want hot water to come out and what is heated or how is not directly relevant, or is what gets hot when and where of importance in itself. And knowing how long the tube is, water flow rate etc does not apparently address your core question but may help with answers. Knowing the application in more detail can also help. Also the environment - university research lab / private company prototype / science fair or private project / ... .
|
H: Controlling 2 relays with 1 Open-Collector output
I have a board with open-collector output pins, and I need to control 2 relays synchronously (open and close at the same time) with 1 open-collector output. So how can I make the connection from the pin to the relays?
AI: It will be necessary to understand the current sinking capability of the open-collector driver on the output pins. You will also need to know the off state voltage that the open-collector driver can withstand without damage. It will be necessary to not exceed either one of these parameters.
Next step is to comprehend the relay coil requirements for the two relays. How much voltage needs to be applied across the relay coil to be able to pull-in the relay and how much current flows through the relay coil when it is energized. If the two relays are the same then they can be wired in parallel. On the other hand if the two relays require different voltages you will require buffering and a separate driver stage for each relay.
If the sum of the currents for each of the two relay coils is less than the current sink capability of the open-collector output AND the relay coil voltages are the same and less than the voltage standoff rating of the open-collector output, then you can connect them together as follows:
On the other hand if the rating of the output cannot meet the requirements of the relay coils then it will be required to buffer the output. This can be done various ways but here is one concept that can be used:
Component selection values need to be selected according to the current and voltage ratings of the output and the relay coils. Without detailed information being provided in the question selection cannot be specified here.
Note that it would be best to use a catch diode for each relay wired directly across the relay coil terminals.
Also note that if the two relay coils operate off different voltage levels then it would be required to have two separate driver circuits, one for each relay that are wired up as follows:
|
H: EEPROM Reading/Writing without Use of Programmer
I am designing a small password based door lock project which is fully made from logic gates and ICs without the use of any sort of programmable devices(it is a simple university project for the Logic Design Course).
And as assigned to me, I need a non-volatile memory for this lock in order to store the password, return its value and rewrite it whenever I want. I first thought of EEPROMs, but many told me that an EEPROM must be removed from the circuit in order to be reprogrammed, which is not acceptable: in my project the memory should stay inside the circuit. In addition, I was told that the EEPROM can only be reprogrammed through a PROGRAMMER.
Here is my problem:
How could I implement the EEPROM so that I can read/write data to it without removing it? That is to say, my own circuit should be able to read and write to the EEPROM without using a PROGRAMMER.
So is this possible? Is there a way to implement this? Or should I think of a non-volatile memory other than the EEPROM to store, read and rewrite the password? If there is another way, please let me know.
Any help is appreciated. I am really a beginner.
AI: What you've been told about EEPROM's is ancient history. Modern EEPROM's are readily programmable in-circuit. You should read some datasheets of "typical" EEPROM IC's in order to understand how to use them; if you have any detailed questions, you can ask them here (in a new question).
The easiest way to find examples of EEPROM IC's is to search the websites of some of the major electronics vendors; the three that spring to my mind are Element14, Digikey and Mouser, but there are others as well.
There are two types, serial and parallel. Serial use 2-4 wires to signal data serially. Parallel use 8-16 wires for data, up to 20-odd wires for address and 3 or so for control.
Within the serial options, there are SPI and I2C. Given that you're using hardwired logic, I'd very much avoid I2C. SPI would be manageable though.
EEPROM datasheets generally give a thorough explanation of the low-level protocol required to read and write them.
My advice off the top of my head is that something like a 25LC040 SPI EEPROM would be feasible.
|
H: AD620 in-amp reference pin
I have a weighing scale project using a 200kg load cell with a rated output of 1mV/V. I used 5V as the excitation voltage and an AD620 in-amp for my amplifier. I only need the 0-100kg range, so I only want to amplify the 0 - 2.5mV part of my output, which I believe corresponds to the 0 - 100kg load.
On choosing the gain, I have read the AD620 datasheet: http://users.ece.utexas.edu/~valvano/Datasheets/AD620.pdf which says that the output swing at overtemp (worst case) is -Vs + 1.6 to +Vs - 1.5, which equals 1.6V to 3.5V in my case because I'm only using a 5V single supply. So I have targeted my output in that range and settled on a gain of 381, or Rg = 130 ohms. Using 2.5V as my reference voltage, I will have an expected output swing of 2.5V to 3.4525V, which is within the safe range I assumed.
But the problem is, I don't get an output of 2.5V at zero load even though my reference voltage is at half the supply, or 2.5V. I only get about 2.47V, which is about 30mV off. According to the book I have read and according to this simulation by Analog Devices: http://designtools.analog.com/dt/inamp/inamp.html?inamp=AD620, the output at no load or zero differential voltage should be (V+ - (V-)) + Vref. So V+ and V- cancel out, leaving Vref as my output voltage, which should be 2.5V in my case.
This is my preliminary circuit:
What could be the problem? Any help would be appreciated. thanks!
AI: In an ideal in amp the equation you have is true. In a real in amp we have to deal with something called offset voltage. This will be the real difference between the inverting and non-inverting inputs, as opposed to the 0 found in an ideal in amp.
Looking at the datasheet we see that it has a worst case value of 125uV. Multiplying this by the gain of 381 gives us about 48mV. Your output is therefore completely within spec.
|
H: Resistor colour code chart
China has recently changed the colour code for resistors. I searched on Google but it didn't show the real value of the Chinese 4-band resistor. Please help!
AI: More seriously, are you sure you don't have a 5-band resistor (typically used for 1% and better resistors) and you're comparing it to a 4-band code? Or vice-versa?
Color/colour codes are pretty much dead in 2014-15; they're still used on legacy through-hole and MELF resistors and some through-hole and MELF capacitors, and some diodes, but newer (as in the last decade or two) SMT parts tend to be marked either with a numeric code or the newer alphanumeric code, and the smallest parts, unfortunately, are no longer marked at all. Edit: also axial lead inductors still have color codes.
|
H: Why gate - source and not gate - body voltage?
Considering an NMOS transistor with separate body and source terminals, with the following voltages: Vd = 10V, Vg = Vs = 5V and Vb = 0V, why wouldn't it conduct?
Isn't the channel of electrons formed mainly because of the potential difference between the gate and the body?
Is the source voltage disturbing the channel?
AI: The electrons come from the source, which is an N-doped region. This source quotes Chenming Hu's book "Modern Semiconductor Devices for Integrated Circuits":
...there are few electrons in the P-type body, and it can take minutes for thermal generation to generate the necessary electrons to form the inversion layer... The inversion electrons are supplied by the N+ junctions...
I also ran across a (probably illegal) PDF copy of Ali Niknejad's book "Electromagnetics for High-Speed Analog and Digital Communication Circuits". I'm not going to link to it, but here's the relevant quotation from section 2.3 regarding MOS capacitors:
In the above discussion you may have wondered, “Where do the electrons come from to form the inversion layer?” In the body of the MOS-C structure, electrons are minority carriers and few and far between. So when inversion occurs, where do we find all the electrons necessary to invert the surface? Well, there was a subtle assumption that if we apply a change in gate voltage, we wait long enough for thermal generation to create a sufficient number of electrons to form the surface layer. We may have to wait a very long time! In other words, if we apply a fast enough signal to the gate, there isn’t enough time for the minority carriers to be generated and thus the capacitance remains at the low value given by depletion.
While the depletion region can respond very quickly to our gate voltage since it is formed by majority carriers, the minority carrier generation is slow. There is a simple way to solve this problem, as shown in Fig. 2.25, where a n+ grounded contact is placed adjacent to the gate. Normally electrons are prevented from entering the body, like any good pn-junction. But as we raise the surface potential, electrons can easily diffuse into the surface of the structure. Since the energy distribution of electrons in thermal equilibrium is exponential, changing the potential barrier linearly results in an exponential increase in the number of electrons that can cross the n+-surface junction and likewise an exponential increase in surface conductors.
I redrew Figure 2.25 to illustrate what he's talking about. You can see how similar this is to a MOSFET:
In an NMOSFET, when \$V_S > V_B\$, you get a secondary effect called the body effect or the substrate bias effect. This effect acts to increase the threshold voltage. According to Wikipedia, the body is acting more like a second gate.
|
H: Frequency response of 2nd order RC low-pass filter
I am trying to calculate the frequency response of the following circuit:
simulate this circuit – Schematic created using CircuitLab
The two extra GND points are used to show that Vi and Vo are measured in relation to the ground.
Here are 2 methods for this:
Method 1
Method 2
The first method is the one I thought. As you can see, I used the formula of voltage divider two times. But the frequency response I found is different from the one in the 2nd method. At this point, let me clarify that the 2nd method supposes R1=R2=R and C1=C2=C but either way the frequency response is different. Basically, the 2nd method finds the same function as in here: http://sim.okawa-denshi.jp/en/CRCRkeisan.htm
I suppose one of the methods is wrong, but why?
Sorry for not typing the equations but I thought this would be a waste of time. If something is not clear please ask me to clarify it.
AI: Your answer according to "Method 1" would be correct if you defined \$Z_1\$ as
$$Z_1=\frac{1}{s C_1}||(R_2+\frac{1}{s C_2})=\frac{\frac{1}{s C_1}(R_2+\frac{1}{s C_2})}
{\frac{1}{s C_1}+R_2+\frac{1}{s C_2}}=\frac{1+sR_2C_2}{s^2R_2C_1C_2+s(C_1+C_2)}\tag{1}$$
Use Eq. (1) combined with the relation between \$V_0\$ and \$V_1\$ and you will obtain the correct transfer function.
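For reference, carrying this through (with the output taken across \$C_2\$, as in the schematic) gives the standard second-order RC low-pass result, matching the Okawa calculator linked in the question:
$$\frac{V_o}{V_i}=\frac{1}{s^2R_1R_2C_1C_2+s(R_1C_1+R_1C_2+R_2C_2)+1}$$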
|
H: Power Efficient GSM modules
I am developing a kit with a GSM module which has to receive and send commands. I have gone through many GSM modules (SIM900). All of them consume around 1.5A at burst time, so I am having trouble developing a power-efficient kit.
My doubt is this: a typical mobile phone battery has a capacity of around 1000mAh, so according to that calculation it should only last around 1 (busy) hour, but normally phones last for at least a day. Do mobile phones use low-power GSM modules? Is there a specific sleep/wake-up mechanism in them?
Your suggestions and helps around this topic is appreciated.
Thank you
AI: Peak current applies to very short pulses. Don't worry about this. In standby mode most GSM modules use much lower current (<50mA).
A long time ago (8 years?) I was using the Siemens XT65 GSM/GPRS module and it was pretty energy-efficient (energy consumption similar to or lower than a typical GSM phone). There should be no problem reaching a 5-day standby on a 1000mAh 3.6V battery.
The XT65 is a very old model now and they probably don't sell it anymore, but you can buy modern Siemens modules. They are not so easy to use and relatively expensive, but I'm sure their energy efficiency is far better than that of low-cost modules.
Here is datasheet for XT65 module: link
It says that it uses up to 2A (peak), but this applies to bad signal conditions (when BTS is far).
If you want low cost device - buy some ancient Nokia phone (5110/6110). You can "talk" with them by using UART and AT commands. They can work about 1 week on 3Wh battery (800mAh/3.6V).
|
H: Automatically upload binary file to nucleo after compiling in Keil
I bought Nucleo F411RE and I'm using Keil uVision 5 as IDE for programming it. To download program to Nucleo, you simply copy it in the virtual nucleo drive:
Of course, I'd like to have this task automated. To do so, I tried to use the Keil project configuration:
But that doesn't work:
*** Error: CreateProcess failed, Command: 'copy build\Nucleo_blink_led_NUCLEO_F411RE.bin H:\'
So what should I do? Keil is recommended by the Nucleo manufacturer, yet this simple issue seems to have no solution on the internet...
AI: I don't know for certain why the copy command didn't work (probably because copy is a cmd.exe built-in rather than a standalone executable, so CreateProcess can't launch it directly), but the xcopy command works well:
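A minimal example of the user command (the .bin file name and the H: drive letter are taken from the question; adjust both to match your own project output and virtual drive):

xcopy /Y build\Nucleo_blink_led_NUCLEO_F411RE.bin H:\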
|
H: Why erasing SSDs in increments of entire blocks does not damage adjacent cells?
I know that to avoid corruption and damage, SSDs are only erased in increments of entire blocks. But how does that not damage adjacent cells? Why isn't energy-intensive tunneling inhibition necessary when you're whacking a whole bunch of cells at the same time?
AI: It's not to avoid corruption or damage. Erasing in blocks is part of the definition of flash memory. Flash memory is descended from EEPROMs -- EEPROM + block erase = flash. There are a few reasons why this is good:
The mechanism used for erase (Fowler-Nordheim tunneling) is slow. In NOR flash, it's possible to use much faster mechanisms for programming like hot electron injection. We're talking microseconds vs. hundreds of milliseconds, a factor of ~100,000. In NAND flash, F-N tunneling is used for word programming as well, but at least one operation is faster.
Conceptually, erasing is more likely to be a block operation anyway. This is true for microcontrollers (where "re-flashing" means a full erase/reprogram), as well as SSDs (you can append data to a file one byte at a time, but deleting the file removes all of the data at once). Overwriting only a little data in an existing file is much less common.
Erase uses high voltages, so erasing in blocks requires less die area to implement bit- or word-selective switching. This is also true for programming, but you have to have bit-selective programming, so there's no way around it.
If you post a source, I can comment more on issues of corruption and damage. F-N tunneling does cause long-term oxide damage, which slows down erasing (and programming in NAND flash). It's also possible for a program or erase operation to corrupt ("disturb") nearby bits. Finally, in stacked-gate flash transistors, it's possible for over-erasure to cause corrupt reads, but I don't know how common that is in NAND flash.
EDIT: The Ars Technica article you linked to contains the following paragraph:
While SSDs can read and write to individual pages, they cannot overwrite pages. A freshly erased, blank page of NAND flash has no charges stored in any of its floating gates; it stores all 1s. 1s can be turned into 0s at the page level, but it's a one-way process (turning 0s back into 1s is a potentially dangerous operation because it uses high voltages). It's difficult to confine the effect only to the cells that need to be altered; the high voltages can cause changes to adjacent cells. This can be prevented with tunneling inhibition—you apply a very large amount of voltage to all the surrounding cells so that their electrons don't tunnel away along with the targeted cells—but this results in no small amount of stress on the cells being erased. Consequently, in order to avoid corruption and damage, SSDs are only erased in increments of entire blocks, since energy-intensive tunneling inhibition isn't necessary when you're whacking a whole bunch of cells at the same time. (There's a Mafia joke in here somewhere, I'm sure of it.)
I've only worked on NOR flash, so I can't say for sure whether there's something special about NAND flash that causes erase to be harmful. This presentation from Micron suggests that erase is done in blocks because the erase voltage is applied to the P-well, and having a separate P-well for each transistor string would take up too much space. As you can see in the presentation, inhibition is used for programming, and has the same problems that are mentioned in the Ars Technica article. The erase and program voltages are similar, as one would expect. I suspect the Ars writer got confused about the significance of these points:
Due to the design of the flash array, voltages can only be applied to entire rows (gates), strings (channels), or blocks (P-wells) of transistors.
This includes the high voltages used to induce F-N tunneling for program/erase.
If you don't want to program every bit in the selected row, you need to apply an inhibit voltage to some of them.
One component of the inhibit voltage is a medium-high voltage applied to the gates of every unselected row (wordline).
In strings containing bits that you're programming, this medium-high voltage acts as a weak programming voltage for the unselected bits.
Over time, this weak programming can gradually flip a bit that's supposed to be erased.
If you had selective erase, you'd have to do a similar sort of inhibition to protect the unselected bits.
I doubt that erase inhibition would be any more harmful than program inhibition, and they'd probably balance each other out anyway. But selectable erase with inhibition would take up more die space, and thus drive up the price. Given that flash memory was around long before NAND SSDs, any quality improvement from block erase would be a happy accident, not a modern design choice.
Again, I've only worked on NOR flash, so if there's anyone out there with NAND experience, please feel free to chime in.
|
H: Is there a common name for this [voltage-boosting] circuit topology/idea?
In the OPA454 datasheet I found an interesting circuit idea that I'm not sure what is it usually called (if there is a common name for it).
It involves using two opamps to shift the rails of a 3rd one. Note that unlike a bridged [tied load] configuration this circuit requires a doubling of the voltage supply rails. On the other hand, the load is not floating in this circuit, so you can combine this idea with the bridge to get 4X the output voltage swing (relative to using a single opamp). I'm omitting the 6-opamp bridged picture here as rather obvious; you can find one in the aforementioned datasheet.
My question is just what's a/the common name (if any) for this circuit idea. If I were to coin a name, "dynamic rails", "dynamic operating point" or something like that seems reasonable to me. (But these names don't get any sensible results back via google search.)
EDIT: I also saw something similar, but cheaper, with two BJTs instead of opamps for the "rail shifters" (A1 and A2) in a 1999 EDN article titled
"Bootstrapping your op amp yields wide voltage swings" written by [then] AD employees Grayson King and Tim Watkins. Using BJTs would introduce some more non-linearities, no doubt. So maybe "bootsrapping" might be the name for this technique... although Rod Eliott's page discussing the issues with this approach never calls it that, so I'm not convinced "bootstrapping" is the name for it... (EDIT3: Well, this was an incorrect reading of the purpose of that circuit; see comment below the question.)
EDIT2: In another article and in AD app note AN-232 (cited in that article), "supply bootstrapping" or "substrate bootstrapping" refers to something similar (altering the rail voltage via "feedback"), but in these articles it is done for a different purpose: a reduction of the input capacitance non-linearity for opamps with FET input stage... So, I'm guessing "bootstrapping" encompasses the idea I've asked about, but can generally mean the use of this supply-voltage-shifting technique for other purposes.
AI: I've seen "bootstrapping" used in a few more books/articles for this technique:
Small Signal Audio Design, 2nd ed., by Douglas Self pp. 136-137; shows both an opamp-based bootstrap and a BJT-based one. EDIT to add: As it turns out, there are some free excerpts from the first edition of the book published in EE Times; the relevant circuits [aimed at JFET input C/V tweaking] are published in an article called "Op amps in small-signal audio design - Part 2: Distortion in bipolar and JFET input op-amps.
Rail bootstrapping to reduce CM distortion".
Analog-to-Digital Conversion, 2nd ed., by Marcel J.M. Pelgrom, pp. 210-211 uses "bootstrapping" for an NMOS transistor operating above the rails. There are more CMOS books that use this term for the same purpose, e.g. Uyemura's CMOS Logic Circuit Design p. 319.
"Advanced techniques tackle advanced op amps' extremely low distortion" by J. Graeme, also reproduced in The EDN Designer's Companion, Here p. 213 the bootstrap is done for measurement purposes.
So I guess that clinches "(rail/[power-]supply) bootstrapping" as a relatively common name for this, even though not every source utters it when discussing such circuits... And the purpose of the rail bootstrap may not always be a voltage swing increase.
If someone finds another (relatively common) term, please contribute another answer.
EDIT: As interesting titbits (found thanks to a rather vague mention on a forum), I found two UK patents on this idea:
"Zero input capacitance amplifier" (1988) by Bernard / ABI Systems ; uses the 3-opamp version. Title is self-explanatory.
"Interface circuits" (1980) by Mohapatra / Sandman. Rather uninformative title, if you ask me; shows several variants, some with two BJTs and some with opamps all-around. The stated goal is to increase input impedance and CMRR.
Neither of these UK patents uses the term "bootstrap" though... as far as I can tell, anyway; old UK patents are not OCR'd, so search doesn't work in those PDFs.
There's also an output voltage boost circuit based on this idea in Linear appnote 67 in the article titled Extending Op Amp Supplies to Get More Output Voltage by Dale Eagar (starts on p. 58 in this long document). This one uses MOSFETs on the opamp rails. But it never uses the term "bootstrap". So yeah, "bootstrap" is a common enough term for this, but not universally used... The preferred term at Linear for this circuit idea is extended supply mode, which (if you google it) is found exclusively in their documents... so this alternative term appears to have less traction across the industry as whole.
|
H: How can I create an alarm circuit and program using the 8085 CPU and 8155 PIO?
For example: we have a smoke sensor, a speaker and my circuit. When the sensor is active, the speaker should sound. I'm a newbie in this topic. How can I build a circuit like this?
AI: I'll make an exception to the no-homework rule.
1) Configure the 8155 to have both inputs and outputs.
2) Connect the smoke detector output to an 8155 input.
3) Connect the speaker to an 8155 output.
4) Monitor the input port, and if the smoke detector is active, drive the speaker output with a square wave at some convenient frequency.
That's it. And no, you don't get any details. That's for you to study on. The 8085 and 8155 data sheets have all the information you need. Step 4 is software, so you need to pay attention to the 8085 instruction set, which is part of the 8085 data sheet.
Lots of luck. Work hard.
|
H: Should I ground an aluminum front panel?
I am building a power supply and it will be housed in a plastic case with a fan. I understand the wisdom of grounding a metal case, but in my case (no pun intended) I have an aluminum front panel. Should I ground it? My feeling is yes, but I want to know if this is correct as I haven't seen metal front panels grounded before.
AI: It depends on the type of power supply and how many of layers of insulation you have on high voltage lines.
Is this an AC to DC power supply or do you have mains coming into the box for any reason?
If you have a mains (120 or 220 VAC) live line (or other hazardous voltage - depends on application but generally above 60VDC) in the box and it is theoretically possible for a fault to result in a loose live wire that could reach and come in contact with the aluminum panel, it must be grounded (and with a solid connection that is capable of passing relevant regulatory ground-bond testing requirements).
Without this, a fault could lead to the panel (and therefore human operators) being exposed to live voltage. With it, such a fault results in a short to earth ground that will blow a fuse / pop a circuit breaker.
|
H: clk prescaler with possible glitches
I'm new here and have the following problem. I have developed a module in VHDL which scales the frequency of an input clock by the input prescaling value (0-255) such that the frequency of the output signal is f_out = f_in/(1+prescale).
My design works fine in simulation, but from past uni courses and experience I know this is still far from working in hardware.
What I believe might cause a problem is the following: I have an internal signal that switches level when a rising edge of clk_out should occur, and one which switches level when a falling edge of clk_out should occur. clk_out is then simply an XOR of both... so behind these two buffered internal signals I have a small combinatorial part that generates clk_out. My design keeps the duty cycle at 50%, so a falling edge can be generated at a falling edge of clk_in; thus I can't simply flop clk_out.
So my questions are:
Will the combinatorial part cause problems in an FPGA design? After all, clk_out will be used as a clock signal.
Is there perhaps a better way to approach this? I really only need the rising edge of clk_out, so the duty cycle is not too crucial; however, how will I realize the clk_in = clk_out case?
AI: (From a Xilinx experience)
You can place a timing constraint to ensure that the propagation time of the combinatorial parts (around the XOR) is kept shorter than the clock high and low states. Ideally, they should be balanced.
Clock buffers are automatically inferred when a combinatorial signal is used as a clock input. You can check that by reading the synthesis report.
Some FPGA families have special glitch-avoidance clock multiplexers, which could be used to select between the 'direct clock' and the 'clock division' paths: "BUFGMUX".
You can use a clock multiplier PLL to generate a 2X clock and achieve 50% duty cycle whatever the input clock shape is.
If you only need low frequencies (maybe up to 50...100MHz, depends on FPGA family), the combinatorial XOR should work. If you need high frequencies, a PLL, and/or clock multiplexers should be better.
|
H: Confused with mesh analysis
Something about mesh analysis confuses me and I can't find an explanation anywhere, as you can see in this picture (it's in French, but the schema and formulas should be enough to understand)
They basically explain how mesh analysis works, and that's fine; I understand the formulas. But then I get this exercise:
And as you can see they just put I2 = -2A. Why do we not care about the 12 and 3 ohm resistances?
Is it because this time we have a current source? Or because we are on the "edge" of the circuit?
Thanks for reading and thanks in advance to those who will answer
AI: It's because of the current source and the fact that I2 is the only mesh current in that branch. I2 can't be anything other than -2 amps, so you don't need to write a loop equation for it.
If you swap the current source and the 12 ohm resistor, then you get an extra equation (I2 - I1 = 2 amps) and an extra unknown (the voltage across the current source).
|
H: Amplitude Modulation Frequency
In AM receivers, we detect the modulating signal using an envelope detector. The diode rectifies the signal, and the detected signal has double the frequency of the original. Does this mean we will hear distorted sound if we connect an audio amplifier? Is it possible to divide the frequency by two?
AI: In AM you have a waveform typically formed like this:
Rectification chops off the negative portion of that signal, thus:
Filtering then removes the high frequency component:
A capacitor then removes the DC offset:
Nowhere in that does the frequency change.
Even if you were to use full-wave rectification, only the frequency of the carrier would change - the modulated signal frequency will be just the same as it was.
In the image you provided in your comments:
The modulated signal is modulated twice - once on the positive axis, and once on the negative axis. This is the same as the top image above. However, the modulating signal has an amplitude that crosses the zero axis, so you actually end up with two signals crossing over each other, like this:
When you then rectify and filter that waveform you get just the positive portions of both waves:
With pure audio modulation (modulating two audio signals together) this can be desirable as it produces very noticeable effects and artefacts (you would never typically demodulate this signal; it would be the finished audio product in its own right). In RF modulation though it's not wanted, so the incoming signal should have an amplitude of no more than 50% of the carrier amplitude, and be offset to half-way up (and down) the carrier wave so the two sides of the wave don't cross over.
|
H: half bridge IGBT not switching high side on
I have designed a PCB with a FAN73933 half bridge driver. It is designed to drive 2 IGBTs (FGH40N60SMD) with +- 170V rails. I have attached a schematic for better understanding. The problem is I can not seem to get the high side to turn on. The system uses a boot strapping method to boost the gate voltage for the high side.
When I scope the boot capacitor I see a steady ~12V, which is my Vcc for testing. My rail voltage is +12 and 0V, so all my GND, COM, -V are tied together. I'm driving the system with a simple Arduino for testing; it's outputting a 1kHz 50% duty wave with ~10% dead time, and this was verified on a scope. The low side gate driving waveform is as expected and I believe the low side is turning fully on.
The high side never seems to have any voltage on the gate. That being said, as I crank up the frequency above ~20kHz I begin to pull a lot of power. I'm using a 50 Ohm test load, so the max current draw should be ~0.5A, but I'm seeing ~6A based on a shunt resistor.
What I don't understand is that there appears to never be any gate voltage, so I can't be having shoot-through. Also the controller has shoot-through prevention; it will not turn both gates on at once. I'm really confused; all the signals look OK except for the bootstrap and the high-side gate.
The boot diode is a 600V Schottky also.
One item to note is when I start pulling a lot of power the high side IGBT will get warm.
Any input would be appreciated.
AI: As WhatRoughBeast and JonBR said, it is the fact that Pin 11 is not connected to the output. I made the connection and it worked as expected for 5 seconds till my wire slipped and I blew out the driver.
Amazing how fast another set of eyes can find a mistake.
|
H: Modifications to 555 LED Fade Circuit
I was looking for a circuit to fade an LED on and off using no microcontrollers, and found this nice circuit:
http://www.555-timer-circuits.com/up-down-fading-led.html
it seems like exactly what I was looking for, so I went ahead and used it with a few changes:
I used a 2n2222 as the npn transistor instead of the BC547
I used a 12V source instead of 9 volts because I'm using LEDs from a 12V LED strip (LED + resistors all built into the strip).
I used a 556 instead of a 555 because I had them on hand.
I built the circuit as above (with the pinout changed accordingly for the 556), and it didn't work. I was a bit surprised, and double-checked all of the connections -- but they were all correct.
Would any of the changes have altered how the circuit functions?
I'm a bit concerned that changing the source to 12V might have effects, since that changes the threshold and trigger voltages -- could that be why the circuit isn't working?
Should the LEDs and resistors be attached to the collector instead of the emitter of the 2N2222? I remember reading that LED driver circuits should be designed this way (something about V_be, IIRC).
Any other ways that changing the circuit may have prevented it from functioning?
thanks!
AI: A simple analysis of the original circuit and the substitutions you made goes something like this.
The 555 circuit shown is a squarewave (1:1) oscillator. The frequency being determined by R1 and C1. The voltage at the junction between these two components changes between 2/3 rds and 1/3rd of the supply (set by input pins 6 and 2). For a 9V supply this will be 6V & 3V. Changing the supply to 12V gives 8V and 4V. The voltage varies in an approximately triangular waveform.
This voltage is connected to the base of an NPN transistor, Q1 (in this case a BC547). Just about any small signal NPN would do, so a change to a 2N2222 type would have very little effect on the operation. The transistor is connected as an emitter follower, so the voltage at the emitter will be 0.6V lower than its base voltage. For a 9V supply this voltage would be between 5.4V and 2.4V. For a 12V supply this would be between 7.4V and 3.4V.
The 556 is just a double 555 in one package so using one half of the chip should still work (provided you have the correct connections).
Your last substitution is where it all goes wrong - using a 12V LED (with built in resistors.)
A normal (visible) LED needs between 2 - 4 Volts to turn on. This voltage depends upon colour (i.e. the energy gap). R3 is chosen to limit the maximum current through the LED (and transistor). The 9V circuit matches the 5.4V - 2.4V range very nicely for something like a RED LED (about 2V). For the 12V supply you would be better off using a COOL WHITE or BLUE LED (about 3.8V). The R3 value of 470R is fine. For a 12V circuit and WHITE LED (3.8V turn on) this gives a maximum current of (7.4 - 3.8)/470 A or about 7.7mA. Compared with (5.4 - 2)/470 A or about 7.2mA (RED LED, 2V turn on).
|
H: Get +12/-12VDC out of 24VDC Supply
I'm trying to use a single 24VDC power supply but rig it to give a heating coil the equivalent of 24Volts but at +12/-12 in reference to ground for safety reasons. Is this as simple as a voltage divider with a high value resistor to ground? Would that work or just kind of short out the supply? Is the negative terminal of a power supply normally connected to ground or is it floating? Let me know if there's a cheap, simple way of accomplishing this aside from buying two 12V power supplies or one that outputs +/-12VDC. Thanks.
AI: It depends on your power requirement. This question is a very classic one, a quick search on Google with the right keywords gave me this first result which explains why a simple voltage divider is not enough in most cases: the more you try to minimise the quiescent current of the divider, the less stiff the symmetry of the rails is (there is an offset in the virtual ground), as the offsets will increase with current draw.
Below are two solutions suggested by the author.
Even there the voltage divider should be replaced with a voltage reference (e.g. TLE2426) if the power draw is asymmetric (but it sounds like it won't).
You'll certainly recognise a push pull structure here, common in audio.
However this will NOT make the system safer, as there is still 24V across the heater. But 24V is still safe-ish in a dry environment.
|
H: Can you setup UART Reception on dsPIC33EP512MU814 on eval board dsPICDEM MCLV-2
According to microchip, the dsPIC33EP512MU814 can be integrated with the dsPICDEM MCLV-2 Development Board using dsPIC33EP512MU814 Plug-In-Module
However, according to the dsPICDEM MCLV-2 Interface map the UART Rx and Tx of the evaluation board are pins 49 and 50, respectively. So then according to plug-in-module interface map that UART Rx must be mapped to pin SDA2/RP100/RF4, and UART Tx must be mapped to SCL2/RP101/RF5.
I'm aware that you can map out UART Tx to RP101 by simply assigning
RPOR9bits.RP101R = 1;
However, I was expecting the UART Rx pin to be an RPIx, not an RPx.
Does this mean you can't perform UART Reception?
AI: From the datasheet (emphasis mine):
The number of available pins is dependent on the particular device and its pin count. Pins that support the Peripheral Pin Select feature include the designation “RPn” or “RPIn” in their full pin designation, where “RP” designates a remappable function for input or output and “RPI” designates a remappable function for input only, and “n” is the remappable pin number.
Since an "RPx" pin can be either input or output, you can remap the UART Rx to it.
RPINR18 = 0x0064; // UART Rx PPS -> RP100
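For symmetry with the Tx mapping shown in the question, the same assignment can also be written in the bit-field style (register and field names assumed from the standard XC16 device headers; note that writes to PPS registers may require the IOLOCK unlock sequence, depending on configuration):

// Assumed XC16 names; sets the same U1RXR field as RPINR18 = 0x0064;
RPINR18bits.U1RXR = 100;   // UART1 Rx input <- RP100 (pin RF4)
RPOR9bits.RP101R = 1;      // RP101 (pin RF5) -> UART1 Tx (function code from the question)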
|
H: Buffering a constant current source IC to increase maximum voltage rating
The actual question: Which of these configurations, if any, is the most effective for my goal of increasing the maximum voltage rating of the outputs of a constant-current driver IC?
As established in a previous question, a constant current source IC may specify a maximum voltage rating that refers to the entire voltage across the load rather than the voltage remaining after what the load drops. For example, when the specs for TI's TLC59281 specify VO (voltage applied to output) of 17V, this indicates that a string of LEDs connected to an output must be supplied with no more than 17V, regardless of the voltage drops of the LEDs.
An application circuit I'm presently developing already makes use of the '59281 for a few single- and dual-LED outputs at 20mA. I'd like to add several 10-LED output strings with, say, a 48V supply, but the IC alone isn't rated for such.
One comment on the answer to the original question indicated that a simple buffer—specifically, a common-base NPN buffer—might be all that is necessary to work around the rating. After a little investigation, I've come up with some possible variations on the topology and need some insight as to which might be the most useful.
Pictured are some possible configurations. All transistors pictured are rated for 65V or greater.
(A) The way the driver is expected to be run, unbuffered at less than 17V.
(B) The way the driver would ideally run, unbuffered at more than 17V. As established, this is not supported.
(C) Buffered with NPN common-base buffer, base voltage of 1V. Ideally the driver output sees no higher than about 0.3V. Output current is slightly less than driver current (dependent on hFE of transistor) but this effect is hopefully negligible.
(D) Same as (C) but with base voltage of 5V. Ideally the driver output sees no higher than about 4.3V. Seems to work about the same in Falstad as (C); unsure about any actual advantages/disadvantages.
(E) Mostly wishful thinking—this configuration would have output current roughly equal to input current. However, this doesn't fix the base voltage below 17V, so it's entirely possible that this is ineffective for increasing the maximum voltage rating.
EDIT: As indicated by one answer, the TI application note SLVA280 describes two similar solutions to this very problem. The following is my digest of that app note.
(F) is similar to (D) but uses an N-channel MOSFET. According to the app note, the gate resistor is included to suppress oscillations caused by fast switching (and might even be omitted for a slower FET).
(G) is similar to (D) but includes a base resistor. The resistor is selected to minimize base current while still allowing the full maximum LED current on the collector.
The tradeoff is essentially precision versus cost. The MOSFET in (F) can cost substantially more than the BJT in (G). But (G) is far more sensitive to the value of R and the current gain of the transistor (which itself tends to be rather loosely specified) while (F) appears to be more forgiving.
(By my cursory examination, at voltage ratings of above 60V or so, the differences in price are rather less pronounced, so the MOSFET version is probably the way I'll go.)
For (G), R is defined thus:
\$\frac{\left(V_{CC}-V_{BE}\right)\beta}{I_{LED\_OC}}<R<\frac{\left(V_{CC}-V_{BE}\right)\beta}{I_{LED\_max}}\$
where \$\beta\$ is the current gain of the transistor, \$I_{LED\_max}\$ is the maximum LED current, and \$I_{LED\_OC}\$ is an "overcurrent limit" arbitrarily defined as 1.2 to 1.3 times \$I_{LED\_max}\$.
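As a rough worked example with assumed values (say \$V_{CC}=5V\$, \$V_{BE}\approx0.7V\$, \$\beta=100\$, \$I_{LED\_max}=20mA\$ and \$I_{LED\_OC}=25mA\$), the bounds come out to:
$$\frac{(5-0.7)\cdot100}{0.025\,A}=17.2\,k\Omega<R<\frac{(5-0.7)\cdot100}{0.020\,A}=21.5\,k\Omega$$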
AI: TI document SLVA280, "Using TLC5940 With Higher LED Supply Voltages and Series LEDs", describes a couple of ways to use a constant current sink device with a voltage beyond what it is rated for.
Both involve an N-type transistor in series with the sink, with a base/gate resistor to Vcc. The gate resistor prevents oscillation of the MOSFET solution, and the base resistor is sized such that the BJT passes slightly more than the desired current based on its current gain. The operation of both solutions is described in full detail in the document.
|
H: How do I wire my 4066N analogue switch
I plan to use one of these to throw two switches at the same time:
http://www.digikey.ca/product-detail/en/74HCT4066N,112/568-7854-5-ND/1230913
The switches should be closed when the voltage from this debounced circuit is high:
Which diode to use on my RC switch debounce circuit?
This is the pin layout from the switch documentation:
Which wires connect to which pins? These are all the wires I will use:
5V DC [pin 14, Vcc]
Ground [pin 7, GND]
Schmitt Trigger output [???]
Circuit 1 wire 1 [???]
Circuit 1 wire 2 [???]
Circuit 2 wire 1 [???]
Circuit 2 wire 2 [???]
Upon trigger, I would like Circuit 1 wire 1 and wire 2 to be connected at the same time that Circuit 2 wire 1 and wire 2 are connected.
Please let me know if anything else seems wrong with what I'm trying to do here. Thanks for your help!
AI: Each of the nY and nZ connections are connected internally, with the nE pin controlling them. So, connect both wires of circuit 1 to nY and nZ pins with the same value for n, do the same for both wires of circuit 2, connect the control output to the corresponding nE pins, and tie the other two enables high or low.
And of course make sure that the circuits do not violate the absolute maximums given in the datasheet.
|
H: Long-range RF comm with decent data rate
I'm building a quadcopter powered with Arduino. For testing purposes, I'm currently using bluetooth. In the future, however, I would like to have some form of fairly long range communication that also has a respectable data rate. The reason is that eventually I'd like to attach a µCAM to the copter for FPV (probably will end up being low-bit grayscale). I have looked at the Synapse SM700, but it appears to be discontinued, and complicated to use with Arduino. I'm using an Arduino Pro Mini. Another option I'm considering is taking a transmitter-receiver pair from a regular quadcopter controller- something similar to this. Would I be able to send video over it? It seems I'd need to do something to convert digital signals from the camera to aux signals. Would it be possible? Transmitter (receiver can be fairly large as well). In short, I'm looking for an RF system with 1-2 km. of range which I can use to send low-quality grayscale video and control signals to the quadcopter. Transmitter can be any size, and receiver can be fairly large/heavy. Is there anything on the market like this, or am I out of luck?
AI: There are any number of WiFi or 2.4 GHz ISM band no-licence-needed transceiver modules available.
Cheap and probably suitable are the many NRF24L01 modules with a suitable antenna.
Here is a good example of a DIY system. The interface is actually SPI, requiring some work on your part for interfacing.
https://achuwilson.wordpress.com/2012/05/18/long-range-rf-link-using-nrf24l01-rf-transceiver/
Typical 2.4 GHz ISM band NRF24L01 module - this one with external antenna. Often under $10 each. Or two for $5 from China. (The external antenna connector is a significant contributor to the cost :-) ).
Here the antenna design is the "enabling technology".
This is essentially a bent piece of Copper wire (albeit bent in a rather special way) plus an Al sheet groundplane.
He reports ~2 km line of sight, which is presumably ground-to-ground. Ground-to-air should be substantially greater. At 2km your copter would be invisible and you'd be an order of magnitude or so beyond what most licence arrangements would allow.
There are even cheaper units that provide either UART interfaces with ease or full WiFi interfaces with due digging through the enthusiast literature. eg ESP8266 IC based modules cost typically about $US5 to $10 retail (SEED studio et al) and can be used as a UART bridge but contain an accessible embedded microcontroller with 802.11 stack.
Example - this one with on board antenna. Ranges of 2+ km ground-ground are reported using a "rubber ducky" 1/4 wave loaded aerial at portable/mobile end.
|
H: How do I wire this particular variant of 2.5mm female jack?
The second image shows the pins, not the top one.
The pinouts are a bit confusing here. I've wired a jack before, but none like this. Can someone please help me out?
AI: As you insert a plug into the jack, some of the pieces of metal inside will move along the outside of the plug, and some will remain stationary. The pieces that move are the ones you need to solder the signal to, and the others are connections to the moving pieces when there is no plug in place. The specific application will dictate whether or not the stationary pieces should be soldered to.
|
H: Do electret microphones have polarity?
So do electret mics have polarity?
The following circuits seem to contradict each other...
1)
2)
Now the problem is that one image shows polarity while the other one does not!
What's the holdup here?
AI: Yes, they have polarity and it has to be right to work- the output needs to be biased positive with respect to the ground terminal.
The ground/GND terminal should be common with the case, so you can check polarity with a multimeter.
Here, from a Panasonic Datasheet, is a typical arrangement:
The two diagrams you show are equivalent, one is just drawn upside-down (potentially.. as it were.. confusing and not to be encouraged, but still valid).
|
H: What encoding is used in this signal?
I have a cheap wireless pool thermometer (AcuRite 617) and I'd like to intercept the temperature data at the receiver and use it with a computerized data-logging system.
Conveniently, inside the receiver is a small break-out board that is connected to the antenna and has digital "V", "G", "D", and "SH" pins:
Here is a segment of captured data from the "D" pin during a transmission (these happen once per minute). Before this segment, there is what appears to be much-higher-rate data, but I believe that might be noise -- this is the beginning of the 1.36kHz / 680Hz data.
I've googled a bit and can't find an encoding that looks quite like this, but if I were to guess what's going on, here's what I'm thinking:
the initial 4 cycles of 680 Hz are to synchronize the clocks but contain no data
the 13 cycles of 1.36 kHz (2x the initial rate) that follow appear to have one of two forms: they either drop low before the midpoint of the cycle or after it -- I'd assume one form is a logical one and the other is a zero.
after that, there appears to be a weird gap, but if you discount the part of the low that is part of the preceding "1", then the remaining gap is 735 µs, which is a (phase-correct!) continuation of the 680 Hz preamble.
Am I looking at this correctly? Is there a name for this encoding?
Some further notes on the break-out board:
the board is marked "RF211" and looks remarkably consistent with the MICRF211, a "general purpose, 3V QwikRadio Receiver that operates at 433.92MHz"
the MICRF211 data sheet has the following figure (with very little explanation), which looks tantalizingly like what i'm seeing except for the double-data-rate square wave as compared to my capture:
2016-02-14 Update: I've revisited this project and appear to be getting a clean 64-bit stream between a 4-cycle preamble and a 1-cycle "postamble", after which the display board shuts down the RF module by pulling ^SH low (top line):
According to Micrel's "33/66% PWM" scheme (which appears nowhere else on Google), that's
-_-_-_-_0000011110011000110000000000000000000000100011101000010010101010-_
So now I have to start manipulating the temperature to decode the bits. Here ("x") are the bits that seem to change without any apparent change in the display:
0000011110011000110000000000000000000000100011101000010010101010
------------------------------------------------x----xxxx----xxx
I assume these are either least-significant bits or battery-level (which is only shown as "Low" when it drops significantly).
2016-02-15 Update: I'm taking the show on the road to give the new "Reverse Engineering" stackexchange a crack at determining meaning: https://reverseengineering.stackexchange.com/questions/12048/what-is-contained-in-this-transmission-rf-pool-temperature-sensor-base-unit-re
AI: Micrel refers to it as a 33/66% PWM scheme. It appears to be a fairly simple, but ad-hoc protocol.
PWM stands for pulse-width modulation. There is a Wikipedia page that goes into more detail, but in short, PWM is where you keep a fixed period, so here it is the time from rising edge to the next rising edge, but you vary the percentage of time spent in the high state by changing when the falling edge occurs. For this one, you can see that it is 33% high for a '1' and 66% high for a '0'.
The initial series of pulses are equal high and low times. This is usually done to allow the receiver to sync up before actual data is received.
See http://www.micrel.com/_PDF/App-Notes/an-22.pdf for some more details on what they expect for the module.
A typical way to receive this sort of encoding would be to feed it into a timer input capture pin of a microcontroller. Or, you can simply connect it to a general input and sample at 4-5x the PWM rate. The algorithm for decoding is not too hard from there.
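As a minimal illustration (assuming an Arduino-style microcontroller reading the receiver's "D" pin, which is not something either post specifies), each bit can be classified from its high time alone, since the nominal period is ~735 µs and the two symbols are roughly 33% and 66% high:

// Hypothetical Arduino-style sketch; the pin number, 370 us threshold and 5 ms timeout are assumptions.
// Call this repeatedly once a transmission is in progress (e.g. after the 680 Hz preamble is detected).
const uint8_t DATA_PIN = 2;

int readPwmBit() {
  unsigned long highTime = pulseIn(DATA_PIN, HIGH, 5000); // high time of one ~735 us cycle, in us
  if (highTime == 0) return -1;                           // timed out: end of frame or no signal
  return (highTime < 370) ? 1 : 0;                        // ~245 us -> '1' (33%), ~490 us -> '0' (66%)
}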
Alternatively, as suggested by markt, you can work your way back to the temperature sensor itself. But, if it is an analog output signal, you will have to convert it to digital yourself and may have slightly different numbers in your logging from the original output.
|