H: Why is ground shared between primary and secondary side in LT3750 circuit example LT3750 is a capacitor charger controller IC. The sample (typical application) circuit in its datasheet shows an otherwise isolated secondary side sharing a ground with the primary side. A PCB layout suggestion for the same circuit later also shows that the ground is shared, so this is an intentional part of the design, not a mistake. What's the purpose of sharing a ground like that? Wouldn't it negate the benefits of isolation and make the lower voltage primary side susceptible to destruction by the high voltage from the secondary side (if the positive rail of the secondary somehow gets close to the primary side or sparks)? AI: The simple answer is that it is just an example circuit. LTspice needs a DC path to ground for every node, and the easiest way to provide one is to put the standard ground symbol on both sides. To simulate isolated grounds, put a very large resistor (1,000,000,000 ohms or so) in parallel with a very small capacitor (1 picofarad) between the two grounds. This gives you an isolated ground as you would have in real life. The LT test fixtures don't bother with this, to keep the diagram clearer and the simulation output easy to read. Obviously you will then have to give the isolated ground a label, and measuring voltages relative to the isolated ground will require putting an expression in the simulation output to reference the voltage you're interested in to the ground you're interested in.
H: Crypto benchmarks on STM32F4 Can someone give me an indication of the performance (time) of ECDH executed on the STM32F4 board? The STM32F4 has hardware acceleration for ECC (http://www.st.com/st-web-ui/static/active/cn/resource/technical/document/user_manual/CD00208802.pdf) but I can't find any benchmarks. [background] I want to set up a secure channel between a smartphone and an embedded device. The embedded device will use the STM32F4 MCU. The communication link will be established over NFC. NFC has an inherent protection against man-in-the-middle attacks so we can use a standard ECDH to establish a symmetric key and encrypt further data exchange. To make sure that ECDH is feasible on the STM32F4 I am searching for some benchmarks. AI: NFC has an inherent protection against man-in-the-middle attacks This is not true. There is absolutely nothing in NFC that would protect you from man-in-the-middle attacks in any way. Regarding your main question: doing ECDH on any modern microcontroller is much faster than your NFC link can transfer the data, so don't worry about performance here.
H: Does input offset voltage for amplifiers remain almost the same In op-amp datasheets, input offset voltages are specified. Does this input offset voltage remain the same (no significant difference) for each IC, or can it change significantly each time I use the device? E.g. if the offset is specified as ±1mV in the datasheet and I measure +0.5mV for one IC, can I expect it to remain +0.5mV (± some uV) for that particular IC over its lifetime, or can it have any value between +1mV and -1mV every time I switch on the device? AI: It should be fairly stable for a given chip, when measured at the same temperature, supply voltage and common-mode voltage (and assuming the chip is not damaged by overheating, electrical transients etc.). There will be some drift over time, but it should not be large for most op-amps. Some early CMOS-input op-amps had significant long-term drift if they were exposed to large differential voltages, but that's more the exception than the rule. At different temperatures, the offset voltage will be different, and the limits on that change are usually specified by a parameter such as TCVos, in microvolts per kelvin. Here are some specs for a typical precision op-amp (OPA177): Here they specify the long-term drift as typically 0.4uV/month, the maximum Vos over the entire temperature range, and the maximum/typical drift with temperature. The temperature drift spec is usually done using the 'box' method, where an imaginary box drawn around the offset voltage graph from -40 degrees C to 85 degrees C has a height that is not to exceed 150uV, which represents 1.2uV/degree C. The total offset must not exceed +/-100uV at any point in that range, and the offset at room temperature must not exceed +/-60uV. In practice the curve will be smooth and often will be monotonic. Note that there is no guarantee that the slope of the curve will not exceed +/-1.2uV/degree C; only the average over the whole temperature range is guaranteed.
Cheap general-purpose op-amps will have much larger offsets and drifts, and often the drift with temperature and time is not specified, but the principle is the same. Typically, the larger the initial offset voltage of an untrimmed op-amp, the larger the drift with temperature will be.
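The "box method" arithmetic quoted for the OPA177 is easy to check. A quick sketch of those numbers (150 uV box height over the -40 to +85 degC range):

```python
# Average drift implied by the "box method" spec quoted above:
# a 150 uV tall box drawn over the -40 degC to +85 degC range.
box_height_uV = 150.0
t_min_C, t_max_C = -40.0, 85.0

avg_drift_uV_per_C = box_height_uV / (t_max_C - t_min_C)
print(avg_drift_uV_per_C)  # 1.2 uV/degC, the figure quoted above
```

Note this is only the *average* slope guarantee; as the answer says, the local slope at any one temperature may exceed it.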
H: Heat Sink Isolation What is the best strategy for heat-sinking a MOSFET? I have a heat sink all picked out here. I also have a MOSFET picked out here. I know that the tab is electrified with the drain. There are a few choices on how to electrically connect the MOSFET to the heat sink. Set the heat sink to the same potential as the drain and not isolate the tab. Set the heat sink to the same potential as the drain and isolate the tab with a Sil-Pad. Leave the heat sink floating and not isolate the tab. Leave the heat sink floating and isolate the tab with a Sil-Pad. Ground the heat sink and isolate the tab with a Sil-Pad. What is the best option? Am I missing anything? The MOSFET will be used along with an MCP73844-8.4 to charge a 2-cell lithium-ion at 1A. From my understanding the chip throttles the MOSFET and does not use any high-speed switching. The input voltage for charging is going to be around 10V. There will be no enclosure. The heat sink will be upright in open air. AI: What is your circuit topology? Heatsinks almost always float (but are capacitively grounded) or are grounded. I have never seen them connected to a MOSFET drain. Under PWM, the heatsink potential would swing up and down, which would draw immense leakage currents through the system. On top of introducing a lot of electrical noise, it would also be dangerous. Use option five (ground the heatsink) or option four (but use caps to provide some current path) and use Sil-Pads (http://www.digikey.com/catalog/en/partgroup/sil-pad-k-4-series/1298). The benefit of a floating heatsink is the decreased parasitic capacitance. The MOSFET has some capacitance to the heatsink and the heatsink has some capacitance to ground/DC bus negative/DC bus positive. These two capacitances are in series, which effectively results in lower capacitance and therefore lower leakage currents. It seems that you understand the implications based on your five options.
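The series-capacitance point at the end can be made concrete. With two purely hypothetical values (the real values depend on package, pad and heatsink geometry), the series combination is always smaller than either capacitance alone:

```python
def series(c1, c2):
    """Capacitance of two capacitors in series."""
    return c1 * c2 / (c1 + c2)

c_fet_to_sink = 100e-12  # hypothetical tab-to-heatsink capacitance, 100 pF
c_sink_to_gnd = 50e-12   # hypothetical heatsink-to-ground capacitance, 50 pF

c_total = series(c_fet_to_sink, c_sink_to_gnd)
# ~33 pF: less than either capacitance alone, so a floating heatsink
# really does reduce the parasitic coupling path, as stated above.
```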
H: Need to find the component with marking 500 I am having a hard time finding what this component is. It has 8 pins and a marking on it that says 500. It has 2 pins on the bottom on each side and two on each top side. On the board, the 2 pins on the bottom left are used as inputs and the two on the right are used as outputs. I searched for it in the SMD codebook as well as on this site but couldn't find anything. I hope someone can help here. It is part of RS485 circuitry on a board. There is another SOT23 marked W216j; I couldn't find this either. This is how they are connected. Please ignore the pin names on SOT23/IC1. UPDATE: I have got the product, and it's the exact one, with help provided by @Tom and @jp314. This is the product that was marked 500. The SOT package is quite clear, but I still couldn't get the exact one. AI: I'm going to post this as an answer as I've seen the same type of question come up multiple times, and for each I've basically commented the same thing: It's pretty much impossible to identify this sort of component unless you either have a BOM of the circuit, a schematic, or already know what they are. If you have some rough idea of the function they perform and it is something more complex than, say, a discrete transistor, then you may be able to identify them by searching for parts with that functionality on manufacturer websites and comparing marking codes from datasheets. But really this is a shot in the dark. Having said that, based on your description, the SOT23 package device is almost certainly some sort of TVS device - used for ESD and transient (EFT) protection on inputs. These are quite common on RS485 devices as per this TI application note. There are components like this specifically for RS485 in the same package, like this one. It doesn't have the same marking code, but will likely have the same function. The other package, as pointed out by @jp314 in the comments, is likely some kind of line filter.
As you have found, WE has some in a similar package with similar markings, so this is quite clearly what it is. In fact, looking at the datasheets from that link, this part has a marking of 500 and looks identical to your picture. For both of these parts, it could be inferred from the circuit what the parts do. Finding a datasheet for the exact part is the hard bit.
H: Use diode to drop down voltage from ~3.7V for nRF24L01 I have an RF module (nRF24L01) and a Li-ion battery with 3.7V nominal voltage, so when it is fully charged it can be a bit higher than 3.7V (I guess). The RF module can work from 1.9V to 3.6V and draws <13mA. I know about LDO regulators, but I don't have one, would rather not buy one, and I want to keep the design simple. So, because of the 1.9-3.6V range and the low power consumption, can I use a simple, regular diode, like a 1N4007, on the Vcc line to drop the voltage? The typical voltage drop is 0.7V. I know it varies with current etc., but I only need to lower it a bit and can go down as far as 1.9V. So? AI: A Li-ion battery typically charges to 4.2V. You would have to look at the Vf vs I curve for the specific diode to see if it drops the voltage enough. Based on the current you mentioned, with only 1 diode in series it would be very close to the upper limit, so you would really need more like 2 diodes. That being said, LDO regulators are cheap for only 100mA; it really would be a better solution. Also keep in mind that if you are charging the Li-ion battery in the circuit, the charge voltage could be higher than 4.2V depending on the specific charger. A similar solution that could be better if you only want a single component would be to put a Zener diode in series with the power, reverse biased, so it drops the Zener voltage. This would be more accurate and constant than a plain diode.
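To put numbers on the answer's point, assuming a nominal 0.7 V drop per diode (a rough figure; the actual Vf at ~13 mA must come from the datasheet curve):

```python
v_batt_full = 4.2        # typical Li-ion full-charge voltage
v_f = 0.7                # assumed forward drop per diode at ~13 mA
v_min, v_max = 1.9, 3.6  # nRF24L01 supply window

v_one_diode = v_batt_full - v_f        # 3.5 V: uncomfortably close to the 3.6 V max
v_two_diodes = v_batt_full - 2 * v_f   # 2.8 V: comfortably inside the window
```

With one diode the margin to the absolute maximum is only 0.1 V, and Vf falls at light load, which is why the answer suggests two diodes.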
H: Will this phototransistor circuit work? I'm building an 8-stage coil gun for a physics project, and I want to verify this circuit as I've never used a phototransistor before and I'm still pretty new to MOSFETs. Here is a schematic: The idea is that when the projectile (a penny in this case) crosses between the IR LED and the phototransistor, the coil triggers until the penny exits the photogate. It will also light the six 2V red LEDs for the duration. I'm not sure that I wired the MOSFETs to the phototransistor properly. I know that the load should be attached between V+ and the collector of the phototransistor, but that would require routing the potentially 100 amps of coil current through the phototransistor and remove the purpose of even having the MOSFET there. Would putting a pull-up resistor between the collector and 5V and attaching the MOSFETs' gates to the emitter work as intended? As a side note, the coil will only be on for a tiny fraction of a second. If I hook up the red LEDs to the red channel of RGB LEDs that have blue on constantly, will a purple wave travelling down the stages be visible, or will I need to extend the red channel pulse? I could use a capacitor charging circuit between two NAND gates I guess, but I want to minimize unnecessary circuitry so it can all fit in a small-ish package. Thanks for the help; I'll probably be back shortly with some troubleshooting for the actual coils haha. EDIT: this is the phototransistor I'm using: https://www.sparkfun.com/datasheets/Components/LTR-301.pdf EDIT 2: could I hook up 12V to the phototransistor, or would I risk overvolting the MOSFET? I need to exceed the MOSFET's activation voltage, which the pull-up resistor might prevent from happening... Also not having to route a 5V line helps. AI: Is the penny blocking the light or reflecting it? Depending on what it's supposed to do, you might have it reversed. When the phototransistor receives light and conducts, the FETs will be ON.
Except that you have a voltage divider across the gate: R3 + transistor + 1k, then the gate, and then 10k. So you'd be lucky if your gate saw 2.5V in this configuration, which is a bit low to turn MOSFETs on. I'd use smaller gate resistors and a smaller R3. Also, I'd add a small resistor in series with the LEDs just in case. P.S.: yes, you can use 12V at these FETs' gates. Both are rated up to 20V Vgs.
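A rough sketch of the divider problem described above, taking R3 = 10 k as a purely hypothetical value (the schematic isn't reproduced here, so substitute the real value):

```python
v_supply = 5.0
r3 = 10e3          # hypothetical R3; substitute the actual schematic value
r_series = 1e3     # the 1 k between the phototransistor and the gate
r_pulldown = 10e3  # the 10 k from gate to ground

# With the phototransistor fully on (treated as a short), the gate sits at:
v_gate = v_supply * r_pulldown / (r_pulldown + r_series + r3)
# ~2.4 V: marginal for switching a standard (non-logic-level) MOSFET,
# which is exactly why smaller gate resistors / smaller R3 are suggested.
```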
H: STM32 ADC conversion using HAL I am trying to learn how to use the "new" HAL library from ST. When I try to do a simple ADC conversion it works just one time, but then it stops converting. I suppose the end-of-conversion flag does not get set. I am using the STM32F429I Discovery board, which has an STM32F429ZI on board. Note that I know about sprintf being bad practice and that doing the ADC with an interrupt is better - please don't point it out, this is not relevant to the question, I am just testing HAL here. So the question is: why is the EOC flag not set, or what could I do to make it work? Googling is not helping much since there are very few good materials about HAL out there. Here is the code:

__IO uint16_t ADCValue = 0;

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef* hadc);

int main(void)
{
    char str[15];
    /* Various initializations */
    HAL_ADC_Start(&hadc1);
    while (1)
    {
        if (HAL_ADC_PollForConversion(&hadc1, 1000000) == HAL_OK)
        {
            ADCValue = HAL_ADC_GetValue(&hadc1);
            sprintf(str, "%d", ADCValue);
            BSP_LCD_DisplayStringAt(130, 30, (uint8_t*)str, LEFT_MODE);
        }
    }
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef* hadc)
{
    ADCValue = HAL_ADC_GetValue(&hadc1);
}

I also created the project with CubeMX; the ADC configuration is the following: EDIT 1 I tried to debug everything and it seems that the program gets stuck checking for the EOC flag - it sees that the flag is not set and therefore starts a timeout waiting for EOC to show up (but it never gets set). Here is the HAL code where it gets stuck in the debugger:

/* Check End of conversion flag */
while(!(__HAL_ADC_GET_FLAG(hadc, ADC_FLAG_EOC)))
{
    /* Check for the Timeout */
    if(Timeout != HAL_MAX_DELAY)
    {
        if((Timeout == 0) || ((HAL_GetTick() - tickstart) > Timeout))
        {
            hadc->State = HAL_ADC_STATE_TIMEOUT;
            /* Process unlocked */
            __HAL_UNLOCK(hadc);
            return HAL_TIMEOUT;
        }
    }
}

AI: In your original code, set the End of Conversion Selection to disabled: hadc1.Init.EOCSelection = DISABLE; It turns out that #define ADC_EOC_SEQ_CONV ((uint32_t)0x00000000) is equal in value to DISABLE.
So the EOCSelection should actually be configured as hadc1.Init.EOCSelection = DISABLE; (i.e. the value of ADC_EOC_SEQ_CONV) to be able to poll the ADC multiple times. Then you can read the ADC continuously without stopping and starting it:

int main(void)
{
    HAL_Init();
    SystemClock_Config();
    ConfigureADC();
    HAL_ADC_Start(&hadc1);
    while (1)
    {
        if (HAL_ADC_PollForConversion(&hadc1, 1000000) == HAL_OK)
        {
            ADCValue = HAL_ADC_GetValue(&hadc1);
        }
    }
}

This way it worked fine for me. Since HAL is quite a new library there are not a lot of resources to be found, but it is not impossible. I learned a lot from this tutorial; it demonstrates all possible ADC usage step by step, from simple polling to using interrupts and DMA.
H: Having trouble with light-activated circuit. R1 = 10K 1/4W Resistor R2 = Photoresistor (any type) R3 = 2M2 1/4W Resistor R4 = 1M 1/4W Resistor C1 = 10µF 25V Electrolytic Capacitor C2 = 100nF 63V Polyester Capacitor D1 = 1N4148 75V 150mA Diode IC1, IC2 = 7555 or TS555CN CMOS Timer ICs BZ1 = Piezo sounder (incorporating 3KHz oscillator) B1 = 3V Battery (2 x 1.5V AA, AAA or smaller type Cells in series) When a beam of light enters from the opening, or the fridge lamp lights, the photoresistor lowers its resistance (<2K), stopping C1's charging current. Therefore IC1, wired as an astable multivibrator, starts oscillating at a very low frequency, and after a period of about 24 sec its output pin (#3) goes high, enabling IC2. This chip is also wired as an astable multivibrator, driving the piezo sounder intermittently at about 5 times per second. The alarm is activated for about 17 sec, then stopped for the same time period, and the cycle repeats until the fridge door closes. So I'm trying to build this circuit and have a question - does this circuit start sounding the alarm 24 seconds after the light enters, or immediately? I don't quite understand it, and also how should I calculate the component values that give 24 seconds and 17 seconds? Thank you in advance. AI: Understanding how the circuit works requires that you know how a photocell responds to light. I'm assuming that you are using a standard CdS photocell. These have a high resistance in the dark, and the resistance drops to a low value in the presence of light. When the photocell is dark, timing capacitor C1 is held high by R1. When the photocell is exposed to light, the voltage across the photocell drops to a low value. D1 prevents C1 from being discharged through the photocell. Instead, C1 slowly discharges via the 1M resistor. When the voltage across C1 drops to about 1/3 the supply voltage, the output of the 555 timer goes high and turns on the tone generator.
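As for where the 24 seconds might come from: one plausible reading (an assumption, since the schematic isn't reproduced here) is that C1 = 10 µF discharges through the 2.2 MΩ resistor until it reaches 1/3 of the supply, which takes t = RC·ln(3). With the 1 MΩ named in the answer the same formula gives about 11 s, so the exact discharge path depends on the schematic:

```python
import math

r = 2.2e6   # ohms: assumed discharge resistor (R3's value)
c1 = 10e-6  # farads

# Time for an RC discharge to fall from Vcc to Vcc/3 (the 555's lower threshold):
t = r * c1 * math.log(3)
# ~24 s, matching the delay quoted in the circuit description
```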
H: Input Impedance Question As I look at the datasheet of a differential amplifier for input impedance, I see the statement below. 0.8||2 for differential. 0.4||2 for common mode. What does GΩ||pF mean? AI: Input resistance, expressed in gigaohms, in parallel with stray capacitance, expressed in picofarads.
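To see why both numbers matter, here is a sketch computing the magnitude of 0.8 GΩ in parallel with 2 pF at 1 kHz (an arbitrary example frequency): well inside the audio band, the stray capacitance already dominates the input impedance.

```python
import math

R = 0.8e9   # 0.8 GOhm differential input resistance
C = 2e-12   # 2 pF stray capacitance
f = 1e3     # evaluate at 1 kHz

w = 2 * math.pi * f
z_cap = 1 / (1j * w * C)            # capacitor impedance (complex)
z_total = (R * z_cap) / (R + z_cap) # parallel combination
print(abs(z_total))                 # ~79 MOhm: far below the 0.8 GOhm DC value
```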
H: When to use an active filter instead of a passive filter? I want to build a low-pass or a band-reject filter to remove 50/60Hz humming noise. I couldn't decide where to start with. I will use this filter to remove 50Hz noise where I make data-acquisition of a signal through a BNC cable. Active or passive filters both could be used but I don't know the trade-offs. AI: In many digital sampling applications it is possible to remove mains noise by sampling the signal at twice mains frequency and averaging the result of each pair of samples. The image shows hum superimposed on a steady signal. Each black 'x' is a sampling point and it should be reasonably obvious that sampling at any odd multiple of the half-wave time (every half-wave, every 3, every 5, etc.) will result in samples that alternate above or below or exactly on the signal. Averaging each pair of readings will eliminate the hum. This approach is commonly used in industrial signal applications such as temperature controllers where a very low voltage signal is being measured and the sensor wires run in close proximity to mains cables. If you were designing a product for the international market you may wish to make the sampling switchable between 50 and 60 Hz. Even better would be to pass a mains zero-cross signal from the PSU to the controller so that it automatically selected the right frequency.
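The pair-averaging trick described above is easy to demonstrate numerically: sample a signal with 50 Hz hum at 100 Hz (one sample every half-wave, i.e. every 10 ms) and average consecutive pairs; the hum terms are exactly in antiphase and cancel. The sensor function and amplitudes below are made up for illustration:

```python
import math

F_MAINS = 50.0
T_HALF = 1 / (2 * F_MAINS)  # 10 ms: one mains half-wave

def measure(t):
    """Hypothetical sensor: 1.0 V steady signal plus 0.5 V of 50 Hz hum."""
    return 1.0 + 0.5 * math.sin(2 * math.pi * F_MAINS * t)

# Sample pairs one half-wave apart and average each pair:
readings = [(measure(t) + measure(t + T_HALF)) / 2
            for t in (0.0, 0.003, 0.0071)]
# Every averaged reading recovers the 1.0 V signal regardless of hum phase.
```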
H: Circuit change: what are the input high and low levels when the phototransistor is lit and when it is not? One change in the circuit: R1 is not 4300 ohms but instead 4700 ohms. The voltmeter is the input into the Raspberry Pi GPIO. What would the input voltage be when light reaches the phototransistor, and when it doesn't? PT0 is the IR phototransistor (my real circuit has no base lead, only two leads). For any more information please ask for clarification. Thanks AI: A phototransistor works exactly like a regular transistor would, except that instead of being controlled by an input base current, it is turned on and off by light. An NPN-type phototransistor will, when lit, turn on, connecting the collector to the emitter. R1 is a pull-up resistor. It is used to bring a node up to a certain voltage through the pull-up resistor. In this case, the node is the connection between R1, the phototransistor's collector, and R2. R2 is a series resistor. It will simply limit current flowing through it, and in this case it is completely optional/unnecessary. When the phototransistor is on, as mentioned, the collector and emitter are connected by a very low resistance. As this is the path of least resistance compared to the high resistance of R1, the node at the collector pin will be pulled through it toward the emitter. Since the emitter is tied to ground, that means the node is pulled to ground.
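A rough sketch of the two logic levels, assuming a 3.3 V supply (the Pi's GPIOs are 3.3 V) and a typical saturation voltage for the phototransistor (both assumed values, not from the original circuit):

```python
v_supply = 3.3  # assumed supply rail
v_ce_sat = 0.2  # rough phototransistor saturation voltage when well lit

# Dark: transistor off, essentially no current through R1 (4.7 k),
# so the node is pulled all the way up to the rail.
v_dark = v_supply  # ~3.3 V: a solid logic high

# Lit: transistor on, node pulled down to roughly the saturation voltage.
v_lit = v_ce_sat   # ~0.2 V: a solid logic low
```

R2 drops almost nothing because the GPIO input draws essentially no current, which is why the answer calls it optional.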
H: BC547 and BC548? I was doing a 3*3*3 Arduino cube project when this doubt struck me. Some instructables I watched used the 2N3904 transistor whereas a few used the BC547. I have a 548 at hand and was wondering whether I could use it. I use a variety of LEDs - red, blue, green and yellow. So the current may range between 15 and 25 mA, I guess. Can the BC548 switch in these ranges? The terms hfe, beta, current gain, transconductance still confuse me. Guess I'm not so thorough with my basics. A little bit of googling has left me in utter confusion. Can somebody please shed some light upon the matter? AI: You will be fine with a BC548, BC547 or 2N3904; your application is not demanding. The history of this comes from the original metal-can NPN silicon devices that were called BC107, BC108 and BC109. These numbers are in ascending gain and descending breakdown voltage. According to semiconductor physics, gain and breakdown voltage are interchangeable: high-voltage transistors tend to have lower gain, and low-voltage transistors tend to have higher gain. The gain differences are not huge when comparing the 547 and 548. Early analogue circuits that had no negative feedback and used the minimum number of BJTs were much more gain-sensitive and ran on 9V, so the BC109 with its hfe of 900 was sought after. According to folklore, the manufacturer put the numbers on the transistors after they were automatically tested. Packaged transistors that didn't make the specification in one or more aspects were sold to hobbyists as F-PACKs, and they were big cheap bags.
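To see why the application is "not demanding", here is a back-of-envelope saturation calculation, assuming a conservative minimum hFE of 100 and a 5 V Arduino pin (assumptions for illustration, not datasheet guarantees):

```python
i_led = 25e-3  # worst-case LED current from the question
hfe_min = 100  # conservative assumed minimum current gain
v_pin = 5.0    # Arduino output-high level
v_be = 0.7     # typical base-emitter drop

i_b_min = i_led / hfe_min  # 0.25 mA: minimum base current for Ic = 25 mA
i_b_forced = 5 * i_b_min   # rule-of-thumb overdrive for hard saturation
r_base = (v_pin - v_be) / i_b_forced
# ~3.4 k: a standard 3.3 k base resistor drives the transistor well
# into saturation, far below the pin's ~20-40 mA drive capability.
```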
H: STM32F3 Discovery + GNU ARM plugin + OpenOCD - cannot load the binary onto the board I'm using the GNU ARM plugin for Eclipse with OpenOCD as a debugger. This is on Windows 10 x64. The problem I'm facing is the following error when trying to debug or run the sample program: Error in final launch sequence Failed to execute MI command: load C:\Development\stm32-test\Debug\stm32-test.elf Error message from debugger back end: Load failed Failed to execute MI command: load C:\Development\stm32-test\Debug\stm32-test.elf Error message from debugger back end: Load failed Load failed There are several kinds of STM32 project templates that the ARM plugin offers. Most notably, there's the "STM32Fxxx C/C++ project", and it works out of the box, no problem. But it's bundled with the old version of the STM32 library. I wanted to use the latest STM32F3Cube, so I used the other template - "Hello World ARM Cortex-M C/C++ project", as per the recommendation of this article. It's designed for use with the STM32F3Cube, and it's the one I can't load onto the board. Please tell me what further information is needed for dealing with this issue, or how I can collect more detailed logs etc. P.S. I have compared the debug configuration settings between the working and non-working projects, and found no difference. Same .cfg file, along with everything else. Is my .elf being rejected because there's something wrong with it? AI: From the error message it seems that the debugger cannot locate where the Flash begins, so it fails to load the file. In the F4 HAL Eclipse template this Flash origin address is set correctly for an STM32, but because you are using a generic Cortex-M project, the address is not correct for an STM32. This step is described in the article you have mentioned. From the article: We need to configure how the application is mapped in the MCU memory. This work is accomplished by the link editor (ld), which uses the three .ld files inside the /ldscripts Eclipse folder.
The file we are interested in is mem.ld, and we need to change the FLASH origin address from 0x00000000 to 0x08000000, as shown below: ... FLASH (rx) : ORIGIN = 0x08000000, LENGTH = 512K ... This value can be obtained from the datasheet. Here is the relevant part:
H: Why do we need to install a power transformer for each high building? I live in an area where many high buildings are being built. Each building is more than 10 floors. A special room is made next to the main entrance or the garage. The room is designed for installing a power transformer inside it. The room also provides good ventilation. The primary coil (winding) of the transformer is 11 kV and the secondary coil is 380/220 V. Why do we need to install a transformer for each building? AI: Well, if you didn't have a transformer, how would you convert the relatively very, very lethal 11 kV into the definitely less lethal (but still rather lethal) 380/220V? If, on the other hand, you meant that somewhere down-town you converted 11 kV to the relatively safer lower voltage and fed this to all the high-rise buildings, then the cable losses would be massive because the currents would be 50 times higher. Or maybe you're thinking that consumers convert all their appliances to 11 kV - that just ain't gonna happen, for reasons of safety (irrespective of the massive global cost). Another half-reason is that the 11 kV network is transmitted as 3-phase balanced, i.e. it doesn't have a neutral connection: - It's transmitted as 3-phase due to economies of making 3-phase generators in the hundreds-of-megawatts range. But households in the UK require a single phase and neutral, and this is from a historical safety standpoint (in the UK and Europe at least). To create a neutral wire requires a transformer (delta-to-star configuration), so there is also the cost of extending the neutral wire if only one transformer fed several high-rise buildings. Also, one fault in one building might trip the whole building out, but it wouldn't trip a whole precinct of buildings out. The same is true from a maintenance standpoint - it's less disruptive to isolate a single building than a whole precinct.
I would also imagine, with high rise buildings (subject to lightning strikes) that having a local transformer for each building will be less disruptive to other buildings not hit by a strike.
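The cable-loss argument above can be quantified: stepping 11 kV down to 220 V multiplies the current by 50 for the same delivered power, and resistive loss scales as I²R, so the same cable would dissipate 2500 times more power.

```python
v_hv = 11e3  # distribution voltage
v_lv = 220.0 # consumer voltage

current_ratio = v_hv / v_lv      # 50x more current at 220 V for the same power
loss_ratio = current_ratio ** 2  # I^2 R: 2500x more loss in the same cable
```

This is why the transformer sits as close to the consumers as possible: the long run is done at 11 kV, and only the last few metres carry the high current.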
H: Switching multiple outputs with a single input Here's my problem. I need a way to switch between two inputs of four signals each (four 5V lines and four 3V3 lines). They need to be switched together, and ideally with a single jumper or an electrical signal. I know there are many ways to accomplish this (e.g. by 4 sets of 3-pole jumpers), but I'm wondering what you think the most efficient way is. To put it in the context of the image below: depending on whether S is on or off, the outputs O1..3 should be connected with A1..3 or B1..3. What part does this most efficiently? simulate this circuit – Schematic created using CircuitLab AI: I would use a simple quad 2:1 multiplexer, something like a 74157, if I understand you correctly. It should operate okay at those voltages, but it would be best to check yourself.
H: Can an inductor be substituted for a coil I am designing this FM receiver circuit. I have a few questions related to the circuit. Buying the coil in my locality is a little expensive, so would it be alright to substitute an inductor in its place? If yes, would it be alright to use a 0.2 microhenry inductor? And if so, what should the capacitance range of the variable capacitor be? Would it make a vast difference if I used a different transistor? I prefer using the 2N3904 because it is cheaper. Can the LM386 be replaced by an ordinary class-A transistor audio amplifier? Would it affect the audio signal? AI: The coil is easy to make - you only need 4 turns of wire! Possibly not: the 2N3904 is rated up to 100MHz, the BF494 up to 120MHz. Yes, you can use another amplifier. It will affect the audio according to the specifications of the amplifier, e.g. total harmonic distortion (THD), bandwidth, power etc.
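You can sanity-check the 0.2 µH value against the FM band with the resonance formula f = 1/(2π√(LC)), i.e. C = 1/((2πf)²L):

```python
import math

L = 0.2e-6  # proposed inductor value, henries

def tuning_cap(f_hz):
    """Capacitance that resonates with L at f_hz."""
    return 1 / ((2 * math.pi * f_hz) ** 2 * L)

c_low = tuning_cap(88e6)    # ~16 pF at the bottom of the FM band
c_high = tuning_cap(108e6)  # ~11 pF at the top
# So a small variable capacitor covering roughly 10-20 pF would tune
# the 88-108 MHz band with a 0.2 uH coil (stray capacitance will eat
# into this range in practice).
```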
H: How is the output voltage of a mobile phone charger fixed to a rated value? Here is the circuit of a cellphone charger. I have seen the input voltage range for a cellphone charger be 100-230 V, but the output voltage is fixed to the rated value (like ~4.5 V). So, my question is: how is the output voltage fixed to the rated value if I provide a different mains supply voltage? For example, in India I supply 220 V RMS while in Japan I supply 100 V RMS. The grid frequency is also different in different countries, as is the RMS voltage. I am attaching a picture of the rating of an AC adapter (cellphone charger). AI: Those "wide range" chargers do not just have a transformer and a rectifier inside, but some electronic circuitry. They use so-called switching regulators. Basically, there is first a rectifier that rectifies the mains voltage, so we have a 150V or 300V DC voltage. Then a transistor switches this voltage on and off through a really small transformer (at a high frequency). The switching regulator measures the output voltage and controls the power transistor that switches the high voltage. This way you can build a system that accepts a wide range of input voltages and still delivers a stable output. It's small, quite efficient and cheap.
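The first-stage numbers follow directly from the mains RMS values: a rectifier followed by a reservoir capacitor charges to roughly the peak of the mains waveform, √2 × Vrms.

```python
import math

def rectified_dc(v_rms):
    """Approximate DC bus voltage after rectifying and smoothing mains."""
    return math.sqrt(2) * v_rms

v_japan = rectified_dc(100)  # ~141 V
v_india = rectified_dc(230)  # ~325 V
# Matching the "about 150 V or 300 V" figures above; the feedback-controlled
# switching stage then regulates either DC bus down to the same fixed output.
```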
H: DC blocking capacitor in opamp feedback loop - does this count as the "signal path"? I'm a bit new to electronics, so this may be a dumb question, though I hope not. I'm working on a pre-amp circuit which can drive several TDA7492 boards from a single source (I've found they don't enjoy all being connected directly to some sources) The preamp is a single supply design, with a gentle gain to boost the 1.25v line level from my Pi to "professional" +4dB level (These boards can take up to 3.6v peak-peak) to maximise the signal, followed by the amps themselves and a unity-gain stage to supply an isolated aux-out for my friend's poweramp. It works nicely in PartSim, though I haven't decided exactly on some of the component values for the gain: http://partsim.com/simulator/#38256 /Edit: Added a much simplified schematic: The coupling caps will be polyester, and I've arranged so that I can use low values, which are within budget. Higher value polys get expensive very quickly. I've had all the expected issues with version 1.0 of the amp where I very mistakenly used Tantalums. I understand now why these are very bad - inductance and ESR. My problem is with C1 in the circuit. What C1 does is block DC and thus sets the DC gain to unity, which keeps the DC offset at 1/2 supply whatever AC gain is applied. Without this, the gain pushes the offset up and can clip the waveform. I need the feedback resistance to be low overall, as I've read that this is good for low distortion - I assume the distortion would be due to thermal noise in the feedback circuit. This means I need a high value cap here so that the filtering effect of R1+R2 & C1 doesn't lower the gain in the audio range. The best I can achieve is the 220uF you see here which drops the output by 10% between 20 and 50Hz. I can live with this. But my problem is understanding whether C1 is in the signal path. I can see arguments both ways - the signal doesn't really flow through it, so probably not. 
But if I used an electrolytic here and it didn't pass the signal cleanly, would this in fact create distortion before the capacitor and pass the noise into the inverting input, either affecting the gain or even amplifying the distortion? If I can use an electrolytic, that's great. If I can't, I've got a problem with a high enough value poly cap being extremely expensive (but perhaps I can compromise here - can anyone suggest a more appropriate, inexpensive capacitor type?) Thanks very much in advance! :-) -Oli AI: C1 does affect the signal path, but your premise about keeping the feedback resistors low in value is largely unfounded. The op-amp has an equivalent input noise of 66nV per sqrt(Hz), and this may sound confusing, but to help you out I'll explain. The bandwidth is audio, hence 20kHz as near as damn it, so take the square root of 20,000 to get 141 and multiply this by the 66nV to get the relevant audio noise produced by the op-amp - I estimate 9.3 uV RMS. The thermal noise of a 3300 ohm resistor at 20 degC is ~1 uV RMS, but you have, in effect, two in parallel as far as AC is concerned, so that's a noise of 0.73 uV. Use this calculator to try this yourself. So you have about 0.73 "adding" to 9.3, and you have to add noises vectorially, per Pythagoras: - Noise = \$\sqrt{0.73^2 + 9.3^2}\$ = 9.33 uV i.e. SFA difference. So you could choose 33k resistors; these together have a parallel resistance of 16.5k and produce a thermal noise of 2.3 uV. When Pythagoras has his way, the net voltage noise is 9.58 uV, or 0.26 dB more noise. If two 330ks are used, the resistor noise is 7.3 uV, and combining this with the op-amp gives a total voltage noise at the input to the op-amp of 11.82 uV. This is about 2.1 dB noisier than the op-amp on its own. Hopefully you can see that you can choose much bigger resistors than you thought and use maybe a 10uF ceramic.
I'll also add that your original 220 uF capacitor and 3300 ohm resistor give a 3dB point at about 0.22 Hz - for audio this is ridiculously low, and a capacitor ten times smaller at 22 uF is much better suited. This all leads me to say that 33k resistors and a 2.2 uF cap are all that is needed. The TL064 is a better choice as regards input noise but it ain't perfect. For instance, the TL064 has a guaranteed input common mode range of +/-11V on a +/-15V rail. This means that on a +/-6V rail (12V and 0V is the same), the input signal has to be biased at 6V and cannot reasonably be greater in amplitude than 4Vp-p. However, the main problem is the output amplitude - the data sheet says it is guaranteed to swing +/-10V on a +/-15V rail and, transcribing this to a 12V single rail, means you are only going to see a guaranteed 2Vp-p. Of course these numbers are the extremes but, if you were building many of these devices, you would choose a better op-amp.
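The noise bookkeeping above is easy to reproduce numerically. Here's a sketch (Python) using the 66 nV/√Hz figure, the 20 kHz bandwidth, and the resistor values from the answer; 20 °C is taken as 293 K:

```python
import math

K_BOLTZMANN = 1.38e-23  # J/K

def resistor_noise_uv(r_ohms, bw_hz=20e3, temp_k=293):
    """RMS thermal (Johnson) noise of a resistor over a bandwidth, in uV."""
    return math.sqrt(4 * K_BOLTZMANN * temp_k * r_ohms * bw_hz) * 1e6

# Op-amp: 66 nV/sqrt(Hz) equivalent input noise over 20 kHz -> ~9.33 uV RMS
opamp_noise_uv = 66e-3 * math.sqrt(20e3)

def total_noise_uv(r_feedback_pair):
    # Two equal feedback resistors look like their parallel value to noise
    r_par = r_feedback_pair / 2
    return math.hypot(opamp_noise_uv, resistor_noise_uv(r_par))

for r in (3300, 33e3, 330e3):
    extra_db = 20 * math.log10(total_noise_uv(r) / opamp_noise_uv)
    print(f"{r:>8.0f} ohm pair: {total_noise_uv(r):5.2f} uV (+{extra_db:.2f} dB)")
```

The tiny penalty in going from the 3.3 k pair to a 33 k pair backs up the point: the resistors can be raised substantially (allowing a much smaller C1) before their noise matters.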
H: What type of switch is only on when released I have a situation where I've got a few electronics boards stacked on top of each other. I don't want the boards to be powered when stacked, but when they're no longer on top of one another I want them to turn on. I figure a push switch that is off while depressed but on when released makes sense here. Is there a specific name for this type of switch? AI: The top one is "normally closed" and the bottom one is "normally open". You want the top one, it seems.
H: "Break-before-make" SPDT Relay out of 2 SPST Relays I require an SPDT (changeover) relay that will switch up to 600Vdc, which presents some problems because such a product is not readily available at that voltage from the major suppliers such as Digikey. However, there are SPST relays available and I was thinking of using two of those. However, I am concerned because I need the contact to the first line to disengage before connecting to the second line (otherwise I might cause a short). To make things a little more difficult, I have only one digital control signal available, and would like the switch to connect to either one line or the other based on whether I am outputting a logic 0 or logic 1. Anyway, all this to ask: does anyone have any recommendations for introducing this switching delay via hardware components? I was looking around for a "break-before-make" design but wasn't having much luck. Any help or direction would be appreciated! Edit: Based on the accepted answer, here is the schematic I came up with, for reference. Thanks everyone!: simulate this circuit – Schematic created using CircuitLab AI: Study this circuit: - There is a single input called PWM but it can be any on-off signal from any old logic device. It produces two outputs, and notice the little bit of deadband due to the RC network. For your application you can invert the OR gate output to guarantee it never rises until the AND gate output has gone low for the delay incurred by the RC network AND, importantly, the AND output will never go high until the (N)OR output has been low for the same period. One input, two outputs, with shoot-through (aka deadband) protection. As an aside, I'd probably use Schmitt-trigger-input logic gates.
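The deadband that RC network buys you is just the time the RC node takes to recharge through a gate's input threshold. A quick sanity check (Python; the 10 k / 10 nF values and the mid-rail threshold are hypothetical illustrations, not taken from the circuit above):

```python
import math

def rc_deadband(r_ohms, c_farads, v_supply, v_threshold):
    """Seconds for an RC node charging toward v_supply to cross v_threshold."""
    return -r_ohms * c_farads * math.log(1 - v_threshold / v_supply)

# Hypothetical values: 10 kohm, 10 nF, 5 V logic, mid-rail threshold
t = rc_deadband(10e3, 10e-9, 5.0, 2.5)
print(f"deadband ~ {t * 1e6:.1f} us")
```

Relay contacts take milliseconds to move, so for this application the R and C would be scaled up by a few orders of magnitude from these example values.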
H: FPGA pin assignment for BGA fanout I'm working on a project that involves multiple (many) BGA packages, namely interface converters such as Ethernet PHYs connected to a central FPGA. I've already worked out a (tentative) layer stackup in terms of which signals will run over which layers, but I need a good strategy for assigning pins. Obviously, a PHY will fan out in a pretty distinct order, but I'd like a decent idea of how to assign the pins in an informed manner so that I'm not having to get shifty with vias to rearrange the trace order during layout. To illustrate, the interface I'm working with is standard GMII, so a 16-bit bus with control signals. What are some good methodologies for assigning pins from an FPGA that correlate to the most logical fan-out order for traces leaving the FPGA? AI: If the design isn't fixed yet (i.e. pins are free to be changed), then you have some flexibility in what goes where. The first step is to determine what pins are absolutely fixed. For example, the FPGA may only have certain resources on some pins - dedicated clock inputs spring to mind, as do high-speed transceivers. Depending on the clock network, some clock inputs can only feed some PLLs, for example, so you need to decide early on where these clocks are going. The second step: decide where interfaces can go - you want parallel interfaces to use pins close to each other; the last thing you want to do is route some pins only to realise that one bit of an interface is on the opposite side of the FPGA! This step also involves looking at voltages. FPGAs tend to have different 'banks' which each have different IO voltage references. You can't(*) have an interface running at 2.5V sharing with one running at 1.8V.
You also ideally want all pins of a parallel interface to be on the same or adjacent banks, preferably ones in the same corner of the FPGA (there are four corners - top-left, top-right, bottom-left, bottom-right - and each tends to have resources dedicated to that corner). Once you have a map of where things can go - say you have decided your 16-bit interface can use pins n-m or whatever - the exact order doesn't actually matter. You can rearrange within that group quite readily, because if picked correctly, the FPGA can simply remap the pins as needed, as long as you were careful in your groupings. The third step is to decide what layers interfaces are going on - try to minimise the number of times high-speed signals need to change layers, as vias are nasty at high frequency. Fourth step: pick any order in your grouping and start routing from the various devices towards the FPGA, preferably doing the most important ones first. Once you get to the FPGA, you can see which pins could do with being remapped - say you end up getting back to the FPGA with the data pins in an obscure order; simply reassign the pins to untangle them (easier to remap than to start adding vias to cross over each other). Once you have a good idea of the mapping - maybe you've routed a fair few of the traces and are now confident that your choice of pins is routable - the fifth step is to do a test compile: compile the FPGA design and make sure it can use that pin mapping! I've had it happen to me once before that I chose assignments and accidentally used a dedicated programming pin for one bit in a data bus - I sent off the design without test compiling, only to find out when the board got back that that one pin can't be used. The test compile will ensure that your pin mapping is compatible with the FPGA. Note (*) Well, you can sometimes; it depends on the FPGA.
H: Calculate simple NPN transistor circuit Arduino 3.3V I am trying to control a 3V latching relay (single coil) using an Arduino @ 3.3V [SPST-NO PCB Mount Latching Relay, 16 A, 3V dc] rs-online.com/web/p/latching-relays/8276283/ The relay requires 3V of one polarity to latch one way, and the reverse polarity to latch the other way. I think this is the circuit I need to implement: However, I can't get the relay to switch because the voltage across the coil is not high enough (it should be 3V and I am getting ±0.49V when using a 3.3V supply). (using transistors: BC558 and also tried BC548) (relay coil resistance: 16.8 ohm) I have done a simulation using Livewire which matches pretty closely what I am measuring in real life. I have been advised to use an H-bridge; however, I would like to know whether, given a 5V supply, I can make this circuit work and keep the design simpler. Could I get some help on how to make this work and understand it? I have set variable resistors in the simulator and tried different values but can't get 3V. AI: Using just 2 transistors will work, but you'll waste a significant amount of current from the power supply. The reason you can't get enough voltage across the relay coil is that its resistance is much lower than the 47 ohm resistors -- they limit the current too much. If the relay really requires 3 V, and is 16.8 ohm, you'd need to reduce the 47 ohm to something like 1.6 ohm or lower in order to allow 3 V across the relay coil. Now, the 'ON' transistor will need to sink both the relay coil current and the supply across the 1.6 ohm -- 2 A ! That would require a base current of ~ 100 mA which an Arduino won't be able to supply... Using 5 V makes things easier, but still not very practical -- you can use about 10 ohm, so the transistor currents become about 500 mA instead of 2 A. The best solution is to use an H-bridge -- replace each 47R with a PNP transistor as shown here.
Your circuit (as shown) will also need clamp diodes -- otherwise the back-EMF from the relay coil will cause large positive spikes and damage the NPNs the first time they turn off. You can use MOSFETs instead of the NPNs and PNPs -- for Q1 & Q2 you then don't need R1 and R3, and for Q3 & Q4 you don't need R2 and R4. This circuit should work for 3 or 5 V. simulate this circuit – Schematic created using CircuitLab
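To put numbers on the resistor-sizing argument above (the coil resistance and required coil voltage are from the question; the "transistor sinks" figure is just the dominant pull-up current, matching the answer's rough ~2 A estimate, with the ~0.18 A of coil current adding on top):

```python
def drive_numbers(v_supply, r_coil=16.8, v_coil=3.0):
    i_coil = v_coil / r_coil                 # current the coil needs (~179 mA)
    r_series = (v_supply - v_coil) / i_coil  # pull-up that leaves 3 V on coil
    i_sink = v_supply / r_series             # current the ON transistor sinks
    return r_series, i_coil, i_sink          # (coil current adds on top)

for v in (3.3, 5.0):
    r, i_c, i_s = drive_numbers(v)
    print(f"{v} V supply: R ~ {r:.1f} ohm, transistor sinks ~{i_s:.2f} A")
```

Which is why the answer steers you to an H-bridge: the pull-up resistors burn amps to push 180 mA through the coil.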
H: Battery configuration for 5V I am building a board with an ATmega328P; the total current draw is 40mA. Now I need to focus on the power supply. On the board is a standard voltage regulator down to 5V. Which is the better battery configuration: 4 AA, which gives me 6V, or 6 AA, which gives 9V? I need to optimize space, so 4 AA would be better, but 6 AA gives more energy. Will the board with 6 AA live 33% longer? AI: The minute you have to use a linear or switching regulator you are going to waste power because you have to supply "juice" to the regulator but, there are quite a few buck-boost designs that I think will work: - As you can see this one takes an input voltage as low as 2.7V so it's useful for a single Li-ion battery too. The data sheet says that in "selectable burst mode control" (more ripple voltage basically) its quiescent current can be as low as 50 uA. This is what the efficiency of the circuit looks like for various input voltages: - It looks like it's going to be about 80% efficient across the input voltage range with ~40mA output current (whether burst or PWM is used). How does this compare with an LDO regulator running with a 6V input? 1V will be dropped by the regulator and, at 40mA consumption, the power loss is 40 mW out of a total of 240 mW. This is an efficiency of 83.3% so it's not much different to the switcher. However, the LDO regulator's efficiency will rise to about 95% as the battery voltage drops towards 5.1 volts, but this of course is short-lived because, below roughly 5V on the input, the output voltage will be significantly lower than 5V and the target circuit may misoperate. I'd look for a buck-boost regulator rated for less power and I'm sure this would prove a great solution: - Generally, it has about 90% efficiency for input voltages from 3.6V to 5V so it looks more promising and, you can see that the power loss on very light loads is below about 50 uW.
Given the benefits of being able to use the full range of the battery I'd go for this 2nd solution. It might also be worth considering the LTC3534 - it has a 25uA quiescent current draw (light loads) and is more suited to 4 AA cells specifically: - Don't forget to try TI too - they also have a pretty good range of switchers and are usually half the price of LT devices. I use LT because I like their search engines and they are reliable devices.
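The LDO-versus-switcher comparison above boils down to simple ratios. A sketch of the linear side (the switcher's ~80-90 % comes from the datasheet curves, not from a formula, so it isn't modelled here):

```python
V_OUT = 5.0     # regulator output
I_LOAD = 0.040  # 40 mA board consumption from the question

def ldo_efficiency(v_in):
    """Linear regulator efficiency, ignoring its own quiescent current."""
    return V_OUT / v_in

for v_in in (9.0, 6.0, 5.5):
    loss_mw = (v_in - V_OUT) * I_LOAD * 1e3
    print(f"Vin = {v_in} V: {ldo_efficiency(v_in) * 100:.1f} % "
          f"efficient, {loss_mw:.0f} mW lost in the regulator")
```

So 6 AA at 9 V last longer in absolute terms but throw away nearly half the battery energy as regulator heat, while a buck-boost fed from 4 AA uses both the space and the charge better.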
H: FPGA Frequency divider with linear regulation I'm currently developing an FPGA gateware requiring an adjustable frequency divider and I wonder if there is any tricky way to get a linear output frequency adjustment? What I mean is that if I use a simple preloaded counter I get an f~1/x relationship, where x is the counter's initial value. As I need to adjust the frequency over a relatively wide range (say 0.5Hz-10kHz) I'd like it to be a linear function of some argument (of course, not necessarily a counter value - it can be something more complex ;-) ). AI: At the moment, you are considering using a preloaded counter. This means that each clock cycle you get a fixed increment, in a variable accumulator length that you control, hence the frequency is reciprocal to your control value. If instead you use a controllable variable increment in a fixed accumulator length, your frequency will be linearly dependent on the control value. This is how a DDS works. A fixed length accumulator, typically a power of 2, for instance an 18 bit accumulator that counts up to 2^18 (262144), is incremented by the frequency word each cycle of the system clock. In pigHDL, you would write:

    count : int[17 downto 0];
    process (on sys_clock rising):
        count <= count + freq;
        output := count[17];

This may, or may not, give you what you need. The MSB of the counter will give you the output, but unless freq is an exact power of 2, the output cycles will not be exactly the same length; they will vary by one count. The average output frequency = freq * fs / accumulator length. You can get increased frequency resolution by extending the accumulator and frequency word to the right. There is no way to remove this one-cycle jitter if you are going to use the output directly as a digital clock. Bear in mind that for an FPGA, it is bad form to take the MSB output and use it as the clock line to other elements.
Much better to turn it into a one-cycle clock enable, and use that to qualify the clock into downstream elements. If it's for an external source, you can take the top several bits of the accumulator into a DAC, filter the waveform, and use a comparator; this will reduce the jitter considerably.
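The accumulator behaviour is easy to model in software. A minimal sketch of the 18-bit DDS described above (a 1 MHz system clock is assumed for illustration), checking that the measured MSB rate matches freq * fs / 2^18:

```python
ACC_BITS = 18
MOD = 1 << ACC_BITS  # accumulator wraps at 2^18 = 262144

def dds_average_freq(freq_word, n_clocks, fs):
    """Count rising edges of the accumulator MSB over n_clocks cycles."""
    acc, prev_msb, edges = 0, 0, 0
    for _ in range(n_clocks):
        acc = (acc + freq_word) % MOD
        msb = acc >> (ACC_BITS - 1)
        if msb and not prev_msb:
            edges += 1
        prev_msb = msb
    return edges * fs / n_clocks

fs, freq_word = 1_000_000, 1000
predicted = freq_word * fs / MOD
print(f"predicted {predicted:.1f} Hz, "
      f"measured {dds_average_freq(freq_word, 1_000_000, fs):.1f} Hz")
```

The measured value lands within a fraction of a hertz of the prediction; the residual is exactly the one-count jitter the answer warns about.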
H: Why this pair of IGBT's died silently? A few days ago I took a look at a circuit which controls an AC supply's frequency. That circuit was a VFD which looks like: simulate this circuit – Schematic created using CircuitLab But this circuit is for controlling a 3-phase motor. So, I tried to make a similar one that can control a single-phase load, like a bulb. So, I created the circuit shown below: simulate this circuit And here is the Arduino sketch that gives pulses to the gates of the IGBTs:

int Phase1TransistorA = 2;
int Phase1TransistorB = 3;
int Phase2TransistorA = 4;
int Phase2TransistorB = 5;
int t = 4;      // time in seconds
int T = 1000 / t;
int p = 1028;   // number of duty cycles of pwm to create half ac-cycle

void setup() {
  pinMode(Phase1TransistorA, OUTPUT);
  pinMode(Phase1TransistorB, OUTPUT);
  pinMode(Phase2TransistorA, OUTPUT);
  pinMode(Phase2TransistorB, OUTPUT);
}

void loop() {
  for (int i = p; i >= 1; i--) {
    digitalWrite(Phase1TransistorA, HIGH);
    digitalWrite(Phase2TransistorA, HIGH);
    delay(T / (p * i * 2));
    digitalWrite(Phase1TransistorA, LOW);
    digitalWrite(Phase2TransistorA, LOW);
    delay((T / (p * 2)) - (T / (p * i * 2)));
  }
  for (int i = 1; i <= p; i++) {
    digitalWrite(Phase1TransistorA, HIGH);
    digitalWrite(Phase2TransistorA, HIGH);
    delay(T / (p * i * 2));
    digitalWrite(Phase1TransistorA, LOW);
    digitalWrite(Phase2TransistorA, LOW);
    delay((T / (p * 2)) - (T / (p * i * 2)));
  }
  for (int i = p; i >= 1; i--) {
    digitalWrite(Phase1TransistorB, HIGH);
    digitalWrite(Phase2TransistorB, HIGH);
    delay(T / (p * i * 2));
    digitalWrite(Phase1TransistorB, LOW);
    digitalWrite(Phase2TransistorB, LOW);
    delay((T / (p * 2)) - (T / (p * i * 2)));
  }
  for (int i = 1; i <= p; i++) {
    digitalWrite(Phase1TransistorB, HIGH);
    digitalWrite(Phase2TransistorB, HIGH);
    delay(T / (p * i * 2));
    digitalWrite(Phase1TransistorB, LOW);
    digitalWrite(Phase2TransistorB, LOW);
    delay((T / (p * 2)) - (T / (p * i * 2)));
  }
}

When I tried to power up the bulb, the first pair of IGBTs died silently, without even heating up.
I would like to know why this happened and the steps to solve the problem. The pair of IGBTs that died: AI: Multiple problems. It sounds like you don't really understand high-voltage electronics, and I suggest you stop before you get injured or worse. Your circuit is dangerous (to you, your Arduino and your computer). Your 1N4001's are only rated to 50 V -- you should use 1N4007's if you need diodes. You have nothing to limit inrush current when plugged into the mains -- this will likely blow the bridge rectifier as well. But - you don't need diodes to drive IGBTs; you only need the anti-parallel diodes if you are driving inductive loads (or very long wires). The IGBTs won't turn off fast enough -- there is nothing to discharge the gate. You can't drive the high-side IGBTs like that; you need a level shifter. 3.3 V isn't enough to drive an IGBT -- most need 5 .. 10 V. You don't need an H-bridge to drive an incandescent light -- just a single low-side switch. That could be an IGBT, a MOSFET (rated > 450 V for US 110 V applications, > 700 V for 220 V mains supplies), or a triac (which is commonly used, but is best with a different type of controller). I RECOMMEND THAT YOU STOP DOING THIS UNTIL YOU UNDERSTAND THE HAZARDS INVOLVED.
H: JTAG connector - large pitch to smaller one I have a Segger J-Link EDU probe which I have used for debugging and programming various ARM-based MCUs. But now I have to work with this LPC4088 QS board, which has a smaller-pitch 20-pin JTAG connector than the usual one. So I am searching for something like this: to convert my large-pitch JTAG cable to the smaller pitch. BUT the one in the picture (the smaller one) has only 10 pins while I need 20 pins. Does anyone know of any similar adapter like the one in the picture but with two 20-pin connectors (a larger- and a smaller-pitch one)? This is the schematic of the JTAG interface on my board. It looks like more than 10 pins are needed: AI: Maybe this one would work for you: Here is another possibility of a somewhat universal JTAG adapter: And finally this is likely exactly what you are looking for:
H: Time-delay DPDT relay using only pot, relay, and cap (discrete components and no transistors) I am trying to design a simple time-delayed relay circuit that uses only a potentiometer, a 5V DPDT EM relay, a single sufficiently large capacitor, additional resistors (if needed), and buttons/switches, powered by a 9V battery. The schematic below works to simply turn on an LED while a button is pressed and keeps it on for a moment. I have only been able to extend the time by adding more caps in parallel. I believe there is a simple solution to this logic problem that has the relay triggering itself almost as an oscillator, but when setting it up as an oscillator I still can only vary the time by adding caps. How can I arrange and wire the components mentioned to allow a time delay of a few seconds to minutes, dependent upon a single potentiometer? I thought a toaster's circuit would work, but most I see use a 555 timer or a pair of transistors and other components. Is there a simple way to use an RC circuit alone as a timing circuit without the solid-state components? I know how to do this using transistors and ICs, but that is not the point. It is an experiment to show a possible use of capacitors and RC circuits without first teaching about solid-state components, ICs, etc. The time delay can be delay-ON or delay-OFF. Thanks simulate this circuit – Schematic created using CircuitLab AI: I'd say forget it. There is a reason all those circuits you see use a 555. A practical capacitor does not hold enough energy to power a normal relay for the period of time you are aiming at. You might get it to work with a very, very low-current relay, maybe a 230V relay, with a correspondingly rated capacitor. But that is exactly the circuit you don't want a novice to play with. (At least, if you want him to live long enough to rise above that level.)
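A back-of-envelope calculation shows why. Treat the charged capacitor as the only thing holding the relay in, and ask how long it stays above the dropout voltage. The relay figures below are assumed for illustration (a common 5 V relay: ~70 ohm coil, dropout around 1.5 V):

```python
import math

def hold_time_s(c_farads, r_coil, v_start, v_dropout):
    """Discharge time of C through the relay coil down to dropout voltage."""
    return r_coil * c_farads * math.log(v_start / v_dropout)

for c_uf in (1000, 10000):
    t = hold_time_s(c_uf * 1e-6, 70.0, 5.0, 1.5)
    print(f"{c_uf} uF holds the relay for ~{t * 1e3:.0f} ms")
```

Even a hefty 10,000 uF manages well under a second; minutes of delay would need farads of capacitance, which is why every practical circuit reaches for a 555 or a transistor to decouple the timing RC from the relay coil current.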
H: Connecting Sensors To A Production PCB Background: I designed a simple board a while ago using an ATMEGA328 and an ESP8266. The idea is that the ATMEGA deals with sensor data and the ESP8266 deals with the whole online bit. I did an initial run of 20 prototypes when a client of mine showed interest in the board and the data I was collecting. This board consisted of 1 PIR sensor, 1 sound sensor, a DHT11 for temperature, an MQ135 for air quality and also an RGB LED which helps me debug without the need to connect to serial etc. Initially I had JST 2.54mm connectors on the board for connecting the sensors to the board. See this pack of connectors at DFRobot to get an idea of what I was using. The idea was, for 20 units, to crimp a header to one end, stick it into the header on the PCB and then solder the three wires onto the sensors/LED and voilà, all done. Sounded simple, but soldering 20 PIR sensors, 20 LEDs and 20 MQ135s with three wires each led to a whole new level of respect for workers at Foxconn!!! On the plus side I now have a level 1 soldering badge. Here is a picture of the test model I am working with to give you an idea of how it looks. Sorry, my cell phone camera is obviously dying at the moment, so the pic is not great quality. NOTE: Dupont connected to JST at the moment is FOR TESTING ONLY! So now the client wants 100 of these boards and I am having cramps in my hands just thinking about it. I was thinking of using just normal PCB header pins on the PCB and then using Dupont breadboard wires between the PCB and the PIR sensor, LED etc. This will require a lot less soldering and, since these devices are going to be mounted in air-conditioned offices on a rather stable surface, vibration will be kept to a minimum. Essentially I tested replacing the JST with simple headers and using Dupont connectors and it works pretty well.
The Dupont female contacts grab those PIR pins pretty damned tight and I have been unable to shake them loose (trying to simulate vigorous usage of the sensor, which won't really happen in normal use). Since I am limited in experience I need some guidance from the Jedi and Sith masters on this site. Is using Dupont connectors even advisable for production usage? Alternatives to using either the JST or Dupont? I would rather solder prototypes than focus on soldering production models. Would a PCB manufacturing house be able to help me with the soldering of the sensors? If not, are there other assembly houses that could help? From my research it seems there is no real standard and using what works seems like a logical and viable alternative. Just trying to get some feedback from more experienced engineers and makers. AI: They might become loose over a couple of years. You should use a latching connector, but that really depends on your application and requirements. Check the usual suspects: 3M, Tyco, Molex, Hirose and tons of others. Assembly is usually the work of the assembly house, either SMT or TH (contract manufacturers); PCB manufacturers produce PCBs, not assembled products.
H: PN junction voltage drops in a saturated BJT Let's say we've got our BJT operating in its saturation mode. Can we now make a (rough) approximation and do the circuit calculations assuming that both Ube=0.7V and Ubc=0.7V (for an NPN)? Or do these values change in saturation? AI: If an NPN BJT is operating in the saturation mode, I usually take the following rough assumptions: \$V_{be} = 0.7 \mbox{V}\$ \$V_{ce} = 0.2 \mbox{V}\$ This gives a slightly "asymmetrical" \$V_{bc} = 0.5 \mbox{V}\$ because the transistor cannot become an "ideal" short.
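The Vbc figure is just KVL around the two junctions, which you can check in one line:

```python
# Saturation rules of thumb from the answer: Vbe = 0.7 V, Vce(sat) = 0.2 V
v_be, v_ce_sat = 0.7, 0.2
v_bc = v_be - v_ce_sat  # base-collector forward voltage in saturation
print(f"Vbc = {v_bc:.1f} V")  # 0.5 V, the "asymmetrical" value quoted
```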
H: Are ratings (volts and amps) interchangeable on components? Assume I have a component, a flip switch (like so), and on the switch it states 120V at 3A or 240V at 1.5A. Following this pattern, is it safe to assume the switch is capable of serving in an environment with 5V at 72A? Also, when does 5V become unsafe for bare contact? AI: Absolutely not. The voltage rating is generally based on safety concerns and insulation properties. The current rating is based on the wire construction (thickness) and heat dissipation properties. For a switch, even the AC and DC ratings might not be the same, as any arcing that occurs in AC use is quenched 120 times/s, while in a DC situation it is not. Generally less than 40 V or so is reasonably safe for bare contact with one hand, but you would certainly feel 40 V if you touched it with wet skin (I wouldn't try it) -- you might even feel 9 V.
H: Please suggest me soldering station I want to buy soldering station. Can you suggest me some? I don't want to buy some very expensive station (like Weller), but I don't want also some cheap station. I will use it firstly for some "basic" soldering and then for SMD. What do you think about Hakko FX888? Here is one on ebay http://www.ebay.com/itm/220V-OEM-HAKKO-FX888-65W-Electronic-SMD-SMT-Soldering-Station-Iron-Phone-Repair-/311122279863?hash=item4870553db7:g:ehYAAOSw6EhUM5W3 Is this real price for original station? Thanks in advance AI: Be careful with these kinds of questions. They can be kind of opinionated and not really related to electronic design. In my opinion Hakko is the best brand for the job (assuming non-industrial applications). If you are very new to electronics and soldering I would highly recommend buying a cheap one (like $15-$20) just to get some experience with. The one that you have listed is my personal favorite. Be warned that in certain applications it will be most desirable to have a digital soldering station. You can most definitely get by with an adjustable analog if you do not have access to one, but depending on what level of work you are trying to accomplish you might need one. I am unsure of what you mean by "is this really the price." This particular seller is selling more than just the station including tweezers, solder, and different tips (which you will definitely want if working with SMD). If you just want the station it's about $80-$90, but I think they might just be selling the digital for around $90 now. Again I don't want to sound unkind, but if you are looking for product reviews go to another site. This site is reserved for electronic design only.
H: Does a carrier wave need to be sinusoidal? I am trying to AM modulate a song and I have an idea of how I am going to do it, but I need a carrier wave to start with, and I've been having trouble creating one with a ceramic resonator because I don't have a CMOS inverter. So I thought I would go shopping for an IC that can give me a carrier wave of 500kHz and one for 700kHz. However, I don't know what I need to buy. I understand the theory behind a carrier wave, but because I plan to use a BJT for the mixer, I have the suspicion that I could use a digital wave from the oscillator on the collector and the signal to modulate on the base. Note that the diagram is meant to be basic to show my idea. simulate this circuit – Schematic created using CircuitLab Anyway, I don't know much about carrier waves besides building an AM radio receiver, and I don't know anything about amplitude modulating a signal. Would my idea work, or am I going to waste my money if I buy a chip that oscillates only square waves? Any part or schematic recommendations? AI: Square-wave carrier systems are actually not uncommon in cable, fiber-optic, or even line-of-sight optical communication systems, but it is important to note that these are closed-channel systems; effectively, all the energy the transmitter produces can for the most part be assumed to be received by the receiver. The benefit here is that the transmitter and receiver also need not care about the other undesirable (harmonic) content that was generated. For RF transmission over the air, square waves generate a lot of ugly odd-harmonic content that would violate the FCC's restrictions on bandwidth; that harmonic content would be a lower-power, lesser-quality version of your original modulated signal on a separate frequency, and the obvious consequence is interference to other users. One (notable) exception to this rule of thumb would have to be Class D AM transmitters, which work by pulse-width modulating a square wave signal.
These manage to work because they have a very stringently designed output filter that strips the harmonic and switching noise out of the transmitter signal. Note that this can be tightly filtered out, as in the Class D example above, by means of an output filter on your modulator/transmitter; you would simply design a band-pass filter to pass your modulated bandwidth around the carrier frequency while giving every other signal frequency a high degree of attenuation. So, in short, the use of a square wave as a carrier does work, but for over-the-air RF transmission, the increased complexity of the output filter design needed to ensure permissible spectral output likely trumps the convenience of a square-wave source. Have you considered using an LC circuit (i.e. a Colpitts or Hartley oscillator) or a crystal oscillator circuit as your carrier source? These can be constructed quite cheaply with a BJT and inductors/capacitors and tend to generate good sinusoidal waves with varying degrees of stability. They also have the benefit of being quite well characterized by the amateur radio community. EDIT, in response to the OP's request for guidance on Colpitts oscillators: The following is a Colpitts design I found in one of my EE texts, with many of the component values omitted to allow this to serve as generic a schematic as possible: Here, \$R_1\$/\$R_2\$ serve to bias the NPN transistor, \$C_1\$ is the emitter bypass capacitor, \$L_1\$ is an RF choke designed to stop the generated RF from seeping back into your power supply (as in Sean's comment on the other answer), and \$C_2\$/\$L_2\$ is the Colpitts tank circuit that generates the oscillation. Note that I constructed it using a split-stator/dual-gang variable capacitor. This allows the circuit to have a variable output frequency.
As I mentioned below, the resonant frequency of this Colpitts oscillator is given as follows: \$f_{res} = \frac{1}{2\pi \sqrt{L_2\cdot C_2}}\$ The effective capacitance of \$C_2\$ is the equivalent of its gangs in series, so: \$\frac{1}{C_2} = \frac{1}{C_a} + \frac{1}{C_b}\$ where \$C_a\$/\$C_b\$ are the capacitances of the gangs. Because the gangs are for the most part held in lock-step (\$C_a = C_b = C_g\$), this simplifies to \$C_2 = \frac{C_g}{2}\$. Inaccuracies in the frequency desired versus the frequency produced are likely due to parasitic capacitances in the circuit, and stray capacitance such as the inter-winding capacitance of the inductor. You can also add additional capacitance to the tank circuit to help corral or offset the frequency range from where it would otherwise be.
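Plugging numbers into the resonance formula (the 100 uH inductor and 365 pF-per-gang capacitor below are hypothetical example values picked to land near the AM broadcast band, not taken from the schematic):

```python
import math

def colpitts_f_res(l_henries, c_farads):
    """Resonant frequency of the tank: 1 / (2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(l_henries * c_farads))

def split_stator_c(c_gang):
    """Two equal gangs in series: C2 = Cg / 2."""
    return c_gang / 2

c2 = split_stator_c(365e-12)    # 365 pF per gang -> 182.5 pF tank C
f = colpitts_f_res(100e-6, c2)  # with a 100 uH inductor
print(f"f_res ~ {f / 1e3:.0f} kHz")
```

Working backwards the same way gives the L/C pairs needed for the OP's 500 kHz and 700 kHz carriers; in practice, stray capacitance pulls the real frequency a little below the computed one.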
H: Auto snap to the selected component when pressing "M, S" in Altium In the Altium schematic editor, select one component and press "M", then "S" (move the selected object); when you then press the mouse button (anywhere on the sheet, even not on the component), the cursor will snap to the component automatically. But in the PCB editor, the cursor won't snap automatically. Is there any setting to make it behave the same as in the schematic editor? Update: Let me explain in detail. Actually, I'm using two displays, one showing the schematic, the other showing the PCB, and I check "Cross Select Mode" under the "Tools" menu. So, when I select one component on the schematic, the component with the same designator will be selected in the PCB editor, but the selected component may not be visible on the current screen, for example if you haven't zoomed the board to fit your screen. Now when I press "M, S" in the PCB editor, I want the mouse to snap to the selected component without navigating to it manually. This will make component placement easier. AI: Sounds like your grid size is too small on the PCB. If you change your grid size on your PCB to something higher, you should be able to snap to whatever the grid size is. If you press G when in the PCB editor, and you change your grid to something large, like 100 mil, and zoom in so you can see the grid, you'll see that your components snap to the grid in 100 mil intervals. If you change your grid smaller, like 10 mil, then the snapping will happen on 10 mil grid points. Edit: After the comments explained more, I have a similar setup, where I either split the screen between schematic/PCB or have each on its own monitor. This is the way that I do it: Select the component in the schematic (this highlights the component in the PCB editor). Move the cursor to the PCB editor window and drag the window. This switches the context from the schematic editor to the PCB editor while keeping the component selected. If you left click, you may lose the selection.
Go to Tools -> Component Placement -> Reposition Selected Components (I have a keyboard macro that does this automatically, so it's faster). This will now move the selected component no matter where it is. Additionally, you can select multiple components in the schematic editor and use the same method, and it will automatically switch to the next component after you have placed one. So if you are placing a micro plus decoupling caps, you can do it all in more or less one go. It will also remember the selection order as well.
H: Comparator question for weak sine inputs I have the above circuit to convert mV-level sines to pulses. It is a working design; I just simulated it. When the input sine signal VG1 has an amplitude of less than around 70 mV, the comparator cannot convert it to pulses. When the input sine amplitude is more than 70 mV, it can successfully convert to pulses. Below are simulation results for the 60 mV and 80 mV input cases: I have 2 questions: 1) How can I increase the sensitivity, for example to be able to convert to pulses even at 10 mV, without using another op-amp? 2) What might be the reason for C1, C2, R3, R4 and R5, and why are they configured that way? AI: Before I can address either question, you need to modify the circuit. Connect a 1k resistor from VF1 to +12. As it stands, the output has a swing from zero to about 6 volts, but will only do this while driving a high-impedance load such as CMOS. Now for question 2. R2 and R4 produce a nominal 6 volts, and C1 AC couples the input, so the signal now swings around that 6 volts at the - input to the comparator. R3 and R5 also produce a nominal 6 volts at the + input, and this forms the reference level for the comparator. R1 provides positive feedback, also called hysteresis, and ensures a clean transition from high to low and back again. In principle, this hysteresis is what's causing your problem, although it's only part of it. Let's say the - input is more negative than the reference input, and the output is high. No current is being pulled through R1, so the output is at 6 volts. Now drive the signal higher than 6 volts. The output will go low, and the + input will drop by $$\Delta V = 6 \times \frac{\frac{4.7k}{2}}{\frac{4.7k}{2} + 220k} = .063\text{volts}$$ While that number might seem to explain your problem, it doesn't. Basically, you got lucky with your resistors. The two voltage dividers (R2/R4 and R3/R5) will only give exactly the same voltage if the resistances have exactly the same ratios, and you are probably using 1% resistors.
If you take these tolerances into account, you could get another .063 volts of difference, which would limit your useful amplitude to 126 mV. Like I say, you got lucky. EDIT - Well, I take that back. You didn't get lucky, exactly; it's just that the simulator used identical resistor values, so it's the equivalent of using extremely high-precision resistors. So my calculation is only necessary if you actually build the circuit in real life. END EDIT. As for question 1, I'd recommend something like simulate this circuit – Schematic created using CircuitLab EDIT - Since a zener is acceptable where an op amp is not, R2 can be replaced with a 6-volt zener, R1 with a 300 ohm resistor, and OA1 and C3 eliminated, as shown below simulate this circuit END EDIT This references both inputs to the same voltage, and the hysteresis is reduced to +/- 3 mV. In theory, this will allow switching on a 6 mV (pk-pk) signal. However, this also ignores the fact that the comparator has an input offset voltage, which can be as great as 9 mV, so a 10 mV signal may not be possible. Also, with the reduced hysteresis, noise on the signal may become a problem. You'll need to be careful in your construction and shielding, and do not forget to add a 0.1 uF ceramic cap to the power lead of each IC as shown. Install these as close to the ICs as possible, and a ground plane is a very good idea.
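The hysteresis figure in the answer can be checked numerically. This sketch assumes the values quoted there (R3 = R5 = 4.7 k forming the reference divider, a 220 k feedback resistor R1, and a 6 V output swing):

```python
def hysteresis_shift(v_swing, r3, r5, r1):
    """Shift of the comparator's + input when the output swings by v_swing.

    The reference divider R3/R5 looks like its Thevenin resistance (R3 || R5)
    from the + input node, and R1 couples the output swing into that node.
    """
    r_th = (r3 * r5) / (r3 + r5)
    return v_swing * r_th / (r_th + r1)

# Values assumed from the answer: R3 = R5 = 4.7k, R1 = 220k, 6 V output swing.
dv = hysteresis_shift(6.0, 4.7e3, 4.7e3, 220e3)
print(round(dv * 1000, 1), "mV")   # ~63 mV, matching the 0.063 V in the text
```

That 63 mV band is consistent with the ~70 mV threshold observed in the simulations.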
H: What is the meaning of fault_reg = ram[address] in Verilog? input [3:0] address; reg [3:0] ram [7:0]; reg [3:0] data, fault_reg; fault_reg = ram[address]; data = fault_reg; If the value of ram is 1010 and the value of address is 100, what are the values of data[0], data[1], data[2] and data[3]? Does data[0] mean bit no. 0 of ram = 0, data[1] mean bit no. 1 of ram = 1, data[2] mean bit no. 2 = 0, and so on? AI: Here, address is four bits wide and ram is declared as reg [3:0] ram [7:0];. This declares a memory that is a nibble (4 bits) wide, with 8 such nibbles, as shown in the figure below. To fully address all 8 memory locations, only 3 address bits are needed, so there is an unused bit in address. This will not cause any simulation issues, but the synthesis tool will give a warning about the unused bit of address. Also, declaring reg [3:0] data makes 0 the least significant bit (LSB) and 3 the most significant bit (MSB). So, for your questions: What is the meaning of fault_reg = ram[address] in Verilog? This assigns/copies the value from memory ram at location address to fault_reg. If the value of ram is 1010 and the value of address is 100, what are the values of data[0], data[1], data[2] and data[3]? In this example, ram[4] = 4'b1010, so ram[4][0]=0, ram[4][1]=1, and so on. Hence, data[0]=0, data[1]=1, data[2]=0, data[3]=1. Refer to the Verilog Array Input question for the unused address bit, and the Verilog Memory, Simple RAM Model links for further information.
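The indexing can be mirrored in a few lines of Python, just to illustrate the bit numbering (Verilog itself is of course simulated differently):

```python
# Model the Verilog access: reg [3:0] ram [7:0]; fault_reg = ram[address];
ram = [0] * 8          # eight 4-bit words
ram[4] = 0b1010        # address 3'b100 holds 4'b1010
address = 0b100        # only 3 address bits are actually needed for 8 words

fault_reg = ram[address]   # fault_reg = ram[address];
data = fault_reg           # data = fault_reg;

# Extract individual bits, LSB first, like data[0] .. data[3] in Verilog.
bits = [(data >> i) & 1 for i in range(4)]
print(bits)   # [0, 1, 0, 1] -> data[0]=0, data[1]=1, data[2]=0, data[3]=1
```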
H: FPC connector to DIP I have a board which is something like this: (available on eBay) I know I have to buy a socket, but how do I solder the connector onto the board? AI: You solder it with steady hands. Is it just the solder holding it in place? Solder would be enough to hold the connector, if you are gentle with it and don't yank it around too much. After all, that's how the FPC connector on the other end of the cable is attached to its board. You can also add a drop of epoxy or hot-melt glue to help secure the connector. Keep in mind that this board is only a kludging aid. On production boards, there are additional large SMT pads on each end of the connector to hold it down more securely.
H: High Temperature Conductor in Air I am trying to determine the best material for a high-temperature conductor. It is for distributing power inside a furnace. I need to run at around 1000 Celsius (the furnace is hotter, but this is how hot the conductors might get). The lower the resistance, the better. Platinum is not an option for me. It seems most refractory metals (all besides platinum?) are problematic in open air due to oxidation at these temperatures - even tungsten will rapidly oxidize at 1000 C and above. I can coat the wire to protect it, but have not been able to determine what could be reliably used. AI: The oxide should protect the material for a while, so if your conductors are thick they may last long enough. Nickel superalloys last fairly well, but they have high resistivity, which might be a problem in your application. If your maximum current is modest, I would suggest a superalloy-clad mineral-insulated cable, for example Hastelloy. Check the manufacturer's data for lifetime estimates. If you have high voltages, beware that many insulating materials become rather leaky at such temperatures.
H: Do processor commands and values have binary codes? I am working on a DIY computer and I am struggling to understand how processors handle information. I would like my DIY computer to be able to do the bare minimum, for example act as a calculator and output text. But since I will only be using commands in binary, do I have to come up with my own syntax for command handling? Such as: data (some binary digits), address (address in binary), how to handle the information (binary code for a command). I'm no expert in computer science, but as far as I understand, each group of 4 bits has a hexadecimal value and all the bits together have a decimal value. So if I program my DIY CPU to think that, say, 1001 = add, how will I be able to differentiate between that command and the decimal value? AI: I think the best way to start thinking about it is logic gates, which can be arranged into adders, flip-flops and basically everything you need. Then you just apply your binary command to the command pins, binary data to the data pins, and take the output value from the output pins. So you can have 1001 on the command pins and 1001 on the data pins meaning different things, because they are applied in different locations. You can also access commands and data on the same pins, but at different times. If you know how to make logic circuits you get the idea: when you make some specific logic circuit, you clearly have pins that are responsible for the command, separate pins for data input, and maybe some other pins. Inside a microcontroller this is probably much more sophisticated, since you can't make an entirely new logic circuit every time. I am not sure how this is done precisely, but I would guess they have a bunch of pre-made logic blocks that are routed (with transistors, probably) depending on the code. If you start to think about memory and addresses it becomes more confusing, but essentially it can all be broken down to the logic-gate level, plus maybe some analog stuff if needed (like in ADCs or some tricky constructions).
There are also FPGAs, where you can basically re-build the logic circuit every time you program them, though inside they really use Look-Up Tables (LUTs), basically mapping logical functions to truth tables, but it is the same thing in the end. Different technologies do things differently inside, e.g. an FPGA is not at all like a microcontroller. But if you break them down block by block, they are all built of transistors doing stuff. The problem is that you would spend so much time breaking down their operating principles that you would basically need to be a VLSI designer to know everything about them.
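To make the "same bits, different meaning" point concrete, here is a toy decoder in Python. The opcode table is invented purely for illustration:

```python
# Toy decoder: the same 4-bit pattern means different things depending on
# whether it arrives on the "command" lines or the "data" lines.
OPCODES = {0b1001: "ADD", 0b1010: "SUB"}   # made-up encoding for illustration

def execute(opcode, a, b):
    if OPCODES[opcode] == "ADD":
        return (a + b) & 0xF    # a 4-bit adder wraps around
    if OPCODES[opcode] == "SUB":
        return (a - b) & 0xF
    raise ValueError("unknown opcode")

# 0b1001 on the command pins means ADD; 0b1001 on the data pins is just 9.
print(execute(0b1001, 0b1001, 0b0011))   # 9 + 3 = 12
```

The hardware never has to "differentiate" the patterns by value; it only has to route command bits to the decoder and data bits to the datapath.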
H: Switch (Relatively) High Voltage from Logic Level The Situation: Basically, I would like to switch a load that requires 30v using a switch that cannot have more than 5v across it. More descriptively, I am trying to drive the anode of a VFD segment from the output of a shift register (probably a 74HC595). What I Have So Far: Because the segments are essentially common-cathode, the anode has to be switched. You can't really switch that with an N-Channel FET, as when current is flowing drain to source, the source would instantly be at ~30v, which being higher than the gate voltage (5v), would instantly turn off (I guess maybe it would actually reach equilibrium at some very low conductance?). If a P-Channel FET were used, the gate can be pulled up to the 30v rail for off, but then you have 30v at the logic output, which is far, far outside of spec. Short of shifting the logic-level ground up to 25v, which seems like a major pain, the best that I can think of is to add an N-Channel FET to drive the main P-Channel FET, and pull down the gate of that to ground, like this: (I showed it with just a simple switch and load rather than an IC output and a floating anode so that this will be hopefully more applicable and/or easier to understand for people with other applications...) The Question: The problem that I have with this is that it seems like an excessive number of components for such a simple task. There has to be a simpler way to switch the higher voltage, right? I feel really stupid having so much trouble with something so simple, but I've spent all day on it, and this (or shifting up the logic ground) is all that I've been able to think of. I guess in the grand scheme, it's only one extra transistor and one extra resistor, but it kind of adds up when you're driving a large number of lines... 
AI: You can indeed get rid of Q2, although a normal open-drain part will have trouble with high voltage on the drain as the output FET will break down -- normal HCMOS output FETs are not rated to drive 30V! A part like the TPIC6B595 will do the job, though -- that right there will save significantly on parts count. Once you have the level-shifting handled, you can then use a high-side switch array to take care of the rest of the work. (Oh, and the pullup on the output of the TPIC6B595 can be a resistor-pack, if that helps you any.) Or, you can simply use an integrated high-side driver/level shifter array.
H: Feedback resistance condition (Rf > 29R) in phase-shift oscillators In an equal-resistance, equal-capacitance CR phase-shift oscillator, the feedback resistor has to be slightly larger than 29*R. When I attempt to simulate the circuit at exactly 29*R, there's an oscillating transient response but it is not unstable, hence the circuit reaches a steady state. However, once I increase the feedback resistance slightly above 29R, the response is unstable and the phase-shift oscillator functions properly. Why is that? Shouldn't a feedback resistance of 29R be theoretically sufficient for an unstable response? AI: For each oscillator circuit, the designed loop gain has to be slightly larger than the theoretical value of "1" (the oscillation condition). This is because (a) in most cases, the calculation of the circuit assumes no parts tolerances as well as IDEAL op-amp properties (input/output impedances, infinite gain), and (b) a safe start of oscillation must be ensured with a pole pair slightly into the right half of the s-plane (loop gain > 1). As a consequence, oscillation will safely start and amplitudes will rise continuously until an amplitude limitation takes place (supply voltage limitation or other non-linear parts like diodes), thereby reducing the loop gain to a mean value of unity.
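The "29" itself can be verified numerically: for one common variant of the network (three series-C / shunt-R sections), the ladder attenuates by exactly 1/29 with 180° of phase shift at f = 1/(2π√6·RC), so the amplifier must supply a gain of at least 29. A quick sketch with made-up component values:

```python
import cmath, math

def ladder_gain(f, r, c):
    """Unloaded voltage transfer of three cascaded series-C / shunt-R
    sections (one common phase-shift network), via ABCD matrices."""
    w = 2 * math.pi * f
    zc = 1 / (1j * w * c)
    m = [[1, 0], [0, 1]]
    series = [[1, zc], [0, 1]]      # series capacitor
    shunt = [[1, 0], [1 / r, 1]]    # shunt resistor
    for _ in range(3):
        for stage in (series, shunt):
            m = [[m[0][0] * stage[0][0] + m[0][1] * stage[1][0],
                  m[0][0] * stage[0][1] + m[0][1] * stage[1][1]],
                 [m[1][0] * stage[0][0] + m[1][1] * stage[1][0],
                  m[1][0] * stage[0][1] + m[1][1] * stage[1][1]]]
    return 1 / m[0][0]              # V_out / V_in with no load

r, c = 10e3, 10e-9                  # made-up values: 10 k and 10 nF
f_osc = 1 / (2 * math.pi * math.sqrt(6) * r * c)
beta = ladder_gain(f_osc, r, c)
print(round(abs(beta) * 29, 3), round(abs(math.degrees(cmath.phase(beta))), 1))
```

The magnitude comes out at exactly 1/29 with 180° of phase shift, which is why the loop gain sits precisely at unity with Rf = 29R; any real-world build needs a little more.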
H: Is it illegal to transmit in the FM range for personal use I am building an FM transmitter. It is going to be designed to transmit only over a short range of around 50 to 100 meters. My question is whether it is illegal to transmit signals in the FM band over this short range without owning a license. If yes, what should I do to test it? P.S. I live in India. AI: Different locations will have different rules. If you live in India, check the regulations for India. Generally, powerful transmissions in licensed bands like FM audio must be licensed, to prevent interference with other users. If other users complain, you may get tracked down, and your equipment confiscated or destroyed. Most places have unlicensed bands as well, like the so-called ISM frequencies, and usually a few other bands for other purposes, like garage door openers and short-range personal comms. If your transmitter is very low power, and operated at times and frequencies where it could not interfere with any stations your neighbours in range are tuned to, then you can probably figure out what the actual chances of causing trouble or getting caught are.
H: How to verify the reliability of torque ratings of DC motors given on online sites? I have to decide on the motors to be used for my robotics project, but while searching for DC motors I have found a rather unusual fact: I can find several motors that provide the same torque while there is a significant difference in weights. I am curious to know why. For example, I can find a 12 kg-cm torque motor weighing just 125 g, whereas I can also find a 17.5 kg-cm torque motor weighing 600 g. I really don't understand where the difference lies. And is it reliable to use the lower-torque motors? AI: It's all about efficiency. Basically, if you put more winding copper into the motor, it will be able to produce a given amount of torque more efficiently than one with fewer windings. However, more windings will also make the motor much larger physically. Which of these is important to you really depends on the application. For example, an air conditioner motor will have more windings because efficiency is more important than weight, while a model aircraft motor will have fewer windings and trade efficiency for weight. In terms of reliability, the main issue is heating in the motor. The smaller motor will simply get a lot hotter and also have less mass to absorb the heat. If you don't deal with this heat properly then it could fail. The other aspect is the motor RPM. If you can run a motor faster, it is generally more efficient. However, at some point the motor will either fly apart, or core losses will start to dominate, reducing efficiency again. In a well-designed motor these points will all converge at the maximum operating speed (hopefully with a bit of safety margin on the flying-apart RPM). In terms of assessing this information, you'll need to look at the rpm/voltage or torque/current constant and the winding resistance.
Using these two numbers you can work out the efficiency of your system and decide what sort of efficiency/size tradeoff is appropriate for your application.
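As a sketch of that calculation (all numbers invented for illustration): given a torque constant Kt and a winding resistance, the copper loss at a given torque follows directly, and comparing a high-resistance "small" motor with a low-resistance "large" one shows the efficiency gap:

```python
def efficiency(kt, r_winding, torque, speed_rad_s):
    """Crude copper-loss-only efficiency estimate at one operating point.

    kt: torque constant in N*m/A; r_winding: ohms;
    torque: N*m; speed_rad_s: shaft speed in rad/s.
    Ignores core, friction and other losses.
    """
    i = torque / kt                  # current needed to produce this torque
    p_out = torque * speed_rad_s     # mechanical output power
    p_copper = i ** 2 * r_winding    # I^2*R heating in the windings
    return p_out / (p_out + p_copper)

# Same torque constant and load, but different amounts of copper:
small = efficiency(kt=0.01, r_winding=2.0, torque=0.05, speed_rad_s=100)
large = efficiency(kt=0.01, r_winding=0.5, torque=0.05, speed_rad_s=100)
print(round(small, 2), round(large, 2))   # the low-resistance motor wins
```

The I²R term also tells you how much heat the small motor must dissipate, which is where the reliability concern comes from.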
H: Is there any science (or trick) to determining a replacement op amp? I have a "mature" circuit design that calls for an LF411 op amp. I've built it up with a through-hole PDIP and it works just fine. Now I'm designing an SMD PCB for the circuit so I have the opportunity to review the device selection and perhaps choose a more modern component for some of the BOM items. This seems like the sort of situation a practicing EE must encounter from time to time. Is there any science or perhaps a small bag of tricks for selecting a replacement device when its predecessor has perhaps become long in the tooth or even obsolete? In this particular case, I initially (somewhat hastily) used a TL071 in the prototype because I had one on hand. It turned out to have substantially more input voltage offset and in this (DC lab power supply) circuit didn't allow the output voltage to be adjusted all the way to zero. I got an actual LF411 a few days later and that fixed things up nicely. So I know I need reasonable precision, as far as the voltage offset is concerned, and also I noted it's a JFET input, so the input current is quite small. I could redo the analysis of the circuit and simulate, etc. to essentially "re-spec" the part. (Or I could just stick with an LF411 in a SOIC.) But I wanted to take advantage of the review opportunity and wondered how a practicing analog circuit designer would approach the problem. Any pointers for a well-trained but practically inexperienced newb? AI: Start with power supply requirements - does the prospective replacement work on the power rails supplied? Then, on a similar theme, determine the amount of ripple or noise on those power lines to make sure the replacement has power supply rejection figures suitable (or maybe better than the current op-amp). Look at data sheet graphs for this. They should tell you PSRR figures across a wide range of frequencies. If the prospective op-amp doesn't have graphs don't use it. 
Input common mode range and expectations of how large an output signal it has to produce might come next. Clearly, choosing an op-amp that has better input and output range is fine but, in reality, this may limit choices, so an analysis of the circuit and studying the expectations of what might be asked of the op-amp are both important. Gain bandwidth product is important to consider - does the prospective op-amp have enough to fulfil the gain and bandwidth expectations of the design? Has it got sufficient slew rate to deal with full-range output signals at the highest frequency required? Again, you can just choose a better device but this (and all the other parameters) might limit your choices, so it's always a good idea to realize what the expectations of the target design are. I'm not going to go into detail any further other than to make a list; all are generally important in one way or another:

- Input offset voltage (produces a DC output voltage error that is "gain" x input offset)
- Input offset voltage drift with temperature (can't be nulled out)
- Input offset/bias currents and their drift with temperature (only of concern when resistors around inputs and feedback are medium to high value)
- Low frequency noise figures and equivalent input noise densities for voltage and current
- Open loop DC gain (basically produces a DC error - important when buffering voltage references that are meant to be extremely accurate, for instance)
- Phase margin and gain margin - poor margins produce ringing and possibly oscillation
- Common mode rejection ratio
- Gain stability in low gain configurations - see also phase/gain margins
- Package options
- Input capacitance (e.g. can produce filter errors and can create substantial HF noise)
- Output current drive capabilities
- Short circuit current (could be too high and destroy itself)
- Output loading capabilities (including capacitance and overshoot)
- Power supply current and power supply decoupling requirements
- Settling time (can be important now and then)
- Environmental temperature range
- Differential input voltage limitations (if used as a comparator)
- Dynamic output impedance in closed loop/open loop - poor op-amps in this respect might not make good active filters, especially when not operated at unity gain
- Input signal inversion problems (quite a few op-amps will invert the input signal when inputs are too large in amplitude)
- Maximum power dissipation
- Total harmonic distortion versus frequency (important for audio applications, for instance)
- Potential ambient light sensitivity issues (it can be a problem with many op-amps, and in the IR region too)
- Positive and negative overload recovery times

Hopefully, if all the above are considered, you should be OK.
H: A NOT gate with gradual changes between states I have a simple NOT gate circuit with the following arrangement. LED D1 will light up by default. If I push SW1, the transistor Q1 activates and D1 will be turned off. This NOT behaviour works just fine. The problem is, I want D1 to turn off gradually (the gradation has to be noticeable by humans, so 1 or 2 seconds would probably be enough). I put the capacitor C1 there for that very reason. I thought that by having C1 in that position, the activation of Q1 could be made gradual, because initially current would be consumed by C1's charging. It doesn't work. D1 still has an abrupt change of state from ON to OFF. I wonder if I did something wrong with the circuit design? AI: You have not given C1 a means to discharge well below the point where it turns on Q1. Therefore, it will quickly reach that point when a little current is dumped onto it through R2. Even if C1 were kept at 0 V before SW1 is closed, you still don't have enough of a time constant, and the transition will be fairly abrupt. (10 kΩ)(100 µF) = 1 s. However, this will trip when the base voltage gets to about 700 mV out of 6 V, which is about 0.12 time constants, or roughly 120 ms. Here is a crude circuit that has a better topology for what you are trying to do. It's not a great way to do this, but I tried to use about the same parts. By using the transistor as an emitter follower, the current through the LED will vary less abruptly. However, this still doesn't take the logarithmic response of the human visual system into account. C1 and R1 are the timing components to turn the LED on, and C1 and R3 for off. This means you can adjust R1 and R3 separately for different on and off responses. R2 is sized assuming D1 is a typical green LED with about a 2.1 V drop when on, and a current rating of 20 mA.
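The 120 ms figure above drops straight out of the RC charging equation; here is the arithmetic in Python, using the values from the answer:

```python
import math

def time_to_threshold(r, c, v_supply, v_threshold):
    """Time for an RC charge from 0 V to reach v_threshold:
    v(t) = Vs * (1 - e^(-t/RC))."""
    tau = r * c
    return -tau * math.log(1 - v_threshold / v_supply)

# Values from the answer: 10k into 100 uF, 6 V supply, ~0.7 V base turn-on.
t = time_to_threshold(10e3, 100e-6, 6.0, 0.7)
print(round(t * 1000), "ms")   # about 124 ms
```

So even with a full 1 s time constant, the base crosses the turn-on threshold in roughly an eighth of a second, which is why the LED still appears to switch abruptly.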
H: Does "POLY-MET" unambiguously describe a capacitor type? I have a BOM where one of the capacitors is described as .33UF 250V 10% POLY-MET. I did a search on the term "poly-met" and found a Cornell Dubilier device with that particular designation. It's a metallized polypropylene capacitor, which makes sense of the nickname. But I'm wondering, couldn't it also mean a metallized polyethylene or poly-other-ene construction as well? Is it a conventional nickname applied in particular to metallized polypropylene, or perhaps the -ene that's used doesn't matter that much for most applications? (In this case it's used for high-frequency bypass on a transformer secondary before a bridge rectifier.) AI: My first guess for what "poly-met" means was "metallized polypropylene". However, using that abbreviation when it's important that this be understood clearly without additional information is irresponsible engineering. Any competent BOM will show a part number, which is the real definition of the part. The description is to give you a quick idea of what part to expect. This could be a manufacturer's part number, a distributor stock number, or an in-house part number. Preferably, more than one of these is filled in. Either way, look these up to get the definitive answer.
H: Why do multimeters have an aluminium foil at the back? I was wondering what is the purpose of aluminium foil at the back of some multimeters? I did a continuity test and found that it is connected to the ground (COM) terminal. My initial guess was that it was some kind of shield but since it wasn't fully enclosed and multimeters don't work in that high RF range it didn't seem plausible. Also, it wasn't some kind of ground plane as it is only connected to the board at one point via the spring. What is it exactly? AI: It's a shield, but to keep RF out, not in. The multimeter does not use high frequencies internally, but it does have high-impedance internal nodes that could be affected by external sources of RF.
H: EAGLE change name of wire label on all similar labels I have some schematics with, for example, a power wire named VCC. I need to change this name to USB_VCC on all wires that are named VCC. Since I have a lot of wires with names to change, is there a function in EAGLE that allows me to do something like: "Change all wires VCC to USB_VCC"? It would save me a lot of time and ensure that all wires are changed without renaming errors. AI: You rename nets like anything else, using the Name tool. If you click on any section of a multi-section net, you get a slightly different window than if it is a single section: Notice how there is a pair of radio buttons. If you want to rename just one section/segment, select this Segment, otherwise to rename the entire net, select every Segment of this Sheet. If the net spans multiple sheets, that second option will instead read all Segments on all Sheets. This is fine for nets which you have named, i.e. ones that don't have a supply-type pin on them, such as the GND, VCC, V+ etc. symbols. In the case where your net is named using a symbol from the supply1 or supply2 libraries, you should not use the Name tool on this net. Why? Because when you copy the supply symbol and place it on a new net segment, that segment will take the original name rather than the new one. So how do you get USB_VCC? Well, the simplest answer is to open the supply1 or supply2 (or any other) library and create a new supply symbol. In the library, draw out your symbol (or copy and paste from an existing one). The image below shows two key settings for the pin in that symbol - these settings govern the name of the net when the supply pin is connected. Then create a device in the library with the name of your symbol (e.g. USB_VCC), and use your new supply symbol. You don't need a package, just the symbol. You can then save the library and insert your new supply symbol. You will have to replace the existing symbol on all segments of the net - e.g.
using the replace tool. If you already have the board routed, you may want to close the board file, replace all segments with the new symbol (to prevent wires being disconnected in the board), then save. Open the board back up - there will be forward/backward annotation errors now. Simply rename the net in the board file (it will rename the whole net), and this should bring the boards correctly back in sync. If they don't come back into sync, it actually nicely tells you that you missed a segment of the net when you were replacing symbols, and you can tell from the error which segment you missed (if any) and can then go correct it.
H: Sine to pulse converter circuit help I have a sine-to-pulse converter box (an LM2903 is used); it works very well and I also have the circuit schematics. The converter converts mV-level sinusoidal signals to pulses. I want to rebuild it, so first I drew it in LTspice, but it doesn't work. It outputs a constant negative voltage, not pulses, for the given sinusoidal input signals. What is wrong? AI: Oh, I know why it's not working without even trying it. The LM2903 has an open-collector output and you've tied the load to ground. It's possible you have some other error besides this, though.
H: Good book references for physics and math for electronics Next year I'll join an engineering school in Dijon, France, in the field of electronics (embedded systems). I am currently taking a two-year general IT course, and my main problem is that I am very bad at math and I have no real background in physics. So I'm wondering if you can help me with some book titles to get me started. I used to be good at math, but I made the mistake of taking one year of law (I know...) so I forgot everything. I really need a book that explains concepts very thoroughly and features exercises with their solutions. If you can think of books other than math and physics that could help, feel free to propose them. Thanks in advance for your help. AI: I would strongly recommend "The Art of Electronics" to you. Very gentle on the math and very thorough. https://en.m.wikipedia.org/wiki/The_Art_of_Electronics Math is your friend in electronics, and maybe you'll begin to enjoy it. That is the difference between being good and not.
H: What is the equivalent resistance between Nodes A and B in this infinite resistor network? Please provide analytical solutions or instructions and not simulation results. Thanks. simulate this circuit – Schematic created using CircuitLab AI: The horizontal resistors have no current in them because the circuit is symmetrical when mirrored in the X-axis. They can be ignored. Starting at the centre, you have some network X with 2 R's in series and then an R and a 2R in parallel. R//2R is 2R/3. So at the n+1st iteration, you have (// means parallel) Rn+1 = (2R+Rn)//(2R/3) = (2R + Rn)*(2R/3) / (8R/3 + Rn) In the limit, Rn+1 = Rn = Rx (if the sequence converges, which it must), so you get Rx = (2R + Rx)(2R/3) / (8R/3 + Rx) 8R*Rx/3 + Rx^2 = 4R^2/3 + 2R*Rx/3 8R*Rx + 3Rx^2 = 4R^2 + 2R*Rx 3Rx^2 + 6R*Rx - 4R^2 = 0 This has a negative root, and a positive one at +0.528*R. So the total resistance is 0.528*R. You can check that the recursion holds: start with 0.528, add 2 in series = 2.528, then in parallel with that there are 1 and 2: 1/Rn+1 = 1/2.528 + 1/1 + 1/2 = 1/0.528 ==> Correct.
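The fixed point can also be found by simply iterating the recursion from the answer (add 2R in series, then put R and 2R in parallel with it). Note the iteration settles at about 0.528·R, which is the positive root of 3Rx² + 6R·Rx − 4R² = 0:

```python
def iterate_ladder(steps=50, r=1.0):
    """Build the ladder from the inside out, applying the recursion
    Rn+1 = (2R + Rn) // R // 2R repeatedly ('//' meaning 'in parallel')."""
    def par(*rs):
        return 1.0 / sum(1.0 / x for x in rs)
    rx = 0.0   # any non-negative starting guess converges to the same limit
    for _ in range(steps):
        rx = par(2 * r + rx, r, 2 * r)
    return rx

analytic = (-6 + 84 ** 0.5) / 6   # positive root of 3x^2 + 6x - 4 = 0
print(round(iterate_ladder(), 4), round(analytic, 4))
```

Both the iteration and the quadratic's positive root agree to many decimal places after only a few steps, since each iteration shrinks the error by more than a factor of 20.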
H: Data encoding for GFSK wireless We are designing an RF link (2GFSK) using the CC1125, to replace an older system. The older system uses NRZI data encoding with bit stuffing. Bit stuffing is used to remove the presence of the stop flag (0x7E) in the data and to avoid having too many 1's in a sequence. An older RF engineer told me that bit stuffing was important, not only for synchronization, but also because the receiver adjusts to the centre of the two frequencies to achieve optimum reception. It is therefore sensitive to a long sequence of equal bits, as it would track away from the centre and towards the frequency f0 or f1, depending on the sequence. But, in the case of the CC1125, or any other RF chip, how do I calculate the effect on receiver sensitivity (BER) due to a long sequence of equal bits? Example: If we send a 24-bit preamble with a 32-byte payload, how do I calculate the maximum number of equal bits allowed before causing some percentage of errors? AI: Here's a non-mathematical approach to the problem using FSK as the example. My intent is to show that several factors contribute to being able to calculate an answer (which I don't intend to do). Think about a simple data slicer sat after a simple FM demodulator: - Next, imagine the receiver isn't locked into any valid transmission but after a little while along comes a TX preamble: - Before the TX preamble, the data slicer is just receiving random noise from the demodulator and it's trying to make sense of that random noise because it's not a very clever circuit. The blue line is the data slicer trying to track a potential FSK signal and, if the demodulator has a bandwidth of several MHz, the blue line can be sat several MHz away from where it should be when a proper transmission eventually comes along. OK so far?
So, along comes the TX preamble, and that TX preamble has to be long enough to drag the data slicer's filter (blue line) from one extreme of the demodulator's output to the precise centre frequency of the transmission. That's its whole purpose in life. Are you able to see that in the diagram? The data slicer above uses a simple RC low-pass filter that has a 3dB point at a much lower frequency than the maximum data rate. It has to be like this or, when a bunch of zeros or ones comes along in the transmission, the filter will migrate towards one side of the data and eventually there will be corruption. So there are several factors: - How wide might the demodulator's frequency range be compared to how tight the bandwidth of the transmission is? How long is the preamble, in order to align the data recovery circuit with the transmission centre frequency? What type of filters (1st order, 2nd order etc.) are used to align the data recovery? How much noise is there - in other words, how far from the precise centre of the FSK bitstream can the data recovery circuit's estimate be before noise corrupts? How clever is the data recovery system at adapting its filters (once locked onto a preamble) so that drift away from the precise centre frequency (due to continuous runs of zeros or ones) is slowed down? This can make a massive difference of course - intelligence in this area is fundamental to reducing preamble length whilst living with extensive runs of no data transition. This was a simple example of FSK.
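The drift mechanism can be illustrated with a toy simulation: a first-order low-pass filter (standing in for the RC data slicer) tracks the demodulated signal, and a long run of identical bits drags the slicing threshold toward that level until the margin collapses. All values here are arbitrary and for illustration only:

```python
# Toy data-slicer model: the threshold is a first-order low-pass of the
# demodulated signal; a long run of equal bits erodes the slicing margin.
ALPHA = 0.02  # filter coefficient: RC much slower than the bit rate

def min_slicing_margin(bits, f0=-1.0, f1=+1.0):
    threshold = 0.0
    margin = float("inf")
    for b in bits:
        level = f1 if b else f0
        threshold += ALPHA * (level - threshold)  # RC tracking
        margin = min(margin, abs(level - threshold))
    return margin

balanced = [i % 2 for i in range(200)]  # alternating bits (well stuffed)
long_run = [1] * 200                    # 200 equal bits in a row

print(round(min_slicing_margin(balanced), 2))  # 0.98: healthy margin
print(round(min_slicing_margin(long_run), 2))  # 0.02: almost no margin left
```

This is why bit stuffing (or another run-length limit) matters: it bounds how far the slicer can drift between transitions.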
H: Why are the three component LEDs in an RGB LED so unbalanced? I was recently spec-ing some RGB LEDs for a project, when I noticed that the millicandela ratings on the three colors are rarely close to the same number (i.e. 710mcd Red, 1250mcd Green, 240mcd Blue). Does this cancel out somehow, or does this mean that the LED will always look yellowish? Also, why do manufacturers make such unbalanced LEDs? Wouldn't it make more sense to pair up 3 LEDs of approximately the same brightness? Example: CLY6D-FKC-CK1N1D1BB7D3D3 made by Cree AI: Sounds about right. To get white (6500K) using NTSC (colour TV) phosphors, the relative intensities are G=0.59, R=0.30, B=0.11 - most of the energy is in the green, least in the blue (slightly differently rounded numbers in Wikipedia). At equal intensity, blue would appear brightest. The actual numbers will differ here (LEDs, not phosphors) but the relative intensities are actually more similar than I expected. Spehro's interesting comment goes some way to explaining why. The candela is a definition of luminous intensity that is weighted such that 100mcd of red, green, or blue light are perceived as equally bright. Now, as I understand the colour space conversion process, it doesn't follow that mixing equal perceived intensities of R, G, B will result in what we see as white! Indeed how can it? Our eyes are most sensitive to green. So the actual intensity of the green light is reduced in the definition of the candela to give the same perceived intensity as red and blue (nitpick: I believe the other intensities are increased instead). Then, to mix the three and make white, we need to increase the perceived intensity of green light to restore the correct intensity in the mixed light. (That is why the measured intensity must be greatest at the wavelength where our eyes are most sensitive. It wouldn't make sense otherwise!)
In other words, 100mcd each of red, green and blue contains much less actual energy on the green channel, whereas true white light would contain approximately equal energy in each channel - hence the definition of "white noise" in electronics. EDIT : An interesting article places the quantum efficiencies of red and blue LEDs in the 70-80% region, well above that of (previous to 2008) green LEDs (it's a sales pitch, after all!). This makes it likely that, whatever the reason for the low intensity of the blue LEDs, it isn't that they are difficult to make. So the relative intensities of the three LEDs in the question is the manufacturer's attempt to undo this weighting and match the LEDs so that the light generated is approximately white at rated current. Illustration (image source) To my eyes at least, in the illustration above, G is by far the brightest primary, with R second and B darkest, yet when mixed, they produce a pretty good white.
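The weighting can be sanity-checked against the part in the question. Scaling the NTSC white-point weights quoted above by the part's total luminous intensity (a rough illustration only; real parts and bins vary):

```python
# Scale the NTSC white-point weights (G=0.59, R=0.30, B=0.11) by the total
# luminous intensity of the part in the question (710 + 1250 + 240 mcd).
weights = {"R": 0.30, "G": 0.59, "B": 0.11}
total_mcd = 710 + 1250 + 240  # 2200 mcd

mix = {c: round(w * total_mcd, 1) for c, w in weights.items()}
print(mix)  # {'R': 660.0, 'G': 1298.0, 'B': 242.0}
```

The result (660/1298/242 mcd) lands close to the quoted 710/1250/240 mcd ratings, supporting the idea that the manufacturer balanced the dies for white.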
H: Is there a latch / flip-flop with this behaviour? I've looked at the most popular flip-flop types, and none of them seem to have this desired behaviour: It would have two inputs: a set signal, S, and a data signal, D. If the set signal is true, it would save whatever is in the data input. However, if the set signal is false, nothing would change. Let Q be the current saved bit. This would be the truth table:

S D | Q(next)
0 0 | Q
0 1 | Q
1 0 | 0
1 1 | 1

I've managed to reproduce this behaviour using a JK flip-flop, two AND gates and one OR gate. Wouldn't this be particularly useful in computers? If so, why is there no such flip-flop (I may be wrong here)? AI: If this device has a clock, it's a D flip-flop with Enable. If there is no clock, it's known as a "gated" D latch. (https://en.wikipedia.org/wiki/Flip-flop_(electronics)#Gated_D_latch)
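The behaviour in the truth table is simple enough to model in a couple of lines; a Python sketch of the gated D latch:

```python
# Behavioural model of a gated D latch: when S is high, Q follows D;
# when S is low, Q holds its previous value.
def next_q(s, d, q):
    return d if s else q

# check against the truth table in the question
assert next_q(0, 0, q=1) == 1  # holds
assert next_q(0, 1, q=0) == 0  # holds
assert next_q(1, 0, q=1) == 0  # loads 0
assert next_q(1, 1, q=0) == 1  # loads 1
print("truth table verified")
```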
H: Behaviour of capacitor discharging through a transistor's base I have the following circuit, which attempts to turn an LED on and off in a gradual manner (instead of having an abrupt change between states). So when I press S1, LED1 will light up gradually, thanks to C1's charging. I can observe a period of about 1 second until the LED is fully lit. This works as expected. However, if I release S1 afterwards, LED1 will gradually turn off, but it takes very long, 30 seconds or so, until the light completely vanishes. So C1 discharges really slowly, and I don't understand this behaviour. I thought C1's discharging process should be fairly fast, since there is only a 220 ohm resistance (plus the diodes' resistance) between its two poles? Can someone explain to me what's happening here? AI: Your analysis is flawed. If there were a 220 ohm discharge path, it would also be present when charging, BUT clearly this isn't the case by your own acceptance of what is going on. You said: "This works as expected." When you open the switch, there is a very weak (by comparison) discharge of C1 through the 220 ohm multiplied by the hFE of the transistor, so 220 ohm becomes more like 22k or much greater (transistor dependent). If you want it to discharge at approximately the same rate, try using a 2-position switch where one position connects to a 10k pull-up resistor (as per your circuit) and the other position connects to a 10k pull-down resistor.
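The asymmetry can be put in numbers. With an assumed capacitor value and hFE (the question doesn't give them), the two time constants differ by roughly the transistor's current gain:

```python
# Charging goes through ~220 ohm, but discharge through the base sees
# roughly 220 ohm multiplied by hFE. Capacitor value and hFE are assumed.
R = 220.0      # ohms (from the schematic)
C = 1000e-6    # farads, assumed
hFE = 100      # assumed current gain

tau_charge = R * C             # ~0.22 s: LED fades up within ~1 s
tau_discharge = R * hFE * C    # ~22 s: consistent with the ~30 s fade-out
print(round(tau_charge, 2), round(tau_discharge, 1))  # 0.22 22.0
```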
H: How to bias a BJT using a Widlar current source? Here is a Widlar current source driving a load: It gives an almost constant current for a wide range of load resistances: Here is one circuit involving a BJT: You can see it is biased using one voltage source and one current source. My question is, how can I use the Widlar current source to bias this circuit? I mean, how should I connect it? AI: Connect the collector of Q2 from the 1st circuit to the emitter of the NPN in the 2nd.
H: Confusion about solar cells and MPPT Pardon the potentially silly question. I have been reading about photovoltaic (PV) cells and max power point tracking (MPPT). From what I understand, an MPPT module can use a DC-DC converter to change the voltage on the PV's terminals to its corresponding MPP voltage (Vm). My question is, how is the PV cell's terminal voltage clamped to Vm? For example, the PV terminal voltage is 20 V and Vm is 17 V. Can we create a circuit like this to achieve MPP? PV 20V -> DC-DC Converter 17V -> Other stuff This means that the PV terminal voltage is still at 20 V. Or, do we need to accomplish this? PV 17V -> Other stuff Where a DC-DC converter forces the terminal PV voltage to be 17 V. If this is the case, how is this actually done? I feel like I'm missing something really obvious. AI: A DC-DC converter under MPPT control must have a suitable load on it in which to dump all the power the panels can supply, for instance charging a big enough battery bank, or an inverter into the area's mains power supply. It then simply increases the current it draws from the panels, which drops the panel voltage, until the algorithm decides that it is drawing the maximum power it can. Drawing any more current at this point would reduce the voltage so much that the power drops. The MPPT controller doesn't have magic in it to predict how far the voltage will drop. It will dither the current either side of the operating point, see what the conditions are, and operate near the MPP. That way it can keep up with changing illumination conditions.
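The dithering described above is the classic "perturb and observe" approach. A toy Python sketch against a made-up panel curve (all numbers hypothetical) shows the operating point walking up to the knee of the curve and oscillating around it:

```python
# Perturb-and-observe MPPT against a crude PV model (hypothetical values).
def panel_power(v, v_oc=20.0, i_sc=5.0):
    if v <= 0.0 or v >= v_oc:
        return 0.0
    return v * i_sc * (1.0 - (v / v_oc) ** 8)  # current collapses near Voc

v, step = 10.0, 0.1
p_prev = panel_power(v)
for _ in range(500):
    v += step
    p = panel_power(v)
    if p < p_prev:      # power fell: reverse the dither direction
        step = -step
    p_prev = p

print(round(v, 1))  # dithers around ~15.2 V, the maximum-power voltage
```

A real controller adjusts duty cycle (hence current) rather than voltage directly, but the hunting behaviour is the same.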
H: Running Nokia 5110 with 3 pins I recently received a Nokia 5110 LCD screen and have hooked it up to my Uno and got it displaying stuff. I'd like to try using it in a project with an ATtiny85 and found some resources to that effect; however, I'd like to be able to run it from fewer pins than the 5 I've started off with. I found a few things online which suggested I can tie RST to VCC and the CE pin to ground. I connected RST to VCC and that seems to work OK. Connecting CE to ground doesn't work properly - it either displays half inverted with a bit of garbage or, more usually, just displays nothing at all. From looking at the datasheet, CE needs to see a falling edge in order to enable. I reconnected it to a pin and modified my code so that it simply writes it HIGH and then LOW in the setup() function and leaves it alone after that, and the screen seems to work happily. Feels like a waste of a pin though! Is there any way I can simulate the falling edge without it needing to be connected to a pin? AI: What you want to do (one falling edge on SCE, keeping it low ever after) is allowed according to the PCD8544 datasheet, but it has a lurking problem: you no longer use the SCE edges to indicate the byte boundaries, so after one missed clock (for example due to noise), you will be forever out of sync with the LCD. An IMO better approach would be to use an SPI extender (a simple HC595 will do) to supply all the signals in your system that need to change only rarely (for the LCD: RST). This approach takes the same number of pins as driving all LCD pins directly, but gives you 7 'slow pins'.
H: Storing the charge from a MOSFET Bridge Rectifier I am using the following MOSFET bridge to harvest AC bursts that come from an electromagnetic transducer. In front of this circuit is a supercapacitor I have added. I read the following on a webpage that talks about this design: One caveat of the FET bridge circuit: do not use it as the rectifier in front of a capacitor-input power supply! In a conventional rectifier bridge, the diodes prevent the backflow of current from the power supply input capacitor as the applied voltage drops below the voltage on the capacitor. With this design, the MOSFETs act like switches rather than one-way valves for current flow. They don’t care which way current flows, hence the input capacitor of the power supply will be discharged to near zero volts with each half-cycle of the applied AC power! This limits the power supply applications for this circuit to inductive- or resistive-input designs. I would like to know a successful method I can use to store the charge coming from the electromagnetic harvester into a supercapacitor, using this FET-based rectifier. This temporary charge in the capacitor is used by an energy harvesting IC to store energy into a LiPo battery. The bottom line is that I want to overcome this problem of losing charge from the capacitor in the reverse direction. AI: You could use a dual Schottky on the AC terminals and take the +DC out from the CT (centre tap) of the dual diode. This means that the P-channel FETs are not needed. The P-channel FETs have higher RDS(on) and are more expensive. I have implemented something similar on a 12V 8A battery charger that had six 3300 microfarad electrolytics across the unregulated DC bus, which was nominally 19V. What has been stated so far will be reliable for your job.
If your input voltage is very low, you may have to sense the drain-source voltage and turn off the gate when current tries to run back. A jargon term for this is a "FIODE"; in other words, you want the FET to behave as a low-drop diode. I have done this on a mains bridge using some small BJTs hung around the FETs.
H: What is the best way to measure voltages higher than the ADC internal reference? Let's say an ADC has an internal voltage reference and I want to measure voltages greater than that reference. I can use a voltage divider on the input(s) to limit my max input signal to said Vref. Now let's assume the ADC allows for an external voltage reference. I can use an external reference that matches my max input signal. Hardware is not my thing, but I see a couple of trade-offs right away: 1) the voltage divider(s) are significantly cheaper; 2) if there are more than a few inputs, the external Vref requires fewer overall components. Ignoring those (if you want), which is the better approach? Adding info for The Photon: 12-bit ADC, 0-5V input signal. AI: ADC external references have a defined allowed voltage range which rarely exceeds the supply voltage of the ADC by much, therefore this is not going to help you. External voltage references are useful because they are much more accurate, less sensitive to power supply variations, and drift less with temperature. When your signals are smaller than the internal reference, using an external reference of smaller value helps by scaling. But when they are higher, you are stuck with voltage dividers (as far as I am aware/can remember). Don't forget that ADCs have a finite input impedance, and therefore not only will your voltage divider need to be a negligible load for whichever signal you are measuring, but also the ADC's input impedance should be a negligible load for the voltage divider. For example, if your ADC has a minimum input impedance of 100 ohms (e.g. for the Arduino Due), the impedance of the voltage divider seen by the ADC must be at most 10 ohms to get 10% accuracy, which is certainly too high a load for your signal (unless it's provided by a power rail). This means either you will need to add a buffer (e.g.
voltage follower opamp) between your signal and the voltage divider (for the above values), or between the voltage divider and the ADC (for higher resistances), or both as illustrated in the following schematic. simulate this circuit – Schematic created using CircuitLab Don't forget to select which tolerance you want on the resistors. 5% resistors will set the accuracy on the gain to 10%, 1% to 2% etc. Note: depending on your accuracy requirement, you may need to study the impact of input bias currents, offset currents, offset voltage, gain error, noise etc.
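The loading argument is easy to quantify: the divider's Thevenin (source) impedance forms a second, unintended divider with the ADC's input impedance. A Python sketch with assumed values (10k/10k divider, 100 kohm ADC input):

```python
# Error introduced when an ADC of finite input impedance loads a divider.
def divided_voltage(v_in, r_top, r_bot, r_adc):
    v_open = v_in * r_bot / (r_top + r_bot)       # unloaded divider output
    r_thev = r_top * r_bot / (r_top + r_bot)      # divider source impedance
    v_loaded = v_open * r_adc / (r_thev + r_adc)  # output once the ADC loads it
    return v_open, v_loaded

# 5 V halved by a 10k/10k divider into an assumed 100 kohm ADC input
v_open, v_loaded = divided_voltage(5.0, 10e3, 10e3, 100e3)
gain_error = 100.0 * (v_open - v_loaded) / v_open
print(round(v_loaded, 3), round(gain_error, 1))  # 2.381 4.8
```

A ~5% gain error from loading alone swamps a 12-bit ADC's resolution, which is why a buffer (or a much stiffer divider) is usually needed.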
H: Maximum output current OpAmp simulate this circuit – Schematic created using CircuitLab This is the circuit schematic of an astable multivibrator. I have to choose the resistance value of R3 in order to limit the output current of the op-amp. The op-amp model is the TL081 from Texas Instruments, and this is the link to its datasheet. I am not able to find any information about the maximum output current in it. Can anyone help me? I have another question: is there any other reason why the resistor R3 is necessary in this circuit? AI: In the data sheet there are hints about what you can load the op-amp with. The maximum peak-to-peak output voltage when loaded with 10 kohm is guaranteed to be +/-12V on a +/-15V supply. When loaded with 2 kohm the max voltage is somewhat less, at +/-10V. I would usually consider 2 kohm loading to be the minimum you should have. All that info is on page 6 of the DS. I would also look at these graphs for what to expect at lower supply voltages: - These are on page 9, and page 10 gives you more information when the ambient temperature is different. See also this graph on page 10: - I am not able to find any information about the maximum output current in it. The last graph should give you the information you need, and you should probably consider 1 kohm for R3.
H: DRAM timing with row and column decoders Consider a 64Kx1 DRAM memory, which means the number of rows is 256 and the number of columns is 256. In other words, two 8-to-256 decoders are needed for selecting the right row and column. Since each memory location is 1 bit wide and we usually read 8 bits, does that mean that with a single row number, the column number must change 8 times in order to read 8 bits? I have seen timing diagrams for row and column strobes. Are these valid for one bit? That would have a high timing overhead for multiple bits then, wouldn't it? AI: Not usually. To read 8 bits, normal practice is to read one bit from each of 8 separate DRAMs. However, if you were forced by cost or power considerations to use a single device, DRAMs of that era provided both burst mode and page mode, which allow you to provide the column number of the first bit you need, then automatically access adjacent bits in succeeding cycles - in page mode, up to all 256 bits in the currently open row. (64k DRAMs are well over 25 years old! Where exactly is this question being dug up from - is this an archaeology question?) The details of page and burst modes differ, and when L1/L2 caches became universal, burst modes evolved to address entire cache lines, wrapping the column address round by modular arithmetic rather than strictly ascending. Page mode also allowed convenient shortcuts to designers of video cards of that timeframe (25 to maybe 15 years ago). However, in newer DRAM designs (the first DDR generation), page mode has quietly been dropped, leaving burst modes, so you may have to address every eighth column individually at precisely the right time if you need larger groups of adjacent locations. This makes life more difficult if you're using DRAM without a cache/CPU combination, for example in FPGA applications.
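For concreteness, the row/column split for a 64K x 1 part looks like this. This is a sketch only; real parts multiplex the two halves onto the same 8 address pins, latched by /RAS and then /CAS:

```python
# Split a 16-bit address into the row and column bytes that would be
# presented on the multiplexed address bus of a 64K x 1 DRAM.
def split_address(addr):
    row = (addr >> 8) & 0xFF  # latched on the falling edge of /RAS
    col = addr & 0xFF         # latched on the falling edge of /CAS
    return row, col

# in page mode, one row stays open while successive columns are read
print([split_address(a) for a in (0x1200, 0x1201, 0x1207)])
# [(18, 0), (18, 1), (18, 7)] - same row, so no /RAS re-strobe needed
```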
H: Perfboard question I am still confused about working with perfboard and solder, but I want to ask this question: when adding two electronic elements to a perfboard in series, should the start point of element 2 be in the same horizontal line as the end point of element 1, or in the same vertical line? AI: The answer really depends on the type of perfboard you are using - I assume that the perfboard you are using is the type known as stripboard, with metal strips connecting each hole in a row to easily connect components, like this If it is like this, simply put the "end" of "element 1" and the "start" of "element 2" on the same row (where the metal connects them). However, you could be using perfboard which does not have metal bridging each hole, like this In this case, just put the "end" of one component and the "start" of another next to each other, and bridge the two with solder.
H: How does the internal resistance of an oscilloscope compare to that of a DMM? Both are instruments that measure voltage, so what is the difference between a multimeter and an oscilloscope in terms of their internal resistance? AI: Oscilloscopes tend to have standard input resistances, such as 1 Mohm, in order to be compatible with as many probes as possible that allow a 10:1 attenuation (little switch on the probe). DMMs usually have as high an input resistance as possible given their overvoltage protection, which manufacturers usually set around 10 Mohm (but it can be much higher for high-end devices).
H: From DTFT to DFT, why do we take N samples in the frequency domain? In this post: Why does frequency equal k/K where k=0,…, K-1 in Discrete Fourier Transform? And in several books I have read, the authors take N samples of the DTFT to get the DFT. Why N? I think N is simply the number of samples in the time domain, right? In the frequency domain, can we choose some other number of samples to get the DFT? Thank you very much indeed. AI: As mentioned in the comments, \$N\$ typically represents the number of time-domain samples to be transformed. "In the frequency domain, can we choose some other number of samples to get the DFT?" Think about this in terms of information. I'll talk about systems where the time-domain signal is real-valued. Then in the time domain, you have \$N\$ real numbers. That's all the information you have about your signal. In the frequency domain, with \$N\$ samples you end up with \$N-1\$ complex numbers and 1 real number (because the zero-frequency bin always has a real value). So some of this information is redundant, and in fact you will find there are DFT techniques that calculate only the positive frequency bins and discard the negative frequency bins to avoid storing redundant information. If you chose to represent your frequency-domain signal with \$M\$ values, with \$M < \frac{N}{2}\$, you'd be throwing away information. Your frequency-domain signal wouldn't contain all the information in the time-domain signal. If you chose to represent your frequency-domain signal with \$M\$ independent values, with \$M > \frac{N}{2}\$, you'd be generating and storing redundant information. Your frequency-domain signal would contain more information than the time-domain signal, so some of this information would have to be created by some arbitrary choice, and it wouldn't tell you anything new about your actual signal. Note, there are some signal-processing techniques that do effectively use \$M > \frac{N}{2}\$.
For example, if you "zero-pad" your time-domain signal before taking the DFT, you will obtain a smoothed frequency domain signal with extra (non-informative) bins between the usual bins, giving the appearance of greater frequency resolution. This can be useful for visualizing a complicated spectrum, but it doesn't actually produce any new information about the signal.
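The zero-padding point can be demonstrated directly. A small pure-Python DFT (no external libraries) shows that padding inserts interpolated bins but leaves the original bins untouched, i.e. no new information appears:

```python
import math, cmath

def dft(x, m=None):
    """Naive DFT of x, zero-padded to m points (m defaults to len(x))."""
    m = m or len(x)
    x = list(x) + [0.0] * (m - len(x))
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / m) for n in range(m))
            for k in range(m)]

N = 32
x = [math.cos(2 * math.pi * 4 * t / N) for t in range(N)]  # exactly 4 cycles

X = dft(x)             # N bins
X_pad = dft(x, 4 * N)  # 4N bins: a smoother-looking spectrum

# every original bin k reappears exactly at bin 4k of the padded transform
same = all(abs(a - b) < 1e-9 for a, b in zip(X, X_pad[::4]))
print(same)  # True
```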
H: What type of connector is this? Can anyone identify this connector type? It sticks out of my wall near the television area. I suspect it may be some kind of antenna connection, but I'm not sure. I could not find it in any of the antenna connector listings. AI: From the look of the wires, I'd say that they are a speaker cable, and the gold pin is just to make it easier to insert the wire ends into binding posts or the wire clips that many speakers have. Are there similar wires (with or without the gold pin) elsewhere in the room (or elsewhere in the house)?
H: SiGe/GaAs RF MMIC as photodiode preamp Could you suggest if it's possible to effectively use a SiGe/GaAs RF MMIC (which has 50 ohm input impedance) as the first stage of a photodiode amplifier? The idea is to reverse bias the diode from some +50V supply and connect it directly to the MMIC input, so that the photodiode is loaded by the MMIC's internal 50 ohm termination. 50 ohm might be quite a low value though. I am mostly worried about the noise figure; the frequency range is 1-100 MHz (maybe even 1-4 GHz???). The final target is to detect individual photons (with some probability) or dozens of photons (photodiode QE is 30-70%). I understand that typically one would need a transimpedance amplifier for this job, but using a common off-the-shelf SiGe/GaAs part in my dreams might yield lower noise... PS. I understand that silicon photomultipliers / APDs might be more useful here, but they are prohibitively expensive. PPS: Parts I look at are: Mini-Circuits MAR-6 (2 GHz, 3 dB NF), Mini-Circuits PSA4-5043+ (4 GHz, <1 dB NF) AI: I'm afraid the simple answer is no. Even assuming 100% QE, one photon produces one electron. What sort of current does this imply? Well, one ampere is one coulomb per second, and the electron charge is $$e = 1.6 \times 10^{-19}\ \text{C}$$ Applying Ohm's Law is not straightforward here, since detecting a single photon is clearly a short-term event. But let's say that, since the amplifier bandwidth is 100 MHz, the arrival of an electron can be considered a current pulse with a width of 10 nsec.
Then $$V = iR = \frac{\Delta Q}{\Delta t}\times R = \frac{1.6 \times 10^{-19}}{10^{-8}}\times 50 = 8\times 10^{-10} \text{ volts}$$ While this is obviously pretty small, you need to compare it to the amplifier noise, and assuming the amp is Johnson-noise limited with an effective temperature of 300 K, $$V_n = \sqrt{4k_{B}RT\times BW} = \sqrt{4\times1.38\times10^{-23}\times300\times50\times10^{8}} = \sqrt{8.28\times 10^{-11}} = 9\times 10^{-6}\text{ volts}$$ In this calculation, the expected signal is about 10,000 times less than the rms noise voltage, and detection would be, shall we say, challenging. This ignores shot noise, which makes things worse, but this seems perfectly reasonable in view of the already-unfavorable situation.
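The two estimates above, reproduced in code (same numbers as the answer, with the temperature in kelvin):

```python
import math

# Single-photon signal vs. Johnson noise in a 50 ohm, 100 MHz system.
q = 1.602e-19    # electron charge, C
k_B = 1.381e-23  # Boltzmann constant, J/K
R, T, BW = 50.0, 300.0, 100e6

v_signal = (q / 10e-9) * R                 # one electron spread over a ~10 ns pulse
v_noise = math.sqrt(4 * k_B * T * R * BW)  # rms Johnson noise in the bandwidth

print(f"{v_signal:.1e} {v_noise:.1e}")  # 8.0e-10 9.1e-06
print(round(v_noise / v_signal))        # noise exceeds signal by roughly 1e4
```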
H: Is capacitor reactance [sometimes] defined with negative sign? Wikipedia currently claims so but I've looked in 6 books via Google Books and it's not defined like that, i.e. it's just $$ X_c = \frac{1}{\omega C} = \frac{1}{2\pi f C} $$ Is Wikipedia full of nonsense on this, is that just a fringe def, or somehow all six books I've checked via GB just happen to contradict that and some EE bible actually defines it with a minus sign like that? Wikipedia cites one book and one unverifiable website; I can't access that book right now. The ones I've checked: 1 2 3 4 5 6. Note that depending on your Google luck you may not be able to see all of these. And I've checked the 3rd ed. of the Art of Electronics by H&H; it also gives it the positive way (on p. 42). I was actually able to verify a newer edition of the textbook cited in Wikipedia, and indeed it defines it that way with a negative sign. So I'm guessing it's one of those egg-end issues. Still I'm curious if there are any EE standards (IEC etc.) that take a stance on this. Perhaps someone knows... I've accepted Adam's answer as good enough (and I've fixed Wikipedia too), however if someone knows more about IEC, IEEE or whatever other standard bodies might have said about this, please contribute... And from the Wikiality department, that article has changed quite a few times it seems; back in March it gave the positive definition. AI: The impedance of a capacitor is given by the formula: $$Z_C = \frac 1 {j \omega C} = \frac 1 {j 2 \pi f C}$$ where \$j = \sqrt{-1}\$. It takes a bit of algebra to get the negative sign: $$\frac 1 j = \frac j j \cdot \frac 1 j = \frac j {j^2} = \frac j {-1} = -j$$ $$Z_C = \frac 1 j \cdot \frac 1 {\omega C} = \frac {-j} {\omega C}$$ The reactance is the imaginary part of the impedance, so you could say that it's: $$X_C = Im\{Z_C\} = -\frac {1} {\omega C}$$ If you want to combine series inductors and capacitors into a single equivalent reactance, the sign matters. 
But what the \$-j\$ really represents is a -90 degree phase shift between the capacitor's voltage and current (current leads voltage): (source) If you want to talk about the magnitude and phase shift effects of the reactance separately, then you can drop the negative sign: $$Z_C = \frac 1 {\omega C} \angle -90^\circ$$ $$X_C = |Z_C| = \frac 1 {\omega C}$$ I wouldn't say either of them is wrong. They're different ways of simplifying to avoid complex numbers. Any simplification will be right at some times and wrong at other times. You need complex numbers to get the full picture, but that's a lot of math for a college freshman or the general public. So introductory books often deal with magnitude and phase effects separately. Your citations are good examples of this. The first book gives the positive reactance but then tells you to combine inductance and capacitance like this: $$\text{Resultant reactance} = X_L - X_C = 2 \pi f L - \frac 1 {2 \pi f C}$$ The second book gives the positive formula and describes phase shifts in the next paragraph. The third book (Electronics for Dummies) is a deliberate simplification. The fourth book describes the phase shift in terms of phasor diagrams on the next page. The fifth book mentions phase shifts in the box below the definition, but says that the book omits inductors entirely. The sixth book describes phase shifts on the page after the definition.
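Python's built-in complex type makes the distinction concrete: the impedance has a negative imaginary part (the reactance), while its magnitude is the positive 1/(ωC):

```python
import cmath, math

def z_capacitor(f, c):
    # complex impedance of a capacitor: 1 / (j * 2*pi*f * C)
    return 1 / (1j * 2 * math.pi * f * c)

z = z_capacitor(1e3, 1e-6)  # 1 uF at 1 kHz

print(round(z.imag, 3))                        # -159.155 (reactance is negative)
print(round(abs(z), 3))                        # 159.155 (magnitude is 1/(wC))
print(round(math.degrees(cmath.phase(z)), 1))  # -90.0 (current leads voltage)
```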
H: Method of generating a random binary number using gates I was wondering how a computer or any other device can generate random numbers. Is there a way to do this using only digital parts(non-programmable parts such as gates)? When I think of it, it seems like there has to be an "analog side" to generate random numbers. How can one make a multi-bit random number? AI: See this article Uncertain Circuits: When transistor 1 and transistor 2 are switched on, a coupled pair of inverters force Node A and Node B into the same state [left]. When the clock pulse rises [yellow, right], these transistors are turned off. Initially the output of both inverters falls into an indeterminate state, but random thermal noise within the inverters soon jostles one node into the logical 1 state and the other goes to logical 0. Also see the white paper (Respawned Fluff note: This is for an older Intel method, using two free-running oscillators, not the one described above.)
H: Op Amp Integrator Circuit - Why is there a linear current and voltage for a short time before exponential values? Consider the following integrator circuit: Why don't the current through the capacitor and the voltage across it behave exponentially? There is a straight section in the beginning, up to around 2ms - why is that? Current through capacitor: Voltage across capacitor: Edit: It seems like somehow the output voltage is limiting the capacitor voltage. I can't seem to understand why though... AI: The op-amp tries to keep both of its inputs at the same voltage. Since the positive input is fixed at ground, the inverting input must be at 0V too. The inputs of an op-amp are very high impedance, so we can assume that no current can flow in or out of them. The only way the current can flow is into the output of the op-amp. Since V1 is connected to the inverting input through a 1K resistor, the current through that 1K resistor has to be 5mA to hold the inverting input at 0V; as such, the op-amp output becomes a 5mA current sink, charging the capacitor with a constant current. When you connect a capacitor to a constant current source, the voltage is linear with time. So this explains the linear portion of the graph. After that, the op-amp can no longer provide an output negative enough (the supply rails are probably +/-15V) to sustain the constant current, so the current decreases and the voltage graph flattens out.
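The constant-current argument gives the ramp slope directly: dV/dt = I/C. A quick calculation with the 1 k input resistor and an assumed 5 V input and 1 uF capacitor (the question doesn't state the values):

```python
# Ramp slope of the integrator: the virtual ground fixes I = V1/R through
# the input resistor, and that current charges C linearly (dV/dt = I/C).
V1 = 5.0   # input step, volts (assumed)
R = 1e3    # input resistor, ohms (from the schematic)
C = 1e-6   # integrating capacitor, farads (assumed)

I = V1 / R        # constant charging current
slope = I / C     # volts per second across the capacitor

t_saturate = 15.0 / slope  # time to ramp to an assumed 15 V rail
print(I, round(slope), round(t_saturate, 4))  # 0.005 5000 0.003
```

The ramp ends (a few ms in, matching the plots) exactly when the output hits the rail and can no longer sink the constant current.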
H: 3 watt LED plus LiFePO4 battery (1 cell), no resistor, no regulator. Is there anything wrong with this? I read that you should "always" have a resistor in series between an LED and its power source. But what if the power source already has its voltage set to a suitable max for that particular LED? Why would you need a resistor then? So here's what I want to do: power a 3 watt LED with a single-cell LiFePO4 battery. The only other things in the circuit would be a switch and maybe a fuse (mostly to protect the battery). The LED charts I'm looking at show the LED comfortable and productive in the voltage range between 3.3 and 3.5 volts. This is perfect for a LiFePO4 battery with 1 cell. My charger is a Turnigy Accucell 6, which can be set to always charge the battery to 3.5 volts, or any other voltage I tell it to that is reasonable for LiFePO4. I can get a good chunk of the total energy capacity of the battery just running it from 3.5 down to 3.3 volts, which is the range where the LED works OK, without over-current or over-voltage (according to the mfr charts for the LED). For a straw-horse LED, we could look at this one, which has nice charts available here: http://www.rapidonline.com/pdf/170197_da_en_01.pdf In this pdf, page 15 has the chart for Vf vs If, with 1 amp being the max If allowed, this happening at 3.5 volts. Will this work OK - as long as I am careful to never let the battery voltage go higher than what produces the max allowable current in the LED? AI: This will work only if you are able to maintain the junction temperature at 25C. Normally this is not the case: the forward voltage of the LED will drop as it warms up. This will lead to an increase in the current, the current increase will accelerate the rise of the temperature, and this "snowball effect" may destroy your LEDs. Check page 7 of the document you added as reference; it is written that the thermal coefficient is around -2 to -4mV/degC.
Another interesting aspect you have to watch is the range of the forward voltage (page 7 as well): it goes from 3.19V up to 3.99V. That means you would have to find a chip with exactly the 3.5V forward voltage. If the chip you receive is at one of the extremes, you may have an overcurrent or no current at all under your proposal.
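The thermal-runaway risk described above can be illustrated with a crude fixed-point iteration. All numbers here are illustrative assumptions: the -3 mV/°C tempco sits inside the datasheet's -2 to -4 mV/°C range, but the current slope and thermal resistance are invented for the sketch.

```python
# Hedged sketch of the "snowball effect" with a hard voltage source.
# All parameter values are illustrative assumptions, not datasheet figures.
def settle_current(v_supply=3.5, i0=1.0, vf0=3.5, tempco=-3e-3,
                   di_dv=3.0, r_th=20.0, t_amb=25.0, steps=50):
    """Iterate temperature -> Vf shift -> current until it settles (or runs away)."""
    i = i0
    for _ in range(steps):
        t_j = t_amb + v_supply * i * r_th   # junction temp rises with dissipation
        vf = vf0 + tempco * (t_j - 25.0)    # forward voltage falls as it heats
        i = i0 + di_dv * (v_supply - vf)    # fixed supply voltage -> current rises
    return i

print(settle_current())            # ≈ 2.7 A: current nearly triples at the same 3.5 V
print(settle_current(r_th=40.0))   # huge: with worse heatsinking the loop gain
                                   # exceeds 1 and the current runs away
```

Even when the loop settles, the current far exceeds the 1 A maximum; with poorer heatsinking it diverges, which is exactly why a series resistor or current regulator is normally insisted upon.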
H: Low level sine to pulse converter with sharp rising-edges I have two circuit simulations: Circuit1 and Circuit2, which are shown in the above schematics. For the sake of simplicity I drew them in the same schematic. They share the same 12V voltage source and the same signal input VG1. VF1 is the output for Circuit1, VF2 is the output for Circuit2, and VF3 outputs the input signal VG1. The Zener diode in Circuit1 is a 6.8V Zener. Both circuits use the LM2903 comparator. Circuit2 is the one I had before, and it could only convert sine inputs of more than 60mV peak to peak. Following a suggestion from my previous question, I simulated Circuit1, which can now convert sines of around 6mV. That was great, but when I plotted the results I found out the comparator's pulse outputs are not sharp for mV-level inputs. Here is an output for a 6mV sine: As you can see, VF2 is hopeless. VF1 can handle a 6mV sine, but the pulses are not sharp. Another thing I observed for an 80mV sine input is that Circuit2's output is not in phase with the input signal, but Circuit1's output is. Circuit2, on the other hand, produces sharp pulses, unlike Circuit1. Here is the plot: And finally, for a 1V sine input, both pulses are quite sharp and both are in phase with the input signal. The plot becomes as follows: It seems that for minimum phase difference and minimum sine inputs it is better to use Circuit1. But then, for mV-level sine inputs, I'm not happy with the sharpness of the output pulses. My questions are as follows: 1-) What would be a quick fix for Circuit1's sharp-pulse problem without using another op-amp? 2-) If another op-amp is definitely needed, should a Schmitt trigger be added to the output of Circuit1? A suggested circuit (waiting to be edited): AI: I would not bother trying a "quick fix". I would instead try the following circuit simulate this circuit – Schematic created using CircuitLab Basically, it requires the addition of 1 op amp and two diodes.
I've shown a TL081 as the op amp, and this will work in simulation down to 1 mV or less, but it eventually runs into input offset voltage problems. If you can find an op amp with much less input offset, you can do even better. For a real circuit, get an op amp which includes an offset adjust pin, and use this to produce an op amp output of zero with zero input. You'll also note that the comparator output is inverted with respect to the input, but that is easily remedied in logic. EDITED TO ADD - And in the name of all that's holy, get rid of the 1N2804. It's a 50-watt zener, and is operating far below its recommended current level. A 1N4735, as shown, has a 6.2 volt rating and appropriate current levels.
H: Can one tri-state pin drive two SS pins? This is how I see it: State H: SS out is high, deselecting SS 2, turning off Q2 and on Q1, which grounds SS 1, selecting it. State L: SS out is low, turning off Q1 and grounding SS 2 through Q2. SS 1 is deselected by R1. State Z: SS out is high impedance. R1 & R3 keep the SSs high. ARGH EDIT: R4 keeps Q1 off. R6 may keep Q2 off. IDK simulate this circuit – Schematic created using CircuitLab Am I wrong? Its purpose is for when GPIO pins are at a premium. It's a 3V3 system. AI: My concern is that the low level produced by the PNP will be just under \$0.7\mathrm{V}\$ because of the required \$V_{be}\$ for the transistor to be on. For \$5\mathrm{V}\$ CMOS, the low level input voltage is usually at most \$0.8\mathrm{V}\$ (for \$3.3\mathrm{V}\$ CMOS it will be a little lower), which means you are right on the edge. Personally I wouldn't be happy to run that close to the edge. In fact, now that I have had a go simulating the circuit, I'm not convinced it will work as planned when floating. Q2 won't turn off properly unless you add a pull up resistor - but in order to be strong enough to work, it would cause Q1 to turn on. There is a way I have done this in the past, but it requires two comparators. This isn't too bad as a dual-comparator package is only 8 pins and would take up about the same space as your discrete transistors. Basically the approach is to turn the input into ternary - you have equal pull-up and pull-down resistors so that when floating the input voltage will be about half the power rail. You then have a comparator for each output. For the first device the comparator is set up so that it outputs a low only when the voltage is less than one third of the power supply. For the second device you output a low when the voltage is above two-thirds of the power supply. It will require 5 resistors and 1 dual-comparator. The circuit is as follows: The above can be simulated here.
It's simulated for \$5\mathrm{V}\$, but the circuit would be identical for \$3.3\mathrm{V}\$. Essentially the top comparator will be low only when the input is driven high. The bottom comparator will be low only when the input is driven low. If the input floats, both comparator outputs will be high. This is almost a ternary-to-binary converter circuit - not one strictly speaking, as you need the outputs 01, 10, 11, whereas ternary would be 00, 10 (or 01), 11 - but it's essentially the same thing, just with one bit inverted (hence one comparator is the other way up). If the comparators are open-drain, which many are, you will also need a pull-up resistor on each output. This shouldn't cause a problem, as you will get a nice strong logic 0.
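The comparator logic described above amounts to this truth table (3.3 V rail from the question; thresholds at one-third and two-thirds of the supply, as in the answer):

```python
# Truth-table sketch of the dual-comparator tri-state decoder from the answer.
def comparator_outputs(v_in, v_supply=3.3):
    """Return (top, bottom) comparator outputs; a 0 asserts that SS line low."""
    top = 0 if v_in > 2 * v_supply / 3 else 1      # low only when driven high
    bottom = 0 if v_in < v_supply / 3 else 1       # low only when driven low
    return top, bottom

print(comparator_outputs(3.3))   # (0, 1): pin driven high -> first SS asserted
print(comparator_outputs(0.0))   # (1, 0): pin driven low -> second SS asserted
print(comparator_outputs(1.65))  # (1, 1): floating at mid-rail -> neither selected
```

The floating case works because the equal pull-up/pull-down divider parks the pin near half the rail, safely between the two thresholds.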
H: Guitar Pedal not functioning at all, no sound created Ok, so I'll do my best to explain my situation. (FYI, I am a beginner) I'm building an overdrive pedal for a school project which I plan on modding with the help from a teacher. However, I thought I did all the steps correctly but when I plug it in, it just kills all the sound (no sound comes from the amp). Is it possibly a problem with my wiring or my circuit? I'm attempting to follow the instructions for the pedal from this website: http://www.instructables.com/id/Overdrive-Pedal/?ALLSTEPS And here are some pictures of my project: All help is appreciated and if someone thinks that they can help me and needs more pictures and details, please contact me. Thanks to you all! AI: From your updated picture of the back of your PCB it seems as if no component is connected to any other component. This will obviously not make a circuit. Your problem is that you are not using the same type of PCB as they do in the guide you are following. You are using what is often called a perfboard where all the pads are isolated. They are using a very special type of stripboard (which I can not link to because the RadioShack website is seriously broken), but here is a picture of a similar type: Note the difference between this and the type you have chosen. If you want to read more about the different types, you can find information on Wikipedia about stripboards Your board is not completely ruined. Assuming you have wired everything correctly, you can add the missing connections by soldering short pieces of metal in exactly the same place as the RadioShack board would connect them. Hopefully you saved all the pins that you cut off from the resistors - they will come in handy now!
H: Digital Circuits Analysis Does anyone understand where 2.5V, 10V, and 5V (highlighted in yellow) are coming from? I know that Vcc = 5V, and the two resistors R1 and R2 are in parallel, so Req = 0.5 kOhm, and the voltage is divided by 2, which makes it 2.5V. How did they find IE = 2.59 mA? Also, if you can explain the obtained KVL too, it would be great. I can see it but still cannot wrap my head around it. Thanks AI: First of all, when Q4 is ON and Q3 is OFF (so it is ideally an open circuit), we can use the Thevenin equivalent at the emitter node of Q1, which gives you VTh = 2.5V and RTh = 0.5k, under the assumption that R1 is connected to 5V. Now, applying KVL around the input loop from the 5V input to the ground node, we have 5 - (10k) Ib - Vbe - (0.5k) Ie - 2.5 = 0. Rearranging, 1.8V = (10k) Ib + (0.5k) Ie, but Ie = (1 + beta) Ib, so 1.8V = [(10k/51) + 0.5k] Ie, which gives Ie = 1.8/0.696k = 2.586 mA (milliamps, not amps). Edit: regarding the 10V, 2.5V and 5V confusion, think of it like this: we have a resistive divider with the 5V supply at R1, which is in series with R2, giving us 2.5 volts at the emitter node. The 10V used in the circuit seems vague, as the supply itself is 5V, so that appears to be wrong.
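The KVL solution can be checked numerically; beta = 50 and Vbe = 0.7 V are the values implied by the (1 + beta) = 51 factor and the 1.8 V figure in the answer:

```python
# Numeric check of the KVL result above (beta = 50, Vbe = 0.7 V, as implied
# by the (1 + beta) = 51 factor and the 1.8 V remainder in the answer).
def emitter_current(v_in=5.0, v_th=2.5, r_b=10e3, r_th=0.5e3, v_be=0.7, beta=50):
    """Solve v_in - Ib*r_b - v_be - Ie*r_th - v_th = 0 with Ie = (1+beta)*Ib."""
    return (v_in - v_be - v_th) / (r_b / (beta + 1) + r_th)

print(emitter_current())  # ≈ 2.59e-3 A, i.e. 2.59 mA
```

Substituting Ib = Ie/(1+beta) turns the loop equation into a single-unknown division, which is exactly the [(10k/51) + 0.5k] step in the answer.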
H: How do I determine the courtyard for a component footprint? I'm using KiCad to lay out PCBs for some projects and often I find I need to develop a new footprint or tweak one that's in the library. I'm able to determine most things I need by consulting the datasheet and occasional other sources, but I haven't found anything about how to determine the dimensions and positioning of the courtyard. I understand the courtyard is the minimal "reserved" space for the component, such that no two courtyards should overlap. However, I don't see it stated on any of the datasheets so far. All of my projects so far will be hand-assembled (by me :), so I won't be needing it for pick-and-place machine purposes. However, being an engineer I don't feel comfortable not getting it right while I'm in there editing something. Also I will be submitting any worthy new footprints for inclusion in the official KiCad library, so I want to make sure I get it right. How does one determine the courtyard for a PCB part footprint? AI: The best answer is probably to follow IPC-7351 3.1.5.4. There are some tables that give guidelines as to the excess to leave around a component or pad, whichever is bigger- maybe 0.1 to 0.5mm in most (not all) cases. The standard specifically states that the courtyard is the starting point for the minimum area for the component, and there may be a manufacturing allowance that is dependent on factors that are not part of the standard. I've linked the working draft standard (2008) above, direct from the ipc.org website. The actual standard costs money so any copies you might find floating around probably represent copyright infringement.
H: IP Core Generator fails with Error I'm working on a project using a Spartan-6. I created a FIFO with the IP Core Generator (New Source -> IP Core -> FIFO -> Generate). The LOG looks like this The IP Catalog has been reloaded. Qt: Untested Windows version 6.2 detected! INFO:sim:172 - Generating IP... Resolving generics for 'FIFONineBit'... Applying external generics to 'FIFONineBit'... Delivering associated files for 'FIFONineBit'... Delivering EJava files for 'FIFONineBit'... Generating implementation netlist for 'FIFONineBit'... INFO:sim - Pre-processing HDL files for 'FIFONineBit'... WARNING:sim - BlackBox generator run option '-ifmt' found multiple times. Only the first occurence is considered. Running synthesis for 'FIFONineBit' Running ngcbuild... Writing VHO instantiation template for 'FIFONineBit'... Writing VHDL behavioral simulation model for 'FIFONineBit'... WARNING:sim - Overwriting existing file C:/Users/Christian/Dropbox/workspace/masterarbeit/VHDL/vhdl_work_in_progress/DifferentialAttack/ipcore_dir/tmp/_cg/FIFONineBit/doc/fifo_generator_v9_3_vinfo.html with file from view xilinx_documentation Delivered 3 files into directory C:/Users/Christian/Dropbox/workspace/masterarbeit/VHDL/vhdl_work_in_progress/DifferentialAttack/ipcore_dir/tmp/_cg/FIFONineBit Delivered 1 file into directory C:/Users/Christian/Dropbox/workspace/masterarbeit/VHDL/vhdl_work_in_progress/DifferentialAttack/ipcore_dir/tmp/_cg/FIFONineBit Generating ASY schematic symbol... INFO:sim:949 - Finished generation of ASY schematic symbol. Generating SYM schematic symbol for 'FIFONineBit'... Generating metadata file... Generating ISE project file for 'FIFONineBit'... ERROR:sim - TclTasksC:project_006: Exception detected while executing [project set]. ERROR:sim - Failed executing Tcl generator. ERROR:sim - Failed to generate 'FIFONineBit'. Failed executing Tcl generator. Wrote CGP file for project 'FIFONineBit'. Core Generator create command failed.
I already created the project all over again, but it didn't help. Can someone help here? Update: If I create a FIFO with the IP Core Generator in standalone mode (thanks to Paebbels), it fails while translating with an error: ERROR:NgdBuild:604 - logical block 'GEN_FIFO[13].FIFONineBit_I' with type 'FIFONineBit' could not be resolved. A pin name misspelling can cause this, a missing edif or ngc file, case mismatch between the block name and the edif or ngc file name, or the misspelling of a type name. Symbol 'FIFONineBit' is not supported in target 'spartan6'. UPDATE2: I solved this on my own. I added the ipcore_dir directory to the Macro Search Path (-sd switch) within the Translate process properties. That worked out! Thanks for your help! AI: You have to add the ipcore directory to your search path! I had a problem like yours: HERE
H: Ethernet switch controller chip Recently I bought an ethernet switch controller based on the Realtek RTL8305NB chip. It was bought as a proof of concept for my robot. Unfortunately, the board is too big and I want to design my own 2-layer PCB (also to learn new stuff). I would like to use the same chip, but finding its datasheet is almost impossible (I even contacted manufacturer and few shops - no result). Does anybody know a similar chip? Why is the datasheet for RTL8305NB so hard to find? What I need: 10/100 MBPS, at least 3 ports, package for 2-layer board (not BGA). I am new to the ethernet technology and I will appreciate any help and piece of advice. AI: I have successfully used the Micrel KSZ8999. It may be a bit overkill, but there are numerous options Datasheet here Other sources are Microchip and Vitesse to name but two, although when I last used Vitesse, they were very NDA conscious. Your switch is layer 2 managed, which is available in the Micrel part. I have not looked farther than this, as it provides a solution.
H: unfamiliar gain equation of differential amplifier I am examining op amps and I found this pdf very helpful: http://www.aicdesign.org/SCNOTES/2010notes/Lect2UP230_(100327).pdf In this tutorial, the op amp circuit topology is given as this: and the gains of the stages of the op amp are given like this: This is the first time I have seen an equation like this. I thought the (DC) gain of the first stage was \$gm_{2} * r_{ds2} // r_{ds4}\$, so the tutorial's form seemed really wrong to me. Can anyone explain why the gain is represented that way in this tutorial? AI: gds = 1/rds, so conductances add up when you have parallel resistances: G = 1/R, and R1 || R2 = 1/(G1 + G2). Thus rds2 || rds4 = 1/(gds2 + gds4). So the gain of the first stage is Av = gm2 * (rds2 || rds4) = gm2 / (gds2 + gds4). Since M1 and M2 are a differential pair, it's reasonable to assume that their transconductances are almost the same (gm1 = gm2). Part 2: Regarding the expression in terms of current, the drain resistance (rds) is due to channel-length modulation and is given by rds = 1/(lambda * Id), or gds = lambda * Id. So, replacing gds, we get Av = gm2 / [lambda * (Id2 + Id4)].
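A quick numeric sanity check that the two forms of the gain are identical (the component values are arbitrary, not from the lecture notes):

```python
# Check that gm2 * (rds2 || rds4) equals gm2 / (gds2 + gds4).
# The values below are arbitrary illustrative numbers.
gm2 = 1e-3                   # transconductance, S
rds2, rds4 = 200e3, 150e3    # output resistances, ohms

parallel = (rds2 * rds4) / (rds2 + rds4)   # rds2 || rds4
gain_a = gm2 * parallel                    # the questioner's form
gain_b = gm2 / (1 / rds2 + 1 / rds4)       # the tutorial's form, gds = 1/rds
print(gain_a, gain_b)                      # both ≈ 85.7, identical by algebra
```

Both expressions give the same number because the conductance form is just the parallel-resistance formula rewritten.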
H: Are MOSFETs better for high power applications than the BJT? I was reading that the MOSFET is more optimal for high power situations at low to medium supply voltages than the BJT. However, I'm questioning this, because both MOSFETs and BJTs waste little to no power while off. MOSFETs are more prone to damage due to static electricity breaking down the gate-source oxide layer. Is it true that BJTs are better at high junction temperatures because they don't have an oxide layer? Can anyone confirm whether or not this is true? AI: As usual, rules of thumb and knee-jerk answers can be misleading, especially when forgetting the constraints in the original guidelines that were then dumbed down to make "rules" for the non-thinking. FETs are voltage-controlled, and BJTs are current-controlled. This alone leads to a whole set of tradeoffs between the two devices, having nothing to do with operating voltage, current, or power. Both devices are capable of handling about the same power. Power dissipation is mostly a function of the package, and both devices are available in similar packages. The advantages for voltage versus current control are not as simple and one-sided as others here would have you believe. The voltage control of FETs requires essentially no power to keep them in a particular state, but that ignores both the control circuitry and the fact that changing the state often is necessary in many applications. A FET gate looks mostly like a capacitor to the driving circuit, so it takes current to change the voltage. That, together with a typical 12 V gate swing over the full on/off range, can lead to significant current and power. For example, let's say the total effective gate charge is 50 nC, and the FET is switched at 100 kHz (every 10 µs). That comes out to 5 mA at 12 V, or 60 mW. That's the same total control power into the device as a BJT with 80 mA drive at 750 mV.
There are other concerns beyond these for driving FETs and BJTs, but I'm trying to point out that it's nowhere near as simple as "FETs take no power to drive". In linear applications, the more predictable B-E voltage of a BJT can be advantageous over the D-S voltage of FETs. Constructs like emitter-follower generally have better characteristics than FET source-followers. Since BJT are both current in and out devices, they can be cascaded in ways that don't apply to FETs, like darlington pairs, or combined NPN-PNP devices. Another advantage of BJTs is the much lower voltage required to control them. You can control high current and high voltage BJTs with typical logic level voltages (3.3-5 V), which isn't possible with FETs. Of course the voltage control and even the larger voltage control range of FETs can be advantages too. I'm not trying to make it sound like BJTs are better, just trying to point out some ways they can be more advantageous since the knee jerkers here seem to have decided FETs are "better" in broad classes of applications. FETs and BJTs are fundamentally different, so there are going to be various applications where one provides advantages over the other. High current switching with low to medium voltage is one example where FETs are often used despite the generally more complex drive circuitry. This is because power FETs look like a low resistance when on, which can be 10s to single mΩ depending on how much money you are willing to spend. BJTs on the other hand look like a fixed voltage of maybe 200 mV to several times that, depending on how hard they are being pushed. At 10 A, for example, a 20 mΩ FET will have 200 mV drop, whereas a BJT will probably drop 2 to 3 times that. FETs can also be more easily paralleled in high power applications because their on resistance goes up with temperature, unlike the BJT saturation voltage, which goes down with temperature. 
For both BJTs and FETs, other characteristics become less desirable as the maximum voltage goes up. However, this happens more slowly with BJTs, so that above a few hundred volts, BJTs start looking like a good deal for power switching. In fact, this has given rise to the IGBT, which is a FET and BJT working together. The FET is used to turn on the BJT, so it doesn't need to handle as much current. The BJT then does the heavy lifting of switching the current and dissipating the power. Again, different devices will have different tradeoffs, and devices as complex as transistors don't fall neatly into simple categories that lend themselves to rules of thumb. There is no substitute for actually understanding what is going on, then weighing the tradeoffs for your particular application carefully to decide what parts to use.
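The gate-drive arithmetic from the example above (50 nC total gate charge, 100 kHz switching, 12 V swing) can be checked directly:

```python
# Reproducing the answer's gate-drive example: average drive current is the
# gate charge recharged once per switching period, and drive power is that
# current times the gate swing. Figures are taken from the answer above.
def gate_drive(q_gate=50e-9, f_sw=100e3, v_drive=12.0):
    i_avg = q_gate * f_sw        # average current to recharge the gate each cycle
    p_drive = i_avg * v_drive    # power delivered by the gate driver
    return i_avg, p_drive

i, p = gate_drive()
print(i, p)  # 5 mA and 60 mW, matching the answer
```

This is why "FETs take no power to drive" only holds at DC: at high switching frequencies the Q·f·V term dominates.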
H: Accelerometer on doors and windows We have made a mistake. We created this PCB with a very good accelerometer (ADXL345, datasheet here); it sleeps and wakes up on an acceleration event. In our case we set the activity threshold to almost the minimum possible, 124 mg. It works great for almost any touch or movement you apply to it by hand - if it's on a table, even a tiny move will wake it. But when it's mounted on a door or window, if you don't open the door quickly enough it just won't wake up. We have tried everything, including taking the sensitivity to maximum. It seems to need an acceleration change that just doesn't occur during an average door opening. Is there anything we can do to improve this, not including migrating to other sensors? We have a free GPIO connected to an interrupt; we can add some part there that will sense this tiny movement. Is there anything like a tilt sensor that will do it? Thanks. AI: The first thing to do is to mount the accelerometer as far out on the swinging end (not the hinge end) of the door as possible. Then, of course, the accelerometer axis must be aligned perpendicular to the door. High pass filtering may help, but that requires leaving power on. Ultimately, you may not be able to get what you want within the constraints you have imposed on yourself. The obvious answer is to use a device appropriate for this application. Not all things are possible just because you can write a spec for them. "I bought a bushel of tomatoes instead of a lug wrench, and now I'm on the side of the road with a flat tire. How do I use these tomatoes to get the tire off. No, I insist on not walking to the hardware store around the corner to get a lug wrench."
H: How to record electrical signal through PC sound card as an audio file? I am doing some study on the frequency of the electrical supply. I need to do the analysis in MATLAB. I want to record the electrical signal as an audio file on my laptop. As I am a beginner working with 220V mains, I am afraid of busting my sound card. I need expert advice on how to develop my circuit and what type of protective measures should be taken. So far my idea is to use a transformer of 220/12 volts and then a voltage divider circuit. But I don't actually know what the voltage limit of a PC sound card's line-in is. To what extent should I attenuate my signal? AI: xoscope, an open source oscilloscope software which is able to use the sound card as input, has a schematic for a probe buffer. I'm not sure if you need it though; it really depends on the analysis you plan to perform. For a simple frequency measurement it would definitely be overkill, but if you're going to do spectrum analysis, I'd take a look at it. Note that this probe buffer shares earth with the circuit under test, so you will still need galvanic isolation of some sort between mains and your probe. A transformer is one option. If you want to avoid transformers, go for a USB sound card + isolated hub.
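The questioner's transformer-plus-divider attenuation chain can be sketched as a calculation. The ~1 Vrms line-in target and the divider values below are assumptions for illustration; check the actual sound card's specification before building anything.

```python
# Hedged sketch of the attenuation chain: 220 V mains -> 12 V transformer ->
# resistive divider down to a line-in-safe level. The ~1 Vrms target and the
# 110k/10k divider are assumptions; verify your card's real input limit.
def divider_output(v_in_rms, r_top, r_bottom):
    """RMS output of an unloaded resistive divider."""
    return v_in_rms * r_bottom / (r_top + r_bottom)

v_secondary = 220.0 * (12.0 / 220.0)   # ideal 220:12 transformer -> 12 Vrms
v_line_in = divider_output(v_secondary, 110e3, 10e3)
print(v_line_in)  # 1.0 Vrms with the assumed divider
```

High divider resistances like these also limit fault current into the sound card, but the transformer remains the essential isolation barrier.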
H: Calculate VA for a given wattage How do I calculate VA given a wattage, say for example 300 watts? AI: $$VA = \frac{Watts}{PF}$$ where VA = volt-amps, and PF = power factor. You'll have to find the power factor for the device yourself, as you have given so little information.
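A worked example of the formula, using the 300 W from the question and an assumed power factor of 0.8:

```python
# Apparent power from real power and power factor (PF = 0.8 is an assumption;
# the question does not state one).
def apparent_power(watts, pf):
    return watts / pf

print(apparent_power(300, 0.8))  # 375.0 VA
```

A purely resistive load (PF = 1.0) would need exactly 300 VA; the lower the power factor, the larger the VA rating required.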
H: Assistance with CMOS simulation It's been a very long time since I took intro level EE courses. I'm working on a hobby project and as a refresher I figured I would simulate a CMOS inverter. For the life of me I can't figure out why the simulation doesn't produce the expected results. I believe I've faithfully reproduced the circuit from Wikipedia. I've wired it so the drains of the two MOSFETs are tied together. The source of the P-MOSFET is tied to the positive voltage. The source of the N-MOSFET is tied to ground. I ran a simulation that stepped the gate voltage from 0V DC to 1.5V DC. The spec sheet for the MOSFETs say that their threshold voltage is in the range of 0.3V DC to 0.8V DC. The simulation showed pretty much no change in the output voltage. I re-ran the simulation from 0V DC to 12V DC and it produced the graph below. Why is it that the output voltage doesn't change rapidly in the range of the threshold voltage of the MOSFETs? I was expecting it to drop from 12V DC to 0V DC very rapidly after the gate voltage exceeded 0.3V DC. The Circuit Vout, Vmax as a Function of Vgate Over the Range 0V DC to 12V DC Vmax is the green line Vout is the black line Vgate is the blue line AI: I'm just going answer one part of your question that I don't think anybody has answered head-on yet. Why is it that the output voltage doesn't change rapidly in the range of the threshold voltage of the MOSFETs? Say you have 1.5 V on Vgate. Now the NMOS is fully turned on, because you've exceeded the threshold voltage. But what's Vgs of the PMOS? It's still -10.5 V, also far in excess of the threshold voltage. You have to drive Vgate up to near 11 V before the PMOS Vgs gets near its threshold voltage.
H: How to implement programmable Schmitt trigger? I am trying to build a Schmitt trigger that would allow adjusting High and Low threshold values with a microcontroller. simulate this circuit – Schematic created using CircuitLab I could replace R1-R2 with a DAC, but how do I adjust the feedback resistor R3? Is the trigger going to work properly if I replace R3 with a digital potentiometer? AI: Completely replacing R1-R2 with a DAC may give you mixed results because the circuit saves a part by relying on the voltage divider's parallel impedance R1||R2. You'll need to ensure that the DAC has the same output impedance as seen from R3. simulate this circuit – Schematic created using CircuitLab That will adjust the threshold while keeping the hysteresis constant. To adjust the hysteresis, you can now replace R1||R2 and R3 with a digipot.
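A sketch of the threshold arithmetic for the divider-plus-feedback network (R1 pull-up, R2 pull-down, R3 feedback to the non-inverting input), assuming a push-pull comparator output that swings 0..Vcc and negligible comparator input current. The component values are illustrative:

```python
# Hedged sketch: switching thresholds of the R1/R2 divider with R3 feedback,
# assuming a rail-to-rail (push-pull) comparator output. Values illustrative.
def thresholds(vcc, r1, r2, r3):
    """Return (VH, VL), the reference-node voltages for output high/low."""
    g1, g2, g3 = 1 / r1, 1 / r2, 1 / r3

    def node(v_out):
        # Nodal analysis at V+: contributions from Vcc via R1 and Vout via R3.
        return (vcc * g1 + v_out * g3) / (g1 + g2 + g3)

    return node(vcc), node(0.0)   # output high -> VH, output low -> VL

vh, vl = thresholds(5.0, 10e3, 10e3, 100e3)
print(vh, vl)  # a hysteresis band centred on Vcc/2
```

With a DAC replacing R1/R2 and a digipot for R3, the same two node equations tell you what centre voltage and hysteresis width any given setting produces.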
H: Analyzing a circuit using simple voltage division instead of Thevinin's theorem I can use Thevinin's theorem to simplify R1 and R2, and then use the voltage division rule to get the output. However, I want to do it without using Thevinin's theorem. How to analyze the circuit intuitively, perhaps using some extension of the voltage division rule itself? simulate this circuit – Schematic created using CircuitLab AI: What is intuitive varies from person to person, but you can use the formula for resistors in series and parallel. The voltage at the unlabeled node between R1 and R2 can be calculated by a voltage division using the combination of R2||(R3+R4), where || denotes resistors in parallel. When you know the voltage at the intermediate node, you can use voltage division again using R3 and R4.
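The answer's two-step division can be written out directly. The resistor values below are illustrative, since the schematic's values aren't stated in the text:

```python
# Two successive voltage divisions, as described in the answer:
# first V_in across R1 vs R2 || (R3 + R4), then the mid-node across R3 vs R4.
# Resistor values are illustrative assumptions.
def vout(v_in, r1, r2, r3, r4):
    def par(a, b):
        return a * b / (a + b)          # resistors in parallel
    v_mid = v_in * par(r2, r3 + r4) / (r1 + par(r2, r3 + r4))
    return v_mid * r4 / (r3 + r4)       # second divider

print(vout(10.0, 1e3, 1e3, 1e3, 1e3))   # 2.0 V for four equal 1k resistors
```

The key step is loading the first divider with R3 + R4 in parallel with R2; forgetting that loading is the usual mistake when chaining dividers.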
H: Magnetometer - there is no low current feature? I have gone over all the magnetometers available, most of them using I2C and connected to some MCU, and I couldn't find even one with a threshold interrupt. That means the processor can't go to sleep and always has to listen to what is going on; there is no interrupt-on-change option in any of them. Is there something special about them such that none of them have an interrupt when a threshold is met? LSM303 MAG3110 HMC5883L How can you build battery operated devices that sense motion when you always have to listen? Accelerometers (which can't detect slow movement) always have some interrupt, but magnetometers do not. Any help/replacement suggestion about this issue would be helpful. AI: Accelerometers (which can't detect slow movement) always have some interrupt, but magnetometers do not. You can try to detect slow movements (turns) by sampling the magnetometer at a low frequency (e.g. once per second or per minute). This should not consume much energy, even for a battery operated device.
H: What is the name/type for this soldering iron tip? I got an old soldering iron and I'm trying to find a replacement tip, but I don't know what to search for. When I search "soldering iron tip" it only shows the Hakko-style tips. Can someone help me identify this tip? AI: There are a few of these available from an eBay seller: http://www.ebay.com/itm/Ungar-Soldering-Iron-Tips-9015-X2-/281868809328 It says they are for Ungar Models 9911, 9911as, 9370, 9380 and UTC 200 series irons. Here is another reference: http://www.waveroomplus.com/Vanier-Soldering-Tip-U610--Equivalent-To-WellerUngar-9015_p_2934.html
H: Testing parallel IGBT module A single NPN IGBT chip can usually be tested with a 9V battery between gate and emitter. However, opening up a larger module reveals many IGBT chips in parallel. I've seen some inverters using several of these in parallel again - together maybe 200 IGBT chips in parallel for a single phase. Consider the scenario where only some of the IGBTs have blown - be it 199/200 or 5/200 - and assume no visible damage. Is there any way of measuring whether this is the case? The 9V battery test would behave as if no problem were present, and opening up the module for inspection ruins the module. simulate this circuit – Schematic created using CircuitLab AI: There are tests you can perform to gauge the "health" of a module. Essentially you need to characterise a number of healthy modules. The minimum number of detectable failed devices essentially comes down to device tolerances and measurement accuracy. What you need to consider is what types of failures you are looking for: Shorted gate-emitter: can be detected via G-E impedance. Shorted collector-emitter: can be detected via C-E impedance. Open gate-emitter (bondwire, damage...): can be detected by a change in characteristics. Etc... There are four characteristics worth determining for the complete module: \$C_{iss}\$ \$C_{rss}\$ \$C_{oss}\$ forward voltage drop. An IGBT die datasheet will usually provide the small signal \$C_{iss}\$, \$C_{rss}\$, \$C_{oss}\$. You cannot just take these values and multiply them by 200, as there will be additional stray capacitance within the module under test. Once these values are measured & captured for a number of healthy modules, you will be able to determine a spread. The number of failed devices you can possibly detect then comes down to the deviation. The loss of one device may still fall within the possible healthy range. 2? 3? Once you have some empirical evidence you can come up with a detectable number.
The final method is to mount the module on a heatsink and heat the heatsink to a given temperature (50C, 75C). A significant temperature above ambient to mitigate change in ambient when testing on different days. Gate ON the IGBT stacks (9V or the correct 15V) but ensure you measure the gate voltage with reasonable accuracy (not a 9V battery when it is really supplying 8V...). Then with a high current source with a resistive shunt to measure the forward current accurately, apply this source Collector --> Emitter. Measure the Collector->Emitter voltage. The actual current to be applied should be say... 50% of the maximum device current. With the loss of one or many devices this \$V_{ce}\$ will rise.
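The "characterise healthy modules, then look for deviation" idea above can be sketched as a simple outlier test. Everything here - the parameter chosen, the numbers, and the 3-sigma rule - is an illustrative assumption, not a qualified test procedure:

```python
# Illustrative outlier test for module health: compare a measured parameter
# (e.g. total Ciss) against the spread of known-healthy modules.
# All values below are made up for the sketch.
def is_suspect(measured, healthy_samples, n_sigma=3.0):
    """Flag a measurement that falls outside n_sigma of the healthy spread."""
    n = len(healthy_samples)
    mean = sum(healthy_samples) / n
    var = sum((x - mean) ** 2 for x in healthy_samples) / (n - 1)
    return abs(measured - mean) > n_sigma * var ** 0.5

healthy_ciss_nf = [412.0, 408.5, 415.2, 410.9, 413.1]   # made-up baseline, nF
print(is_suspect(409.0, healthy_ciss_nf))  # False: within the healthy spread
print(is_suspect(390.0, healthy_ciss_nf))  # True: e.g. open gate bond wires
```

As the answer notes, the loss of one die out of 200 may sit inside the healthy spread; how many failures you can actually detect depends on how tight that baseline turns out to be.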
H: Possibly Incorrect Current Reading I'm not entirely sure if I have done the following equation correct or not, but based on the calculations, my multimeter result seems to be incorrect. I have a very simple circuit, containing a 9V battery, an LED (with 3.2 V forward voltage at 24 mA), and a 270 ohm resistor (as shown below). simulate this circuit – Schematic created using CircuitLab My question is, when I do the calculations to determine what the current should be in the circuit, I do (voltage / resistance = I), so I get (I = 5.8V / 270 = 0.0214 Amps). This is 21 mA, approximately. When I insert my meter into the circuit, (inserting the meter between + and the LED, breaking the circuit and properly measuring amperage) I obtain a reading of about 2.1 mA. Did I do the calculation wrong? I am using a Craftsman 82344 autoranging multimeter. Also, when I insert the multimeter into the circuit, the LED no longer illuminates as it should (at least according to the articles I have studied). Any help would be appreciated, if any more details are needed let me know. AI: Since your circuit has a known resistance in it, you can avoid the meter burden voltage issue by calculating the current from a measurement of the voltage drop across the resistor measured with your meter in voltmeter mode, rather than by inserting the meter into the circuit in ammeter mode. Of course, if your resistor doesn't have the value you think it does (manufacturing tolerance or human error in reading the markings) that determination will be invalid, but you can remove the battery and use your meter in ohm mode on the unpowered circuit to measure the resistor.
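The answer's workaround amounts to a one-line Ohm's-law calculation, using the 5.8 V resistor drop from the question's own figures:

```python
# Infer the LED current from the voltage drop measured across the known
# 270 ohm resistor, instead of breaking the loop with the ammeter
# (which adds burden voltage).
def current_from_drop(v_resistor, r=270.0):
    return v_resistor / r

print(current_from_drop(5.8))  # ≈ 0.0215 A, the ~21 mA from the hand calculation
```

A voltmeter loads the circuit far less than an ammeter's shunt does, so this reading reflects the LED's true operating current.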
H: Which instrument for arbitrary signal commutation (matrix) I would like to test a bunch of sensors. Each sensor has a DB15 connector with mixed digital and analog signals (bw ~5MHz, Z ~120R). I am looking for an instrument where I can control which pair of sensors to interconnect to the measurement PC. I've seen commutation switches such as the Keithley 3706A, but these do not accept a custom commutation board where I can populate my own connectors and my own relays. So I am thinking of making my own commutation system with an MCU, an FTDI and a bunch of analog switches and relays. I feel this is a very common need in the testing area and I would like to know what commercial solution can be used in my case. AI: If you can get away with analogue switches you should but, if you can't, I've designed pretty much what you might need, using Pickering reed relays (type 107): - Who cares about reinventing the wheel - wheels work so use 'em. You can stack 'em up close - I had about 96 differential input channels that needed testing on a piece of equipment - we used the relays for sequencing through and injecting signals. They are good for very low level signals (wetting current problems etc.). And, they do a changeover version I've used on a later bit of equipment. They also (and here's the purchasing recommendation!) do modules that are addressable.
H: Currents confusion in a JFET datasheet I am analyzing a JFET. The datasheet gives both the current Igss and a maximum value for Ig. I know that Igss is the reverse current in the PN junction, but I can't explain to myself what kind of current Ig is and when it flows. AI: The maximum value (Ig) applies when operating (or misoperating, more precisely) the gate of an N channel JFET with a positive gate voltage with respect to source - Vgs of an N channel JFET should normally reside between 0V and some negative value (like -10V). Between these values is where the JFET controls the drain current/load. It's basically like an old triode tube/valve - you took the gate negative to control anode current. Putting ancient history/folklore to one side, if you have Vgs at 0V you have a fully conducting channel i.e. maximum drain current and, if you took Vgs more positive, you'll find there's a parasitic diode that starts conducting; hence the Ig limit is 10mA but, while the gate is held at 0V or less, the gate current is on the order of nanoamps.
H: What is the point of converting everything to NAND/NOR and how do you do it right? The title pretty much says it all. I know that A' + B' = (AB)' is the basic transformation needed to do so (at least for NAND gates), but whenever I apply this I feel like I'm doing it wrong. For example: C' + AB' + A'BD' Here's what I did: I took - C' + AB' - and made it into - (CA'B)' - which reduced the problem to: (CA'B)' + A'BD' Which further reduced down to: ((CA'B)')'(AB'D)' Is this the right way to do this? Also, why is this form sometimes wanted? It seems more complicated than the original form. AI: The most convenient form of amplifier for use in a gate - because it has high input impedance and useful voltage gain - happens to be inverting. This is true whether it is a BJT (common emitter) or FET (common source) amplifier. Thus a gate formed of a single amplifier stage MUST have an inverting output - that means it can implement any of NAND, NOR, or NOT. (There are a very few exceptions, like ECL, whose lack of gain makes them very intolerant of voltage variations.) So if you look at an AND gate - or an OR - you will find a NAND followed by an inverter - or a NOR + inverter. That makes AND not only more expensive and power-hungry than NAND, but also slower. The fact that any combinational boolean expression can be rendered into sum-of-product form (AND then OR), and trivially transformed into NAND-NAND form simply by inverting all the intermediate signals (using DeMorgan to implement the OR function with NAND gates), makes a network of NAND gates an incredibly attractive way of implementing it. (Ditto Product-of-Sums, using only NOR gates). simulate this circuit – Schematic created using CircuitLab This shows how AND and OR gates can be implemented using either NAND or NOR technologies (Exhibits A and B). It also shows how a simple expression in SOP form (A AND B) OR C would be implemented if you simply used AND and OR gates formed from NAND blocks.
Hopefully it's obvious that all you need to do is delete pairs of inverters to arrive at the final NAND circuit. The result uses only 2 levels of gain instead of 4 if you used AND/OR, so for the price of a little extra thought, your logic is twice as fast.
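The DeMorgan transformation the answer relies on is easy to check exhaustively; here is a minimal sketch (the helper names are mine) verifying that the example expression (A AND B) OR C equals its two-level NAND-only form over every input combination:

```python
from itertools import product

def nand(x, y):
    return not (x and y)

def sop(a, b, c):        # original sum-of-products form: (A AND B) OR C
    return (a and b) or c

def nand_nand(a, b, c):  # same expression built only from NAND gates
    # DeMorgan: (A AND B) OR C == NAND(NAND(A, B), NAND(C, C))
    return nand(nand(a, b), nand(c, c))

# Exhaustive truth-table comparison over all 8 input combinations
assert all(sop(*v) == nand_nand(*v) for v in product([False, True], repeat=3))
```

The same exhaustive check is a handy way to test any hand-converted NAND/NOR network before committing it to gates.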
H: How to Convert 12Volt DC Automotive Battery Charger to 1.5V I'm not an electrical engineer of any sort but was wondering if someone wouldn't mind sharing how I could choke a common AC to DC 12 Volt automotive battery charger down to 1.5V output while retaining the amperage that one of these battery chargers would normally output. A friend suggested to place an inline resistor (on the positive lead I assume) in series to choke it down to 1.5V but I don't know what size resistor to install inline and if anyone wouldn't mind sharing the formula (and / or the component required in this case) I'd be most appreciative and thank you all in advance. Thanks, Stuart Kaufman AI: If you're doing electroplating, then you'll be more concerned with the current into the bath instead of the charger's output voltage, per se, so what you'll need to do will be to adjust the current into the bath until it gets to the current needed for the job. The rub there is that since you can't adjust the charger's output voltage, you have to use some kind of a ballast to drop whatever the charger's voltage is down to whatever's needed to get the current into the tank to be what it has to be for the plating job. If it turns out that you need 1.5 volts into the tank to get, say, 10 amperes through the work, then if your charger's putting out 14 volts the ballast will have to drop 12.5 volts (the charger's output voltage minus the voltage into the tank) at 10 amperes. From Ohm's law, that ballast will have to have a resistance of: $$ R = \frac{E}{I} = \frac{12.5V}{10A} = 1.25\text{ ohms} $$ and it'll have to dissipate: $$P = IE = 12.5V\times10A = 125 \text{ watts} $$ That's a pretty hefty/pricey resistor, and the worst part is that if you need different currents for different jobs, you'll need more than just the one resistor, which is pretty nasty. 
UPDATE: From your comments, I understand your charger claims it can output either 6 volts at 2 amperes, 12 volts at 2 amperes, or 12 volts at 6 amperes. That's not quite right, because the charger has to output a voltage higher than the battery voltage in order to charge it, so at full charge the 6 volt battery will be at somewhere around 7 volts and the 12 volt battery at around 14. So with that in mind, along with your wish to provide 1.5 volts into the tank, if we model your setup it'll look something like the following, where \$V1\$ is the output voltage from the charger, \$V2\$ is the input voltage to the tank, \$It\$ is the current into the tank, \$R1\$ is the resistance of the ballast, and \$R2\$ is the resistance of the tank (the load). Now, using Ohm's law, we can do a couple of things, the first being to determine the tank's resistance if we know the voltage across it, \$V2\$, and the current through it, \$It\$. Then, assuming \$V2\$ is 1.5 volts and \$It\$ is 2 amperes, we can state: $$ R2=\frac{V2}{It} = \frac{1.5V}{2A} = 0.75 \text{ ohms} $$ The second thing we can do is determine the value of \$R1\$ using Ohm's law as well. Since in a series circuit the current is everywhere the same, the current through \$R1\$ will be \$It\$. The voltage across \$R1\$, however, will be the difference between the charger's output voltage, \$V1\$, and \$V2\$, so to get the value of \$R1\$ we write: $$ R1 = \frac{V1 - V2}{It} = \frac{7V - 1.5V}{2A} = 2.75 \text{ ohms} $$ Finally, to determine the power R1 will dissipate, we write: $$ P = (V1 - V2)It = (7V - 1.5V) \times 2A = 11 \text{ watts} $$ Now, some caveats: The first is that because the conductivity of the electrolyte can vary all over the place, the assumption that 1.5 volts across the bath will always be OK is a dangerous one. What should be done is that the current through the bath be monitored and the voltage varied in order to maintain the current at the desired point.
The second is not to assume anything about the charger. It would be a very good idea to get ahold of a schematic and find out what's going on in there before you spend a lot of money on resistors or any kind of current-limiting scheme. The third is to always calculate the ballast's dissipation, determine its temperature rise, and pay attention to the manufacturers' derating curves. The fourth is to realize that while the procedures I've shown you will work in any situation, the numerical values I used were only examples. That is, you should measure everything in your system in order to get meaningful data out.
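The ballast arithmetic above generalizes to a small helper; this is just a sketch reproducing the worked numbers (the function name is mine):

```python
def ballast(v_charger, v_tank, i_tank):
    """Series ballast resistance and its dissipation for a fixed-voltage source
    feeding a plating tank at a desired voltage and current."""
    v_drop = v_charger - v_tank  # voltage the ballast must drop
    r = v_drop / i_tank          # Ohm's law: R = E / I
    p = v_drop * i_tank          # power dissipated in the ballast: P = E * I
    return r, p

r1, p1 = ballast(7.0, 1.5, 2.0)
print(r1, "ohms,", p1, "watts")  # 2.75 ohms, 11.0 watts, as in the worked example
```

Running it with the earlier 14 V / 10 A example gives 1.25 ohms and 125 watts, matching the first calculation in the answer.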
H: FRDM-KL25z assembly delay loop causes reset I'm currently working on a project using the FRDM-KL25Z development board and programming using Keil MDK-lite (5.14a). What we're supposed to do is create a simple traffic light using three corresponding LEDs and a push button to expedite the light change (not immediately like a real traffic light but to check after each delay). The problem I'm having is my program works perfectly fine in the simulator but when running on the development board it resets itself almost immediately. I've narrowed it down to the loops I'm using for a 30 second delay. Any tips to find out why this is happening or how to find more information in the debugger would be great. Here's a cutout of the loop I'm using. reset LDR R1, =0x00000002 ;Change light to red BL changelight ; LDR R3, =0x00000011 ;Put value into counter (1 loop just to show code works) d30_1 BL buttonpress ;Check for button press SUBS R3, #17 ;Subtract # of ticks in loop (17) from counter CMP R3, #0 BGT d30_1 CMP R6, #1 ;Check for button press BEQ reset ;Reset to red if pressed LDR R1, =0x00000010 ;Change light to green BL changelight ; LDR R3, =0x05B8d800 ;Put value into counter (5 seconds, the board resets when counter is this high) d30_2 BL buttonpress ;Check for button press SUBS R3, #17 ;Subtract # of ticks in loop (17) from counter CMP R3, #0 BGT d30_2 ... Here is the branch buttonpress buttonpress LDR R0, =0x400FF090 ;Put address of PORTC_PDIR into R0 LDR R1, [R0] ;Put value of PORTC_PDIR into R1 LDR R0, =0x00000080 ;Put value of monitored input pin TST R1, R0 ;Check for button press BNE nopress ;Break from process if button not pressed MOVS R6, #1 ;Put 1 in R6 if button has been pressed nopress BX LR AI: I checked the Reset Control Module (RCM) while debugging. The cause of the reset was the Computer Operating Properly (COP) watchdog timer timing out. I added the following to my initialization to solve the problem. Thank you guys!
;Disable watchdog COP timer LDR R0, =SIM_COPC ;Load address of SIM_COPC to R0 LDR R1, =0x0 ;Disable watchdog COP timer STR R1, [R0] ;
H: Is Variable synthesizable (VHDL)? What I know is that a variable has no hardware representation. So does it come under the category of non-synthesizable code in VHDL? AI: How is a VHDL variable synthesized by synthesis tools has some useful information, though it's not exactly a duplicate of your question IMO. Just because something doesn't have a direct hardware representation doesn't mean it's not synthesizable. When writing VHDL (and Verilog) for synthesis you typically write program code that defines what happens on each clock cycle. You can also write code that takes input signals and produces outputs from them as soon as any input changes, without involving a clock. The synthesis tool turns that block of code into a combination of logic operations (gates, multiplexers, adders, multipliers, etc., which will later be mapped onto target-specific resources) and registers. Logically that block of code can be thought of as completing "instantly". How does it do that? Well, first off, any loops are fully unrolled (you will get errors if you have loops with a number of iterations that can't be statically determined). Some variables (i.e. loop counters) are likely to disappear at this point. Then the code is turned into a data-flow form. The variables at this point are essentially labels telling the compiler what output feeds into what input. Each variable will become one or more signals representing its value at different points in the code. "if" statements and similar become multiplexers selecting which version of a signal should feed into later logic and/or registers. If there is a path through the code where a given variable is read before being set, and/or a path where the variable is not set at all, then the final value of the variable from one pass through the code must be fed to the input of the next. How exactly that happens depends on when the code is run.
If the code runs on a clock edge then a clocked register will be created to feed the final value from one iteration to the initial value for the next. This is fine; the tools can handle it just like any other register. If the code is not clock triggered then the output value will be fed directly back to the input, creating what is essentially a "transparent latch". This is usually a bad thing! Transparent latches are very sensitive to input glitches and are difficult for timing analysers to handle. While most synthesis tools will synthesize this, the results may well not match simulation results and may be unpredictable.
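The transparent-latch behavior described above can be sketched with a small behavioral model (this is a Python model of the inferred hardware for illustration, not HDL; the class name is mine): when a non-clocked process assigns a signal only on some paths, the unassigned path feeds the previous value back, so the output follows the input only while the enable condition is true and holds otherwise.

```python
class TransparentLatch:
    """Behavioral model of the latch a synthesizer infers when a
    combinational (non-clocked) process assigns a value only on some paths."""
    def __init__(self):
        self.q = False  # stored value, fed back from the previous evaluation

    def evaluate(self, enable, d):
        if enable:
            self.q = d  # path where the output is assigned: latch is "transparent"
        # else: no assignment on this path -> previous value feeds back (latch holds)
        return self.q

latch = TransparentLatch()
print(latch.evaluate(True, True))    # enabled: output follows input -> True
print(latch.evaluate(False, False))  # disabled: output holds last value -> True
```

In clocked code the same read-before-write pattern infers a flip-flop instead, which is why putting such logic inside a clocked process is the usual fix.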
H: Nixie Tube Voltage Safety I have seen clock designs like this one: where the nixie tube leads are not fully enclosed. There is likely 145-170V on these lines. I am just wondering if it's safe to allow the leads to be exposed and what sort of consequences there might be if someone accidentally touches them. AI: That sort of exposed circuitry is not a real good idea. That said, as long as the HV is supplied by an isolated source such as a transformer, it's probably not a usually-lethal threat. The reason is that the HV is carried on conductors which are close together, and the most likely accident will involve a current path through the fingers of one hand. At 170 volts, this will be painful, and possibly result in burns, but there will be no path through something critical like the heart. Of course, there's always the possibility of grabbing the board with both hands and getting unlucky. The problem with making things foolproof is that fools can be so very clever.
H: Unity gain op amp gives static, incorrect output I have the circuit shown here: where I'm using a switching regulator (not the linear 78xx shown) to provide me with 5V to power an Arduino and a little more circuitry that the Arduino is monitoring. The Arduino is measuring (and logging) the voltage at A - ie I'm logging the input power supply voltage (which varies ~12.7 > 14V according to the solar charging the 12V batteries). Without the op amp (ie measuring the voltage directly from the potential divider [and with resistor values ~1/2 of that shown]) the voltage logged is as expected. I want the unity gain op amp so that I can increase the resistor values to reduce the current drawn. However with the op amp in place the output (A) is a static ~3.74V. I measured this directly at a time when the voltage to the +ve op amp input was ~3.85V, and I also know from reviewing the logging that the voltage on the output of the op amp was static throughout the previous day*. I'm probably making some rookie mistakes, I'm new to the idea of an op-amp as a buffer - in this case I'm using an LM358 (datasheet pdf) as it's what I have available - I'm very open to suggestions of a more appropriate amplifier. While I could get away without it in this case, I do want to minimise the current drawn here, and I will also be logging the solar voltage (~70V) so will definitely want > 10k resistors in that potential divider and will therefore need a buffer. *the voltage is logged whenever the scaled measured voltage varies by 0.05V (ie battery voltage goes 13.1 -> 13.15). Without the op amp this logged ~1000 entries in a day. With the op amp it logged half a dozen, and these varied only within ~12.4-12.5V (despite the solar MPPT charger reporting the day max of 14V). AI: You are powering your LM358 from a 5V power rail: it only has a 5V power rail, so the input common-mode range is from 0V to Vcc - 1.5V, i.e. 0V to 3.5V. Your input is 12V * 10/32 = 3.75 volts. The output range is about 0V to Vcc - 2V, i.e. 0V to 3V. Read the data sheet and don't expect miracles.
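The numbers in the answer can be tied together in a quick sanity check (the limits are the answer's reading of the LM358 datasheet; the helper names are mine):

```python
def divider_out(v_in, r_bottom, r_total):
    """Output of a resistive potential divider."""
    return v_in * r_bottom / r_total

vcc = 5.0
cm_max = vcc - 1.5   # LM358 input common-mode ceiling, per the answer
out_max = vcc - 2.0  # approximate LM358 output-swing ceiling, per the answer

v_div = divider_out(12.0, 10, 32)  # 3.75 V at the op amp's + input
print(v_div > cm_max)   # True: 3.75 V exceeds the 3.5 V common-mode limit
print(v_div > out_max)  # True: the output can't reach 3.75 V either
```

Either limit alone explains the stuck ~3.74 V reading: the buffer is being asked to pass a voltage it can neither accept nor reproduce on a 5V rail, so the fix is a higher supply rail, a rail-to-rail op amp, or a divider ratio that keeps the signal well below the limits.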
H: Normally open or normally closed OptoMOS In the following OptoMOS datasheet the OptoMOS is defined as normally closed: when there is no input, the output is closed. But in other devices like the SFH618A-2 I can't find whether the part is NO or NC. AI: The SFH 618A is turned on by LED current - the symbol of the device shows a phototransistor, and this can only be turned on by light. A good clue is that this device talks about CTR, i.e. current transfer ratio, i.e. more LED current means more collector current through the phototransistor. The OptoMOS device appears to be a solid state relay but designed so that light turns the device off.
H: Modern jack-of-all-trades cheap OPAMPs? I am trying to design my first "big" project, and I have to choose the OPAMPs to use. The problem is that I am feeling overwhelmed by all the possibilities. I feel tempted to just stick with the LM741 or LM358, but after reading around (i.e. there http://www.electro-tech-online.com/threads/lM741-versus-lM386.91454/), I see that a lot of people say that there are modern opamps available at approximately the same price with simply better specs. All of this to say: which are the modern jack-of-all-trades cheap opamps that can work in a lot of different applications? I am asking this here because all I have found is about audio, not about electronics in general. I need this information for a signal conditioning circuit. I have to work in a bandwidth similar to audio (0-40kHz), and I need low offset (around 5-7 mV) and low noise (around 5mV at 10kHz) (by the way, is this really "low" noise or is it just normal noise?). I have to drive an ADC (so no need for high currents). If you need more information please let me know. Thanks in advance to anyone that will answer! Edit1: Changed the "low offset" to 5-7 mV and rephrased the "low power" requirement since it was misleading. Also, after reading all your answers, I am starting to think that the TL054 or maybe the TL052 would work nicely in my project (since I need to use 2 unity-gain buffers and 2 unity-gain inverters). AI: "All of this to say: which are the modern jack-of-all-trades cheap opamps that can work in a lot of different applications?" I would say: there is none! Although all opamps are in fact just opamps, you noticed that there are a lot of them, so there must be a reason for that. The reason is that they all have their own purpose and are designed with a certain range of applications in mind. You say you want a "jack-of-all-trades" but later you mention: "a signal conditioning circuit.
I have to work in a bandwidth similar to audio (0-40kHz), and I need low noise and low offset (around 1 mV), and also low power." So a "jack-of-all-trades" will not do! Low noise audio: I'm thinking NE5532. Low noise AND low power usually contradict each other, so you'll have to compromise; what is acceptable? Cheap also? Hmm, that makes it even more difficult. You will have to search for yourself. You can play with your requirements on Digikey or Farnell, sort on price and see what you get.